
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires intensive amounts of computational resources to support AI reasoning, which will drive further demand for accelerated computing.
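One concrete way to see why extra inference-time compute helps is self-consistency, a common test-time scaling technique: sample the model several times and majority-vote over the answers. The sketch below is a toy simulation, not a real model - the answer strings and the 0.6 per-pass accuracy are invented purely to illustrate the effect.

```python
import random
from collections import Counter

def sample_answer(rng, p_correct=0.6):
    """Simulate one inference pass: return the right answer with
    probability p_correct, otherwise one of several wrong answers."""
    if rng.random() < p_correct:
        return "42"
    return rng.choice(["41", "43", "7"])

def self_consistency(rng, n_samples):
    """Test-time scaling: run n inference passes and majority-vote."""
    votes = Counter(sample_answer(rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

def accuracy(n_samples, trials=2000, seed=0):
    """Estimate answer accuracy over many independent questions."""
    rng = random.Random(seed)
    return sum(self_consistency(rng, n_samples) == "42"
               for _ in range(trials)) / trials

print(accuracy(1))   # roughly the single-pass base rate
print(accuracy(15))  # majority voting over 15 passes scores much higher
```

The gain comes from the wrong answers being scattered while the right answer repeats, which is the same intuition behind voting over multiple reasoning chains - at the cost of 15x the inference compute per query.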
What Is Pretraining Scaling?
Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
These three elements - data, model size and compute - are interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed more data, their overall performance improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
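The article doesn't spell out the functional form, but one widely cited fit is the compute-optimal ("Chinchilla") power law of Hoffmann et al., where predicted loss decreases as parameters N and training tokens D grow. The sketch below uses that paper's approximate fitted constants; treat them as illustrative rather than predictive for any particular model.

```python
def chinchilla_loss(n_params, n_tokens,
                    E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Predicted pretraining loss L(N, D) = E + A/N^alpha + B/D^beta.
    Constants approximate the Hoffmann et al. fit; E is the irreducible
    loss floor, and the two power-law terms shrink as model size (N)
    and dataset size (D) grow."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling one axis alone hits diminishing returns; scaling both helps most.
small = chinchilla_loss(1e9, 20e9)    # 1B params, 20B tokens
wide  = chinchilla_loss(10e9, 20e9)   # 10x params, same data
both  = chinchilla_loss(10e9, 200e9)  # 10x params and 10x data
print(small, wide, both)
```

Note how the data term dominates once the model is scaled up without more tokens - exactly the interrelation between data, model size and compute described above.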
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts, depicted above, is a popular model architecture for AI training.

What Is Post-Training Scaling?
Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model inputs and outputs.
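The mechanics of fine-tuning can be sketched with a deliberately tiny stand-in: a "pretrained" linear model whose parameters are nudged by a few epochs of gradient descent on a handful of domain-specific (input, output) pairs. The model, data and learning rate here are all invented for illustration.

```python
def predict(w, b, x):
    """A one-parameter-pair 'model': y = w*x + b."""
    return w * x + b

def fine_tune(w, b, pairs, lr=0.05, epochs=200):
    """Fine-tuning sketch: start from 'pretrained' parameters and run
    gradient descent on squared error over a small set of (input, output)
    pairs, adapting the model toward the new domain."""
    for _ in range(epochs):
        for x, y in pairs:
            err = predict(w, b, x) - y
            w -= lr * err * x   # gradient of 0.5*err^2 wrt w is err*x
            b -= lr * err       # gradient wrt b is err
    return w, b

# "Pretrained" model maps x -> 2x; the new domain needs x -> 2x + 1.
w0, b0 = 2.0, 0.0
pairs = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w1, b1 = fine_tune(w0, b0, pairs)
print(w1, b1)  # converges toward w = 2, b = 1
```

Real fine-tuning updates billions of parameters with the same basic loop, which is why derivative models drive so much additional compute demand.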
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
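The offline distillation loop described above can be sketched in miniature: a frozen "teacher" produces temperature-softened class probabilities, and a small student is trained by gradient descent to match them. The teacher function, the symmetric two-logit student and all hyperparameters are invented for illustration; real distillation uses the same objective over full neural networks.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T gives softer targets."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def teacher_logits(x):
    """Hypothetical frozen teacher: a fixed 2-class scorer."""
    return [2.0 * x, -2.0 * x]

def distill(xs, T=2.0, lr=0.5, steps=500):
    """Offline distillation: fit student params (w, b) so the student's
    softened distribution matches the teacher's, by gradient descent on
    cross-entropy against the teacher's soft targets."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x in xs:
            target = softmax(teacher_logits(x), T)
            student = softmax([w * x + b, -(w * x + b)], T)
            g = (student[0] - target[0]) / T  # dCE/d(logit0)
            gw += 2 * g * x  # logits are +/-(w*x + b), hence the factor 2
            gb += 2 * g
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
w, b = distill(xs)
# The trained student reproduces the teacher's hard labels on every input.
agreement = sum(
    (softmax(teacher_logits(x))[0] > 0.5) ==
    (softmax([w * x + b, -(w * x + b)])[0] > 0.5)
    for x in xs)
print(w, b, agreement)
```

The student never sees ground-truth labels - only the teacher's outputs - which is what lets a lightweight model inherit behavior from a much larger one.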
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
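The thumbs-up feedback loop can be illustrated with the simplest RL setting, a multi-armed bandit: the agent chooses among candidate replies, receives a simulated thumbs-up reward, and updates its value estimates. The reply names and reward rates are invented; production RLHF instead trains a reward model from human preferences and optimizes the LLM's policy against it.

```python
import random

def run_bandit(trials=5000, epsilon=0.1, seed=0):
    """Minimal RL loop: pick a reply, observe a thumbs-up reward drawn
    from a hidden per-reply rate, and update the value estimate."""
    rng = random.Random(seed)
    thumbs_up_rate = {"reply_a": 0.2, "reply_b": 0.5, "reply_c": 0.8}
    actions = list(thumbs_up_rate)
    value = {a: 0.0 for a in actions}  # estimated reward per action
    count = {a: 0 for a in actions}
    for _ in range(trials):
        # epsilon-greedy: mostly exploit the best estimate, sometimes explore
        if rng.random() < epsilon:
            a = rng.choice(actions)
        else:
            a = max(actions, key=value.get)
        reward = 1.0 if rng.random() < thumbs_up_rate[a] else 0.0
        count[a] += 1
        value[a] += (reward - value[a]) / count[a]  # incremental mean
    return value

values = run_bandit()
print(values)  # the agent learns that "reply_c" earns the most rewards
```

Maximizing cumulative reward over many interactions, rather than fitting fixed labels, is what distinguishes RL-based post-training from supervised fine-tuning.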
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
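Best-of-n is simple enough to sketch end to end: draw n candidate outputs, score each with a reward model, and return the top scorer - no weights change. Both the "generator" and the "reward model" below are hypothetical stand-ins (a canned list of replies and a length-plus-politeness heuristic) in place of a real LLM and learned scorer.

```python
import random

def reward_model(text):
    """Hypothetical reward model: a heuristic that favors longer,
    more polite replies (stand-in for a learned preference scorer)."""
    score = len(text.split())
    if "please" in text.lower():
        score += 5
    return score

def generate(rng):
    """Stand-in for sampling one output from a language model."""
    return rng.choice([
        "Restart it.",
        "Try restarting the router.",
        "Please try restarting the router and waiting 30 seconds.",
    ])

def best_of_n(n, seed=0):
    """Best-of-n: sample n candidates, score each with the reward model,
    return the highest-scoring one. Model parameters are untouched."""
    rng = random.Random(seed)
    candidates = [generate(rng) for _ in range(n)]
    return max(candidates, key=reward_model)

print(best_of_n(8))
```

Because quality improves purely by spending more sampling compute at inference time, best-of-n also doubles as a test-time scaling technique.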
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.
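Beam search is a classic example of exploring multiple decision paths before committing: it keeps the k best partial sequences at each step instead of greedily taking the locally best token. The toy score table below is invented so that greedy decoding and beam search visibly diverge; a real system would get these scores from a language model.

```python
# Toy next-token log-scores conditioned on the previous token,
# constructed so the greedy first choice ("b") leads to a worse total.
NEXT = {
    "<s>": {"a": -1.0, "b": -0.5},
    "a":   {"x": -0.5, "y": -2.0},
    "b":   {"x": -3.0, "y": -3.5},
}

def greedy(length=2):
    """Pick the single best token at each step, never reconsidering."""
    seq, score, last = [], 0.0, "<s>"
    for _ in range(length):
        tok = max(NEXT[last], key=NEXT[last].get)
        score += NEXT[last][tok]
        seq.append(tok)
        last = tok
    return seq, score

def beam_search(length=2, width=2):
    """Keep the `width` best partial sequences at each step, then return
    the best complete one - exploring paths greedy decoding discards."""
    beams = [([], 0.0, "<s>")]
    for _ in range(length):
        candidates = []
        for seq, score, last in beams:
            for tok, s in NEXT[last].items():
                candidates.append((seq + [tok], score + s, tok))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:width]
    return max(beams, key=lambda b: b[1])[:2]

print(greedy())       # commits to "b" first and ends up with a worse total
print(beam_search())  # finds ["a", "x"], whose total score is higher
```

The same keep-several-paths idea underlies more elaborate search techniques such as tree search over reasoning steps, all of which trade extra compute for better final outputs.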