
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires intensive amounts of computational resources to support AI reasoning, which will drive further demand for accelerated computing.
What Is Pretraining Scaling?

Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
These three elements - data, model size and compute - are interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed more data, their overall performance improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
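The relationship can be sketched as a power law in parameter count and dataset size. The function below is a minimal illustration of that form; the constants are placeholder values loosely modeled on published scaling-law fits, not the figures from any specific paper.

```python
# Illustrative sketch of a pretraining scaling law: predicted loss falls as a
# power law in parameter count N and training-token count D.
# L(N, D) = E + A / N^alpha + B / D^beta
# All constants here are placeholders for illustration only.

def predicted_loss(n_params: float, n_tokens: float,
                   e: float = 1.69, a: float = 406.4, b: float = 410.7,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Estimate pretraining loss from model size and dataset size."""
    return e + a / n_params**alpha + b / n_tokens**beta

small = predicted_loss(1e9, 2e10)     # ~1B parameters, ~20B tokens
large = predicted_loss(7e10, 1.4e12)  # ~70B parameters, ~1.4T tokens
assert large < small  # scaling both data and parameters lowers predicted loss
```

The key property the law captures is that improvement is predictable: holding the functional form fixed, developers can estimate how much extra data and compute a target loss will require before training begins.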
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts, depicted above, is a popular model architecture for AI training.

What Is Post-Training Scaling?

Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and large datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model inputs and outputs.
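The mechanics can be shown with a toy model: start from "pretrained" weights and run a few gradient steps on a small domain-specific dataset. The one-parameter linear model below is a hypothetical stand-in; real fine-tuning updates billions of parameters with the same basic loop.

```python
# Toy sketch of fine-tuning: resume gradient descent from pretrained weights
# on new, domain-specific (input, output) pairs. Squared-error loss on a
# one-parameter linear model y = w * x.

def fine_tune(w: float, data: list[tuple[float, float]],
              lr: float = 0.1, epochs: int = 50) -> float:
    """Minimize (w*x - y)^2 over the data, starting from pretrained w."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of the squared error
            w -= lr * grad
    return w

pretrained_w = 1.0                       # generic weight from "pretraining"
domain_data = [(1.0, 2.0), (2.0, 4.0)]   # domain examples imply w = 2
tuned_w = fine_tune(pretrained_w, domain_data)
assert abs(tuned_w - 2.0) < 0.01
```

Starting from the pretrained value rather than from scratch is the whole point: the model retains its general skills while the new data pulls it toward the target domain.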
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
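Offline distillation can be sketched in a few lines: the frozen teacher's soft output distribution is the target, and the student's logits are pushed toward it by gradient descent on the KL divergence. The three-class example below is a toy; in practice both models are full networks and the loss is averaged over a dataset.

```python
import math

# Minimal sketch of offline distillation: the student learns to mimic the
# frozen teacher's output distribution by minimizing KL(p_teacher || q_student).

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher_probs = softmax([2.0, 0.5, -1.0])  # frozen teacher's soft labels
student_logits = [0.0, 0.0, 0.0]           # untrained student

for _ in range(300):
    q = softmax(student_logits)
    # gradient of KL(p || softmax(z)) with respect to z is (q - p)
    student_logits = [z - 1.0 * (qi - pi)
                      for z, qi, pi in zip(student_logits, q, teacher_probs)]

assert kl(teacher_probs, softmax(student_logits)) < 1e-4
```

Because the student trains against the teacher's full probability distribution rather than hard labels, it inherits the teacher's relative confidence across classes, which is where much of the compression benefit comes from.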
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
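The procedure itself is simple enough to sketch directly: draw several candidate outputs, score each with a reward model, and keep the best. The generator and reward model below are hypothetical stand-ins (the "reward model" just prefers shorter answers).

```python
# Sketch of best-of-n sampling: generate n candidates, score each with a
# reward model, return the highest-scoring one. No model weights change.

def best_of_n(generate, reward, n: int):
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=reward)

# Stand-in generator: yields pre-written candidates one at a time.
samples = iter(["a long and rambling candidate answer",
                "ok",
                "a moderately sized answer"])

pick = best_of_n(lambda: next(samples),
                 reward=lambda text: -len(text),  # hypothetical reward model
                 n=3)
assert pick == "ok"
```

Note that only inference compute grows with n; the underlying model is untouched, which is why best-of-n serves as a lightweight alternative to reinforcement-learning-based fine-tuning.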
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.
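One classic instance of such a search is beam search: expand several candidate sequences in parallel and keep only the top few by cumulative score at each step. The per-step scores below are a hypothetical stand-in for a model's log-probabilities.

```python
import math

# Toy beam search: at each step, extend every surviving sequence with every
# candidate token, then keep the `beam_width` sequences with the highest
# cumulative log-probability.

def beam_search(step_scores, beam_width: int = 2):
    """step_scores: one dict per step mapping token -> log-probability."""
    beams = [([], 0.0)]  # (token sequence, cumulative log-prob)
    for scores in step_scores:
        expanded = [(seq + [tok], lp + tok_lp)
                    for seq, lp in beams
                    for tok, tok_lp in scores.items()]
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]  # best full sequence

steps = [{"the": math.log(0.6), "a": math.log(0.4)},
         {"cat": math.log(0.5), "dog": math.log(0.5)}]
assert beam_search(steps)[0] == "the"
```

Widening the beam trades extra inference compute for a more thorough exploration of decision paths, which is the same compute-for-quality trade that test-time scaling generalizes.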