
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires intensive amounts of computational resources to support AI reasoning, which will drive further demand for accelerated computing.
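One simple form of test-time scaling is self-consistency: run several independent reasoning passes and take a majority vote over the final answers. The sketch below illustrates the idea; `sample_answer` is a hypothetical stand-in for a stochastic LLM reasoning pass, not a real API.

```python
from collections import Counter

def sample_answer(question: str, seed: int) -> str:
    # Placeholder for one stochastic reasoning pass of an LLM.
    # Here we simulate five passes whose final answers mostly agree.
    simulated_answers = ["4", "4", "5", "4", "3"]
    return simulated_answers[seed % len(simulated_answers)]

def self_consistency(question: str, n_passes: int = 5) -> str:
    """Spend more inference compute: majority vote over several passes."""
    votes = Counter(sample_answer(question, s) for s in range(n_passes))
    return votes.most_common(1)[0][0]

print(self_consistency("What is 2 + 2?"))  # "4"
```

Accuracy tends to improve as `n_passes` grows, which is exactly why test-time scaling drives additional inference compute demand.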
What Is Pretraining Scaling?

Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
These three elements - data, model size, compute - are interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed with more data, the overall performance of the models improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
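The interplay of model size, data and compute can be made concrete with a rule of thumb from the scaling-law literature (an assumption here, not stated in the article): training compute is roughly 6 FLOPs per parameter per token.

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs.

    Rule of thumb (assumed): C ~ 6 * N * D, where N is the parameter
    count and D is the number of training tokens.
    """
    return 6.0 * n_params * n_tokens

# Example: a 7-billion-parameter model trained on 2 trillion tokens.
c = training_flops(7e9, 2e12)
print(f"{c:.1e} FLOPs")  # 8.4e+22 FLOPs
```

Doubling either the model size or the dataset doubles the compute budget, which is why pretraining scaling directly translates into demand for larger accelerated-computing clusters.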
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
[Figure: Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts, depicted above, is a popular model architecture for AI training.]

What Is Post-Training Scaling?

Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model input and outputs.
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
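In offline distillation, the student is typically trained to match the teacher's temperature-softened output distribution via a KL-divergence loss. The following is a minimal, dependency-free sketch of that loss; in practice this would be computed on framework tensors over a whole batch.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-softened softmax; higher temperature flattens the
    # distribution, exposing more of the teacher's "dark knowledge".
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical logits give zero loss; diverging logits give a positive loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))       # 0.0
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]) > 0)   # True
```

Minimizing this loss with gradient descent on the student's parameters is what lets a lightweight model approximate a much larger teacher.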
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
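Best-of-n sampling can be sketched in a few lines. Both `generate` and `reward` below are hypothetical stand-ins for a language model's sampler and a trained reward model; only the selection logic is the technique itself.

```python
def generate(prompt: str, n: int) -> list[str]:
    # Placeholder: n sampled completions from a language model.
    return [f"{prompt} -> candidate {i}" for i in range(n)]

def reward(candidate: str) -> float:
    # Placeholder reward-model score; length is a toy stand-in.
    return float(len(candidate))

def best_of_n(prompt: str, n: int = 4) -> str:
    # Sample n candidates, keep the one the reward model scores highest.
    candidates = generate(prompt, n)
    return max(candidates, key=reward)

print(best_of_n("Translate 'bonjour' to English", n=4))
```

Note that no model parameters change: all the extra compute is spent at inference time, which is why best-of-n is counted as a test-time technique rather than fine-tuning.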
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.