
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires intensive amounts of computational resources to support AI reasoning, which will drive further demand for accelerated computing.
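One way multiple inference passes can raise accuracy is self-consistency: sample several independent reasoning attempts and keep the majority answer. Below is a minimal, hypothetical sketch - `solve_once` stands in for a full reasoning pass of a model, modeled here as a noisy process.

```python
import random
from collections import Counter

def solve_once(rng: random.Random) -> int:
    """Stand-in for one reasoning pass: a noisy process that
    reaches the right answer (42) only about 70% of the time."""
    return 42 if rng.random() < 0.7 else rng.choice([41, 43])

def majority_vote(answers):
    """Return the most common answer across reasoning passes."""
    return Counter(answers).most_common(1)[0][0]

def solve_with_self_consistency(n_samples: int, seed: int = 0) -> int:
    """Spend more inference-time compute (more samples) to get a
    more reliable answer than any single pass gives."""
    rng = random.Random(seed)
    return majority_vote([solve_once(rng) for _ in range(n_samples)])
```

Spending compute on more samples drives the majority answer toward the model's most consistent solution - the basic trade that test-time scaling exploits.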
What Is Pretraining Scaling?

Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
Each of these three elements - data, model size, compute - is interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed with more data, the overall performance of the models improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
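The shape of this relationship can be written as a simple power law: predicted loss falls as parameter count and token count grow, but with diminishing returns. The sketch below uses the functional form L(N, D) = E + A/N^alpha + B/D^beta common in the scaling-law literature; the constants are illustrative defaults, not fitted values.

```python
def pretraining_loss(n_params: float, n_tokens: float,
                     E: float = 1.7, A: float = 400.0, B: float = 410.0,
                     alpha: float = 0.34, beta: float = 0.28) -> float:
    """Power-law form of the pretraining scaling law: an irreducible
    loss E plus penalty terms that shrink as model size (n_params)
    and dataset size (n_tokens) grow."""
    return E + A / n_params ** alpha + B / n_tokens ** beta

# Scaling up models and data together lowers the predicted loss:
small_model = pretraining_loss(1e8, 2e9)      # ~100M params, ~2B tokens
large_model = pretraining_loss(7e10, 1.4e12)  # ~70B params, ~1.4T tokens
```

Because both penalty terms decay as power laws, each constant-factor improvement in loss requires a multiplicative increase in parameters and data - which is exactly why pretraining demand for compute keeps compounding.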
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts is a popular model architecture for AI training.

What Is Post-Training Scaling?

Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model input and outputs.
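As a toy illustration (not any real model or training API), fine-tuning can be sketched as continuing gradient descent from pretrained weights on a small set of domain-specific input/output pairs:

```python
def predict(weights, features):
    """Score an input with a tiny linear model."""
    return sum(w * x for w, x in zip(weights, features))

def fine_tune(weights, pairs, lr=0.1, epochs=50):
    """Continue gradient descent on squared error, but only over
    the domain-specific (features, target) pairs."""
    weights = list(weights)
    for _ in range(epochs):
        for features, target in pairs:
            error = predict(weights, features) - target
            for i, x in enumerate(features):
                weights[i] -= lr * error * x
    return weights

# "Pretrained" weights, then a small domain-specific supervision set:
pretrained = [0.0, 0.0]
domain_pairs = [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0)]
tuned = fine_tune(pretrained, domain_pairs)
```

The same training loop keeps running, only on a far narrower dataset - which is why fine-tuning a model costs a fraction of the compute of pretraining it.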
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
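In code, the heart of offline distillation is the objective: the student is trained to match the teacher's output distribution, usually softened with a temperature so that small differences between classes still carry signal. A minimal sketch, assuming plain logit lists rather than real model outputs:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens
    the distribution so non-top classes keep meaningful mass."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened
    distribution - the core objective of offline distillation."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s)
                for t, s in zip(teacher_probs, student_probs))
```

Minimizing this loss pushes the lightweight student toward the teacher's full output distribution, not just its top answer, which is where much of the teacher's knowledge lives.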
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
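Stripped to its core, the thumbs-up example is a softmax bandit trained with a policy-gradient update: replies that earn reward have their preference scores raised, making them more likely to be sampled next time. Everything below is a hypothetical sketch - `thumbs_up` stands in for a learned reward model or real user feedback.

```python
import math
import random

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def thumbs_up(reply_index: int) -> float:
    """Stand-in reward signal: users only upvote reply 0."""
    return 1.0 if reply_index == 0 else 0.0

def train_preferences(prefs, steps=500, lr=0.1, seed=0):
    """REINFORCE-style update for a one-step 'conversation':
    sample a reply, observe its reward, and raise its score
    in proportion to that reward."""
    rng = random.Random(seed)
    prefs = list(prefs)
    for _ in range(steps):
        probs = softmax(prefs)
        choice = rng.choices(range(len(prefs)), weights=probs)[0]
        reward = thumbs_up(choice)
        for i, p in enumerate(probs):
            grad = (1.0 if i == choice else 0.0) - p
            prefs[i] += lr * reward * grad
    return prefs

trained = train_preferences([0.0, 0.0, 0.0])
```

After training, the rewarded reply's preference score is never lower than the others', so the agent samples it more and more often - the same feedback loop RLHF runs at much larger scale.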
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
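A sketch of best-of-n sampling, with a hypothetical hand-written reward model standing in for a learned one:

```python
def reward_model(output: str) -> float:
    """Hypothetical reward model: here it simply prefers shorter,
    polite-sounding answers; a real one would be a learned scorer."""
    score = -len(output)
    if "please" in output.lower():
        score += 100
    return score

def best_of_n(candidates):
    """Score each candidate output and keep only the best one -
    no model parameters are modified."""
    return max(candidates, key=reward_model)

candidates = [
    "Rebooting now, please stand by.",
    "I will reboot the machine immediately without asking.",
    "Please wait while I reboot.",
]
best = best_of_n(candidates)
```

Because only sampling and scoring are involved, best-of-n can sharpen a frozen model's behavior at the cost of n times the inference compute.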
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.
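Beam search is a representative search method: instead of committing to a single continuation, it keeps the few best partial sequences alive at each step and prunes the rest. The sketch below runs over a toy transition table of hypothetical log-scores rather than a real language model.

```python
# Toy "model": fixed log-scores for moving from one token to the next.
TRANSITIONS = {
    "": {"the": -0.1, "a": -0.5},
    "the": {"cat": -0.2, "dog": -0.9},
    "a": {"cat": -0.3, "dog": -0.4},
    "cat": {"sat": -0.1, "ran": -0.8},
    "dog": {"sat": -0.7, "ran": -0.2},
}

def beam_search(beam_width: int = 2, steps: int = 3):
    """Keep the beam_width highest-scoring partial sequences at
    every step, expanding each by all possible next tokens."""
    beams = [([], 0.0)]  # (token sequence, cumulative log-score)
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            last = seq[-1] if seq else ""
            for token, logp in TRANSITIONS.get(last, {}).items():
                candidates.append((seq + [token], score + logp))
        if not candidates:
            break
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0]

best_seq, best_score = beam_search()
```

Widening the beam explores more decision paths per step - another knob that trades extra compute for better final outputs.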