
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires intensive amounts of computational resources to support AI reasoning, which will drive further demand for accelerated computing.
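One common way to apply extra inference-time compute is self-consistency: sample the model several times and majority-vote over the final answers. The sketch below is a minimal, hypothetical illustration - `sample_answer` is a stand-in for a reasoning model that is right 60% of the time, not any real API.

```python
from collections import Counter
import random

def self_consistency(sample_answer, question, n=25, seed=0):
    """Test-time scaling sketch: spend more inference compute by sampling
    the model n times, then majority-voting over the final answers."""
    rng = random.Random(seed)
    answers = [sample_answer(question, rng) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Hypothetical stand-in for a reasoning model: correct 60% of the time,
# otherwise returning one of several wrong answers.
def sample_answer(question, rng):
    return "42" if rng.random() < 0.6 else rng.choice(["41", "43", "24"])

answer = self_consistency(sample_answer, "What is 6 x 7?")
```

Because errors are scattered across many wrong answers while correct samples concentrate on one, the vote usually recovers the right answer even when any single sample is unreliable - which is why accuracy tends to rise with more inference passes.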
What Is Pretraining Scaling?

Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
These three elements - data, model size and compute - are interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed more data, their overall performance improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
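The relationship can be sketched as a power law: predicted loss falls smoothly as parameter count and training tokens grow. The Python snippet below is a minimal illustration in the style of published scaling-law fits; the constants are illustrative assumptions, not values from the paper referenced above.

```python
import math

def pretraining_loss(params: float, tokens: float) -> float:
    """Scaling-law sketch: loss falls as a power law in parameter
    count N and training tokens D, plus an irreducible floor E.
    All constants here are illustrative, not fitted values."""
    E, A, B = 1.69, 406.4, 410.7   # assumed floor and fit coefficients
    alpha, beta = 0.34, 0.28       # assumed power-law exponents
    return E + A / params**alpha + B / tokens**beta

small = pretraining_loss(1e9, 2e10)     # 1B params, 20B tokens
large = pretraining_loss(7e10, 1.4e12)  # 70B params, 1.4T tokens
```

The key property is monotonicity: scaling up either axis lowers predicted loss, but with diminishing returns - which is why each improvement demands disproportionately more compute.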
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
[Figure: Pretraining scaling links the size of models, datasets and compute to AI gains. Mixture of experts, depicted above, is a popular model architecture for AI training.]

What Is Post-Training Scaling?

Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model inputs and outputs.
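The mechanics can be shown with a deliberately tiny stand-in: start from "pretrained" weights and run a few gradient steps on domain-specific (input, output) pairs. This toy linear model is an assumption for illustration only - real fine-tuning updates billions of neural-network weights the same way, just at scale.

```python
def predict(w, b, x):
    """Toy one-parameter 'model': y = w*x + b."""
    return w * x + b

def finetune(w, b, pairs, lr=0.05, epochs=500):
    """SGD on squared error over (input, output) pairs,
    starting from pretrained weights (w, b)."""
    for _ in range(epochs):
        for x, y in pairs:
            err = predict(w, b, x) - y
            w -= lr * err * x   # gradient of squared error w.r.t. w
            b -= lr * err       # gradient of squared error w.r.t. b
    return w, b

# Domain data follows y = 2x + 1; the "pretrained" start is y = x.
pairs = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = finetune(1.0, 0.0, pairs)
```

After a few hundred passes the weights move from the generic starting point to ones that fit the domain data, which is the essence of fine-tuning.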
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
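In offline distillation the student is typically trained against the teacher's temperature-softened output distribution rather than hard labels. A minimal sketch of that target construction, using assumed toy logits:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer targets
    that reveal how the teacher ranks the non-top classes."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl(p, q):
    """KL divergence - the distillation loss the student minimizes
    between teacher targets p and its own predictions q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher_logits = [4.0, 1.0, 0.2]   # assumed example values
student_logits = [3.0, 1.5, 0.5]

soft_targets = softmax(teacher_logits, T=2.0)
soft_loss = kl(soft_targets, softmax(student_logits, T=2.0))
```

Raising the temperature spreads probability mass off the argmax class, so the student learns the teacher's full ranking of outputs instead of only its top answer - the main reason distillation transfers more than plain label training.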
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
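The thumbs-up loop above can be reduced to its simplest form: a multi-armed bandit that learns which response style users approve most. This is a hypothetical stand-in for RLHF's reward loop, not a production algorithm - real RLHF trains a reward model and updates the LLM's weights with policy-gradient methods.

```python
import random

def rlhf_bandit(thumbs_up_rate, episodes=3000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: thumbs_up_rate[a] is the (simulated)
    chance that users approve response style a."""
    rng = random.Random(seed)
    q = [0.0] * len(thumbs_up_rate)   # estimated reward per style
    n = [0] * len(thumbs_up_rate)     # pull counts per style
    for _ in range(episodes):
        if rng.random() < eps:        # explore: try a random style
            a = rng.randrange(len(q))
        else:                         # exploit: use the best style so far
            a = max(range(len(q)), key=q.__getitem__)
        r = 1.0 if rng.random() < thumbs_up_rate[a] else 0.0
        n[a] += 1
        q[a] += (r - q[a]) / n[a]     # incremental sample mean
    return q

q = rlhf_bandit([0.2, 0.8, 0.5])      # style 1 earns the most thumbs up
```

The agent's reward estimates converge toward the true approval rates, and its greedy choices increasingly favor the style users reinforce - the same maximize-cumulative-reward objective described above, stripped to one decision.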
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
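Best-of-n reduces to a few lines: sample n candidates, score each with the reward model, return the winner. The generator and reward function below are hypothetical stand-ins (a sampler that pads answers with filler, and a reward model that simply prefers brevity), chosen only to make the sketch self-contained.

```python
import random

def best_of_n(generate, reward, prompt, n=8, seed=0):
    """Sample n candidate completions and return the one the reward
    model scores highest -- no model parameters are modified."""
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=reward)

# Hypothetical stand-ins for a real sampler and reward model.
def generate(prompt, rng):
    fillers = ["", " um,", " well, you see,", " to be honest,"]
    return "Sure," + rng.choice(fillers) + " the answer is 42."

def reward(text):
    return -len(text)   # toy reward model: shorter is better

best = best_of_n(generate, reward, "What is 6 x 7?")
```

Because selection happens purely at inference time, this trades extra compute for quality without any fine-tuning run - exactly the alternative to RL-based post-training described above.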
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.