
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires significant computational resources to support AI reasoning, which will drive further demand for accelerated computing.
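One simple form of test-time scaling is self-consistency: run many inference passes on the same problem and keep the most common answer. The sketch below illustrates the idea with a hypothetical `sample_answer` function standing in for a stochastic LLM call - a minimal illustration, not any particular model's API.

```python
import random
from collections import Counter

def sample_answer(prompt: str, rng: random.Random) -> str:
    # Hypothetical stand-in for one stochastic inference pass of an LLM:
    # a noisy "solver" that usually, but not always, returns the right answer.
    return "42" if rng.random() < 0.7 else str(rng.randint(0, 99))

def self_consistency(prompt: str, n_passes: int, seed: int = 0) -> str:
    # Test-time scaling: spend more compute (n_passes inference calls)
    # on a single query, then keep the most frequent answer.
    rng = random.Random(seed)
    votes = Counter(sample_answer(prompt, rng) for _ in range(n_passes))
    return votes.most_common(1)[0][0]

answer = self_consistency("What is 6 x 7?", n_passes=25)
```

Spending 25 inference passes instead of one makes the occasional wrong sample far less likely to win the vote, which is the essence of trading inference compute for accuracy.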
What Is Pretraining Scaling?
Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
Each of these three elements - data, model size, compute - is interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed more data, the overall performance of the models improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
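Pretraining scaling laws are often expressed as a parametric loss curve in model parameters N and training tokens D. The sketch below assumes a loss of the form L(N, D) = E + A/N^alpha + B/D^beta; the constants are purely illustrative, since real values come from fitting empirical training runs.

```python
def predicted_loss(n_params: float, n_tokens: float) -> float:
    # Parametric scaling law of the form L(N, D) = E + A/N^alpha + B/D^beta.
    # The constants below are illustrative placeholders, not a published fit.
    E, A, B = 1.7, 400.0, 410.0
    alpha, beta = 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling up both model size and data lowers the predicted loss.
small_run = predicted_loss(1e9, 2e10)    # ~1B params, ~20B tokens
large_run = predicted_loss(7e10, 1.4e12) # ~70B params, ~1.4T tokens
```

The key property is monotonic, predictable improvement: more parameters and more tokens push the predicted loss down toward the irreducible floor E, which is why scaling up compute paid off so reliably.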
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts, depicted above, is a popular model architecture for AI training.
What Is Post-Training Scaling?
Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model inputs and outputs.
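As a minimal illustration of the idea, the sketch below "fine-tunes" a toy linear model: starting from pretrained parameters, it takes gradient steps on domain-specific input-output pairs. The model and data are hypothetical stand-ins for an LLM and an organization's dataset.

```python
def fine_tune(weight: float, bias: float, data, lr: float = 0.05, epochs: int = 200):
    # Starting from "pretrained" parameters, run gradient-descent steps on
    # domain-specific (input, output) pairs -- the essence of fine-tuning.
    for _ in range(epochs):
        for x, y in data:
            pred = weight * x + bias
            err = pred - y
            weight -= lr * err * x  # gradient of squared error w.r.t. weight
            bias -= lr * err        # gradient of squared error w.r.t. bias
    return weight, bias

# "Pretrained" model: y = 1.0*x + 0.0. The domain data follows y = 2x + 1.
domain_pairs = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = fine_tune(1.0, 0.0, domain_pairs)
```

After fine-tuning, the parameters move from the generic starting point toward values that fit the new domain, without training a model from scratch.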
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
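A minimal sketch of the offline-distillation objective: the student is trained to match the teacher's temperature-softened output distribution, typically by minimizing a KL divergence between the two. The logits below are illustrative, not from a real model.

```python
import math

def softmax(logits, temperature=1.0):
    # Convert logits to probabilities; a higher temperature softens the
    # distribution, exposing more of the teacher's "dark knowledge".
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions -- the
    # soft-label matching objective minimized in offline distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose logits track the teacher's incurs a lower loss.
teacher = [2.0, 0.5, -1.0]
close_student = [1.8, 0.6, -0.9]
far_student = [-1.0, 2.0, 0.5]
```

In training, this loss (often combined with a standard cross-entropy term on ground-truth labels) is backpropagated through the student only; the teacher stays frozen.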
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
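A minimal sketch of best-of-n sampling, assuming a hypothetical `reward_model` (here it simply prefers longer outputs) and a stand-in `generate` function in place of a real LLM call:

```python
import random

def reward_model(output: str) -> float:
    # Hypothetical reward model: here it simply prefers longer outputs.
    return len(output)

def best_of_n(generate, prompt: str, n: int) -> str:
    # Draw n candidate outputs and keep the one the reward model scores
    # highest. The generator's weights are never modified.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=reward_model)

rng = random.Random(0)
def generate(prompt: str) -> str:
    # Stand-in for a stochastic LLM call.
    return prompt + " " + "detail " * rng.randint(1, 5)

best = best_of_n(generate, "Answer:", n=8)
```

Because only sampling and scoring are involved, best-of-n improves output quality purely at inference time, making it a lightweight alternative to updating the model with reinforcement learning.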
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.
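One common search method is beam search: expand several candidate decision paths in parallel, keep only the top-scoring few at each step, and return the best complete path. The sketch below uses a toy `expand`/`score` pair in place of a real model's next-step proposals and scoring:

```python
def beam_search(expand, score, start, width=2, depth=3):
    # Explore many decision paths, keep the `width` best at each step,
    # and return the highest-scoring complete path.
    beams = [start]
    for _ in range(depth):
        candidates = [path + [step] for path in beams for step in expand(path)]
        beams = sorted(candidates, key=score, reverse=True)[:width]
    return beams[0]

# Toy example: choose a sequence of digits that maximizes their sum.
def expand(path):
    return [1, 5, 9]   # possible next "decisions" at every step

def score(path):
    return sum(path)   # reward for a (partial) path

best_path = beam_search(expand, score, start=[])
```

With a real model, `expand` would propose candidate continuations and `score` would come from the model's own likelihoods or a reward model, but the keep-the-best-beams loop is the same.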