
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires intensive amounts of computational resources to support AI reasoning, which will drive further demand for accelerated computing.
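One simple form of test-time scaling is to run the same query through the model several times and aggregate the answers. The sketch below is a minimal, hypothetical illustration of that idea - `noisy_model` is a stand-in for a stochastic LLM inference pass, not a real model:

```python
import random
from collections import Counter

def noisy_model(question: str, rng: random.Random) -> int:
    """Stand-in for one stochastic inference pass: usually right,
    sometimes wrong. (Purely illustrative, not a real model.)"""
    correct = 42
    return correct if rng.random() < 0.7 else rng.randrange(100)

def self_consistency(question: str, n_passes: int, seed: int = 0) -> int:
    """Spend more inference compute by running several passes and
    majority-voting the final answer."""
    rng = random.Random(seed)
    answers = [noisy_model(question, rng) for _ in range(n_passes)]
    return Counter(answers).most_common(1)[0][0]
```

With a model that is right only 70% of the time per pass, voting over 25 passes makes the aggregate answer far more reliable than any single pass - accuracy bought with inference-time compute rather than more training.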
What Is Pretraining Scaling?

Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
Each of these three elements - data, model size, compute - is interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed more data, their overall performance improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
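Published scaling-law fits typically express pretraining loss as a power law in parameter count and token count. The sketch below uses that functional form with illustrative constants - they are assumptions for demonstration, not the fitted values from any particular paper:

```python
def pretraining_loss(n_params: float, n_tokens: float) -> float:
    """Power-law scaling sketch: loss = irreducible floor E plus terms
    that shrink as model size (N) and dataset size (D) grow.
    Constants here are illustrative, not fitted values."""
    E, A, alpha, B, beta = 1.69, 406.4, 0.34, 410.7, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta
```

Evaluating this at a 100M-parameter model on 10B tokens versus a 10B-parameter model on 1T tokens shows the predictable improvement the law describes: more parameters and more data push loss toward the irreducible floor, at the cost of far more compute.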
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts is a popular model architecture for AI training.

What Is Post-Training Scaling?

Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model inputs and outputs.
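The mechanics can be sketched on a toy model: start from "pretrained" weights and nudge them toward a small domain-specific dataset of input/output pairs with gradient descent. This is a minimal illustration of the idea, not how LLM fine-tuning is implemented in practice:

```python
def fine_tune(weights, pairs, lr=0.1, epochs=500):
    """Toy fine-tuning sketch: adapt pretrained weights (w, b) of a
    linear model y = w*x + b to new (input, output) pairs by
    stochastic gradient descent on squared error."""
    w, b = weights
    for _ in range(epochs):
        for x, y in pairs:
            err = (w * x + b) - y
            w -= lr * err * x   # gradient of 0.5*err^2 w.r.t. w
            b -= lr * err       # gradient of 0.5*err^2 w.r.t. b
    return w, b
```

Starting from the identity mapping (w=1, b=0) and fine-tuning on points drawn from y = 2x + 1, the weights converge to the new domain's mapping while reusing the pretrained starting point.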
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
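A minimal sketch of offline distillation, under the simplifying assumption that the student directly adjusts its output logits: the student descends the cross-entropy gradient toward the teacher's temperature-softened distribution. Real distillation backpropagates this loss through the student network, but the objective is the same:

```python
import math

def softmax(logits, temperature=2.0):
    """Temperature-softened softmax: a higher temperature exposes the
    teacher's knowledge about relative similarities between classes."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_step(student_logits, teacher_logits, lr=0.5, temperature=2.0):
    """One offline-distillation update: move the student's logits toward
    the teacher's softened distribution; the cross-entropy gradient
    w.r.t. logits is proportional to (p_student - p_teacher)."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    return [s - lr * (ps - pt) for s, ps, pt in zip(student_logits, p_s, p_t)]
```

Iterating this step drives the student's output distribution to match the teacher's, which is the sense in which the student "mimics" the pretrained teacher.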
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
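The feedback loop can be illustrated with a toy bandit-style setup, where thumbs-up reactions are simulated as Bernoulli rewards and the agent learns per-response value estimates. This is a hypothetical simplification - RLHF in practice trains a reward model and optimizes a full policy - but the reinforce-what-gets-rewarded dynamic is the same:

```python
import random

def train_policy(reward_probs, episodes=2000, eps=0.1, seed=0):
    """Toy RL-from-feedback loop: the agent repeatedly picks one of
    several candidate responses; simulated thumbs-up feedback
    (Bernoulli reward with the given probability) updates a running
    value estimate per response, epsilon-greedy style."""
    rng = random.Random(seed)
    values = [0.0] * len(reward_probs)
    counts = [0] * len(reward_probs)
    for _ in range(episodes):
        if rng.random() < eps:                      # explore
            a = rng.randrange(len(reward_probs))
        else:                                       # exploit best so far
            a = max(range(len(values)), key=values.__getitem__)
        r = 1.0 if rng.random() < reward_probs[a] else 0.0
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]    # incremental mean
    return values
```

After enough interaction, the agent's value estimates identify the response users reward most often - the cumulative-reward maximization described above.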
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
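Best-of-n is one of the simpler techniques to sketch. In the hypothetical example below, `toy_generate` and `toy_reward` stand in for the language model and the reward model; note that no model parameters are updated at any point:

```python
import random

def best_of_n(generate, reward, n, seed=0):
    """Best-of-n sampling: draw n candidate outputs, score each with a
    reward model, and return the highest-scoring candidate."""
    rng = random.Random(seed)
    candidates = [generate(rng) for _ in range(n)]
    return max(candidates, key=reward)

# Hypothetical stand-ins for the generator and reward model:
def toy_generate(rng):
    return rng.choice(["short.", "a fuller answer.", "a detailed, polite answer."])

def toy_reward(text):
    return len(text)  # pretend the reward model prefers fuller answers
```

Because selection happens purely at sampling time, this improves outputs without the training cost of reinforcement learning - the trade-off is extra inference compute for the n candidate generations.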
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.
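Beam search is a common instance of this idea: keep several partial decision paths alive in parallel, extend each, and prune to the top scorers at every step. The sketch below assumes a generic `score` function over paths, standing in for a model's scoring of candidate continuations:

```python
def beam_search(vocab, score, length, beam_width):
    """Explore decision paths in parallel: extend every surviving path
    with every vocabulary item, keep only the beam_width best-scoring
    paths, and return the best complete path at the end."""
    beams = [((), 0.0)]  # (path, score) pairs; start from the empty path
    for _ in range(length):
        expanded = [(path + (tok,), score(path + (tok,)))
                    for path, _ in beams for tok in vocab]
        expanded.sort(key=lambda pair: pair[1], reverse=True)
        beams = expanded[:beam_width]
    return beams[0][0]
```

With a toy vocabulary of {0, 1} and a score that simply sums the path, the search correctly recovers the all-ones path - in a real model the score would come from the model's own estimates of each continuation's quality.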