
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires intensive amounts of computational resources to support AI reasoning, which will drive further demand for accelerated computing.
What Is Pretraining Scaling?
Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
Each of these three elements - data, model size, compute - is interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed with more data, the overall performance of the models improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
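This relationship between data, model size and compute is often written as a parametric loss curve. The sketch below shows the common functional form L(N, D) = E + A/N^alpha + B/D^beta, where N is parameter count and D is training tokens; the constants here are purely illustrative, chosen only to show the shape of the law, not values from any specific paper.

```python
# Illustrative sketch of a parametric pretraining scaling law: loss falls
# predictably as parameters (N) and training tokens (D) grow.
# All constants below are hypothetical, chosen only to show the shape.

def pretraining_loss(n_params: float, n_tokens: float) -> float:
    """Predicted loss L(N, D) = E + A / N**alpha + B / D**beta."""
    E, A, B = 1.7, 400.0, 410.0      # irreducible loss and scale constants (illustrative)
    alpha, beta = 0.34, 0.28         # diminishing-returns exponents (illustrative)
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling model size and data together lowers the predicted loss:
small = pretraining_loss(1e9, 2e10)     # 1B params, 20B tokens
large = pretraining_loss(7e10, 1.4e12)  # 70B params, 1.4T tokens
assert large < small
```

The additive form captures why the three elements are interrelated: shrinking either the model term or the data term alone eventually hits diminishing returns, so both must grow together - and the compute to train them grows with them.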
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts, depicted above, is a popular model architecture for AI training.

What Is Post-Training Scaling?
Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model inputs and outputs.
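The idea of fine-tuning can be sketched in a few lines: start from weights learned during pretraining and take a few gradient steps on a small set of domain-specific input/output pairs. The linear model and synthetic data below are toy stand-ins for an LLM and its corpus, not a real training recipe.

```python
import numpy as np

# Minimal sketch of fine-tuning: begin from "pretrained" weights and run a few
# gradient-descent steps on a small domain dataset of input/output pairs.
# The linear model and data here are toy stand-ins for an LLM and its corpus.

rng = np.random.default_rng(0)
pretrained_w = np.array([1.0, -0.5])          # weights learned during pretraining

# Small fine-tuning set: inputs X and target outputs y from the new domain.
X = rng.normal(size=(32, 2))
true_w = np.array([1.4, -0.1])                # the domain's "true" mapping
y = X @ true_w

w = pretrained_w.copy()
lr = 0.1
for _ in range(200):                          # a few passes of gradient descent
    grad = 2 * X.T @ (X @ w - y) / len(X)     # gradient of mean squared error
    w -= lr * grad

# The fine-tuned weights fit the domain better than the pretrained ones.
assert np.mean((X @ w - y) ** 2) < np.mean((X @ pretrained_w - y) ** 2)
```

Because training starts from pretrained weights rather than random ones, only a small dataset and a modest amount of compute are needed - which is exactly why fine-tuning lowers the barrier to adoption.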
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
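Offline distillation can be sketched as follows: a frozen teacher produces soft targets (full output probability distributions), and the student is trained to match them by minimizing cross-entropy against those soft labels. Both "models" below are tiny linear-softmax stand-ins, not real networks.

```python
import numpy as np

# Sketch of offline distillation: a frozen "teacher" produces soft targets
# (output probability distributions), and a lightweight "student" is trained
# to mimic them by minimizing cross-entropy against those soft labels.
# Models and data here are toy stand-ins for real neural networks.

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))                       # inputs
teacher_W = 0.5 * rng.normal(size=(4, 3))          # frozen pretrained teacher
soft_targets = softmax(X @ teacher_W)              # teacher's output distributions

student_W = np.zeros((4, 3))                       # student trained from scratch
lr = 0.5
for _ in range(500):
    probs = softmax(X @ student_W)
    grad = X.T @ (probs - soft_targets) / len(X)   # cross-entropy gradient vs. soft targets
    student_W -= lr * grad

# After training, the student's output distributions track the teacher's.
gap = np.abs(softmax(X @ student_W) - soft_targets).mean()
assert gap < 0.1
```

The key point is that the student never sees ground-truth labels - it learns from the teacher's probability distributions, which carry more information per example than hard labels do.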
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
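The feedback loop described above can be illustrated with a deliberately tiny bandit-style sketch: a softmax "policy" over a few candidate responses is nudged toward whichever response earns reward, using a REINFORCE-style update. Real RLHF updates an entire LLM against a learned reward model; the responses and reward values here are hypothetical.

```python
import numpy as np

# Minimal sketch of reinforcement learning from feedback: a softmax "policy"
# over candidate responses is nudged toward responses that earn reward
# (e.g., a thumbs-up). A bandit stand-in for full RLHF, which updates an LLM.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
responses = ["helpful answer", "vague answer", "rude answer"]
reward = {0: 1.0, 1: 0.2, 2: 0.0}   # hypothetical user feedback per response

logits = np.zeros(3)                # policy starts indifferent
lr = 0.5
for _ in range(500):
    probs = softmax(logits)
    a = rng.choice(3, p=probs)                 # sample a response
    r = reward[a]                              # collect feedback on it
    logits += lr * r * (np.eye(3)[a] - probs)  # REINFORCE update toward rewarded actions
```

After enough interactions, probability mass concentrates on the most-rewarded response - the same dynamic that steers an LLM toward outputs users approve of.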
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
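Best-of-n is one of the simplest techniques to write down: sample n candidates, score each with a reward model, keep the winner. The generator and reward model below are hypothetical stand-ins, since the technique works with any model pair.

```python
import random

# Sketch of best-of-n sampling: draw n candidate outputs from a model, score
# each with a reward model, and return the highest-scoring one. The generator
# and reward model below are hypothetical stand-ins.

def generate(prompt: str, rng: random.Random) -> str:
    # Stand-in for sampling one completion from a language model.
    styles = ["terse", "detailed", "off-topic", "polite"]
    return f"{rng.choice(styles)} answer to: {prompt}"

def reward_model(output: str) -> float:
    # Stand-in for a learned reward model; this one prefers detailed, polite text.
    score = 0.0
    if "detailed" in output:
        score += 2.0
    if "polite" in output:
        score += 1.0
    if "off-topic" in output:
        score -= 3.0
    return score

def best_of_n(prompt: str, n: int = 8, seed: int = 0) -> str:
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=reward_model)   # keep the highest-reward candidate

print(best_of_n("explain scaling laws"))
```

Note that no model parameters change: all the extra compute goes into inference, which is why best-of-n is an alternative to fine-tuning rather than a form of it.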
Search methods explore a range of potential decision paths before selecting a final output. This technique can iteratively improve the model's responses.
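Beam search is a classic instance of exploring several decision paths before committing to one: it expands multiple partial paths in parallel and keeps only the highest-scoring few at each step. The vocabulary and scoring function below are toy stand-ins.

```python
# Sketch of a search method: beam search explores several partial decision
# paths in parallel and keeps only the top-scoring ones at each step before
# committing to a final output. Vocabulary and scorer are toy stand-ins.

def beam_search(score_step, vocab, length: int, beam_width: int = 2):
    """Return the best (path, score) of the given length under score_step."""
    beams = [([], 0.0)]                          # (partial path, cumulative score)
    for _ in range(length):
        candidates = [
            (path + [tok], score + score_step(path, tok))
            for path, score in beams
            for tok in vocab
        ]
        # Keep only the top-scoring partial paths (the "beam").
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0]

# Toy scorer: reward paths that alternate between tokens.
def score_step(path, tok):
    return 1.0 if (not path or path[-1] != tok) else -1.0

path, score = beam_search(score_step, vocab=["a", "b"], length=4)
print(path, score)   # an alternating sequence scores highest
```

Widening the beam trades more compute at inference time for a better final answer - the same compute-for-quality exchange that defines test-time scaling.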