
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires intensive amounts of computational resources to support AI reasoning, which will drive further demand for accelerated computing.
What Is Pretraining Scaling?

Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
Each of these three elements - data, model size, compute - is interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed with more data, the overall performance of the models improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
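The shape of this relationship can be sketched numerically. The constants below (E, A, B, alpha, beta) are made-up illustrative values, not fitted coefficients from any paper - the point is only the general form: loss falls as a power law in both parameter count and training-token count, so scaling model size and data together yields predictable gains.

```python
def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Illustrative pretraining scaling curve: predicted loss for a
    model with n_params parameters trained on n_tokens tokens.
    E, A, B, alpha, beta are placeholder values, not fitted constants."""
    E, A, B = 1.7, 400.0, 400.0      # irreducible loss + fit coefficients
    alpha, beta = 0.34, 0.28         # power-law exponents
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling both model size and data lowers predicted loss.
small = predicted_loss(1e9, 2e10)     # 1B params, 20B tokens
large = predicted_loss(7e10, 1.4e12)  # 70B params, 1.4T tokens
assert large < small
```

The additive form captures why data and parameters must grow together: shrinking one term toward zero still leaves the other as a floor on achievable loss.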
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts, depicted above, is a popular model architecture for AI training.

What Is Post-Training Scaling?

Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model inputs and outputs.
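A minimal sketch of the idea, using a toy linear model in place of an LLM - the `fine_tune` helper, the starting weights and the domain data below are all hypothetical stand-ins:

```python
import numpy as np

def fine_tune(w_pretrained, X, y, lr=0.1, epochs=200):
    """Continue gradient training from pretrained weights on a small
    domain dataset of input/output pairs (X, y) - the essence of
    supervised fine-tuning, shown here on a toy linear model."""
    w = np.array(w_pretrained, dtype=float)
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

# Toy domain data whose ideal weights are [2, -1].
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = X @ np.array([2.0, -1.0])
w = fine_tune([0.5, 0.5], X, y)   # start from 'pretrained' weights
assert np.allclose(w, [2.0, -1.0], atol=1e-3)
```

Starting from pretrained weights rather than random initialization is what makes fine-tuning far cheaper than pretraining from scratch.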
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
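The reward-maximization loop can be illustrated with a toy epsilon-greedy bandit - a deliberately simplified stand-in for RLHF in which each "arm" is a response style and the reward is the probability of a thumbs-up from users. All numbers below are made up:

```python
import random

def train_bandit(reward_probs, steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: the agent estimates the value of each
    action from feedback and increasingly picks the action with the
    highest estimated reward - a minimal sketch of RL's core loop."""
    rng = random.Random(seed)
    n = len(reward_probs)
    counts = [0] * n
    values = [0.0] * n
    for _ in range(steps):
        if rng.random() < eps:                    # explore
            a = rng.randrange(n)
        else:                                     # exploit current best
            a = max(range(n), key=lambda i: values[i])
        r = 1.0 if rng.random() < reward_probs[a] else 0.0  # thumbs-up?
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # running mean reward
    return values

# Response style 1 earns thumbs-up 80% of the time; the agent finds it.
vals = train_bandit([0.2, 0.8, 0.5])
assert max(range(3), key=lambda i: vals[i]) == 1
```

Real RLHF replaces the lookup table with a learned reward model scoring full responses and updates the LLM's weights with a policy-gradient method, but the explore/score/reinforce cycle is the same.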
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
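A minimal sketch of best-of-n, where the `generate` and `reward_model` callables are placeholders for a real LLM and reward model:

```python
def best_of_n(generate, reward_model, prompt, n=4):
    """Generate n candidate outputs for a prompt and return the one
    the reward model scores highest. No model weights are updated -
    quality is bought with extra inference-time compute instead."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=reward_model)

# Toy stand-ins: the 'model' replays canned answers and the 'reward
# model' simply prefers longer answers.
answers = iter(["ok", "a fuller answer", "the most complete detailed answer"])
pick = best_of_n(lambda prompt: next(answers), len, "any prompt", n=3)
assert pick == "the most complete detailed answer"
```

Because only sampling and scoring are involved, best-of-n works with any model exposed through an inference API, at the cost of n times the inference compute.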
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.