
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems while describing the steps required to solve a task. Test-time scaling requires intensive computational resources to support AI reasoning, which will drive further demand for accelerated computing.
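One simple form of test-time scaling is self-consistency: sample many independent reasoning traces and take a majority vote over their final answers. A minimal sketch, where `sample_answer` and `noisy_model` are hypothetical stand-ins for a full model inference pass:

```python
import random
from collections import Counter

def self_consistency(sample_answer, prompt, n=25, seed=0):
    """Run n independent inference passes and majority-vote the answers.
    More samples means more compute spent at inference time -- and,
    typically, higher accuracy on hard problems."""
    rng = random.Random(seed)
    answers = [sample_answer(prompt, rng) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in for a model that answers correctly only 80% of the time:
def noisy_model(prompt, rng):
    return "42" if rng.random() < 0.8 else "wrong"

answer = self_consistency(noisy_model, "What is 6 x 7?")
```

Each extra sample costs another full inference pass, which is exactly why this style of scaling drives demand for inference compute.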
What Is Pretraining Scaling?

Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
Each of these three elements - data, model size, compute - is interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed with more data, the overall performance of the models improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
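The shape of this law is commonly written as a power law in parameter count N and training tokens D. A minimal sketch of that form - the coefficients below are illustrative placeholders, not fitted values from any paper:

```python
def pretraining_loss(n_params, n_tokens,
                     E=1.7, A=400.0, B=410.0, alpha=0.34, beta=0.28):
    """Predicted pretraining loss under an assumed power-law fit:
    loss = E + A / N**alpha + B / D**beta.
    Scaling up either the model (N) or the dataset (D) pushes the loss
    toward the irreducible floor E -- all constants here are made up."""
    return E + A / n_params**alpha + B / n_tokens**beta

small = pretraining_loss(1e9, 2e10)     # 1B params, 20B tokens
large = pretraining_loss(7e10, 1.4e12)  # 70B params, 1.4T tokens
```

Because both terms shrink monotonically, the larger, longer-trained model always has lower predicted loss - the predictability that made pretraining scaling so useful for planning compute budgets.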
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts is a popular model architecture for AI training.

What Is Post-Training Scaling?

Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model input and outputs.
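As a cartoon of the mechanics, fine-tuning simply runs additional gradient-descent steps on the new (input, target) pairs. Here a two-parameter linear "model" stands in for a network with billions of weights - a sketch, not a real training setup:

```python
def fine_tune(w, b, pairs, lr=0.1, epochs=500):
    """Minimal fine-tuning sketch: run extra gradient-descent steps on an
    organization's (input, target) pairs. Real fine-tuning updates billions
    of weights with the same basic loop."""
    for _ in range(epochs):
        for x, y in pairs:
            err = (w * x + b) - y   # prediction error on the new data
            w -= lr * err * x       # gradient step on the weight
            b -= lr * err           # gradient step on the bias
    return w, b

# "Pretrained" parameters (0, 0) are adapted to domain data lying on y = 2x.
w, b = fine_tune(0.0, 0.0, [(1.0, 2.0), (2.0, 4.0)])
```

The loop converges to roughly w = 2, b = 0, i.e. the model has absorbed the pattern in the tailored dataset.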
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
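The training signal in offline distillation is typically a divergence between the two models' softened output distributions. A minimal sketch of that loss - the logit values and temperature are illustrative:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the targets."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the
    student's. The teacher is frozen; only the student is updated to
    shrink this loss until it mimics the teacher's outputs."""
    p = softmax(teacher_logits, temperature)  # fixed teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose logits match the teacher's incurs zero loss.
zero = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
```

Raising the temperature exposes more of the teacher's "dark knowledge" - the relative probabilities it assigns to wrong answers - which is what makes the lightweight student learn faster than training on hard labels alone.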
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
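The thumbs-up loop above can be sketched as a toy policy-gradient (REINFORCE-style) update over a handful of canned responses. Everything here - the `reward` callback, the learning rate, the response count - is a made-up stand-in for a real RLHF or RLAIF pipeline:

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def rl_from_feedback(reward, n_responses=3, steps=400, lr=0.2, seed=0):
    """Toy REINFORCE loop: a softmax policy over a few canned responses
    is nudged toward whichever ones earn thumbs-up. `reward` stands in
    for the human (RLHF) or AI (RLAIF) feedback signal."""
    rng = random.Random(seed)
    logits = [0.0] * n_responses
    for _ in range(steps):
        probs = softmax(logits)
        # sample a response index from the current policy
        i = rng.choices(range(n_responses), weights=probs)[0]
        r = reward(i)  # e.g. 1.0 for a thumbs up, 0.0 otherwise
        # policy-gradient step: raise the sampled response's logit
        # in proportion to the reward it received
        for j in range(n_responses):
            grad = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * r * grad
    return softmax(logits)

# Feedback that always likes response 2 concentrates probability on it.
probs = rl_from_feedback(lambda i: 1.0 if i == 2 else 0.0)
```

After a few hundred interactions, nearly all the probability mass sits on the consistently rewarded response - the cumulative-reward maximization the paragraph describes, in miniature.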
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
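Best-of-n is short enough to sketch end to end. The `generate` and `reward_model` callables below are toy stand-ins for a sampling-enabled LLM and a trained reward model:

```python
import random

def best_of_n(generate, reward_model, prompt, n=8, seed=0):
    """Sample n candidate outputs and return the one the reward model
    scores highest. No model parameters change -- all the extra compute
    is spent at sampling time."""
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=reward_model)

# Toy stand-ins: the "model" guesses a number; the "reward model"
# prefers guesses close to 100.
generate = lambda prompt, rng: rng.randint(0, 100)
reward_model = lambda x: -abs(100 - x)
best = best_of_n(generate, reward_model, "pick a number")
```

Because the selection happens outside the model, the technique composes with any frozen checkpoint - hence its appeal as an alternative to RL fine-tuning.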
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.
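A classic instance of this idea is beam search: keep only the top-scoring partial paths at each step, then return the best complete one. In this sketch, `expand` and `score` are toy stand-ins for a model's next-step proposals and its scoring function:

```python
def beam_search(expand, score, start, beam_width=2, depth=3):
    """Explore decision paths breadth-first, but prune to the beam_width
    highest-scoring partial paths at every step, then return the best
    complete path found."""
    beams = [[start]]
    for _ in range(depth):
        candidates = [path + [step] for path in beams for step in expand(path)]
        beams = sorted(candidates, key=score, reverse=True)[:beam_width]
    return beams[0]

# Toy search: each step appends a digit 0-2; the scorer prefers large sums.
path = beam_search(lambda p: [0, 1, 2], sum, start=0)
```

Widening the beam or deepening the search trades more compute for better outputs - the same compute-for-quality exchange that underlies the other scaling techniques in this article.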