
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires intensive amounts of computational resources to support AI reasoning, which will drive further demand for accelerated computing.
What Is Pretraining Scaling?

Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
Each of these three elements - data, model size, compute - is interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed with more data, the overall performance of the models improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
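The power-law relationship behind pretraining scaling can be sketched in a few lines. The functional form below follows the widely cited Chinchilla-style fit, L(N, D) = E + A/N^α + B/D^β; the coefficients are the published Chinchilla estimates and serve only to illustrate the trend, not values stated in this article.

```python
def predicted_loss(n_params, n_tokens,
                   e=1.69, a=406.4, b=410.7, alpha=0.34, beta=0.28):
    """Chinchilla-style pretraining loss estimate:
    L(N, D) = E + A / N^alpha + B / D^beta.
    Coefficients are the commonly cited published fits; treat them as
    illustrative rather than exact."""
    return e + a / n_params**alpha + b / n_tokens**beta

# A larger model trained on more tokens predicts a lower loss.
small = predicted_loss(1e9, 20e9)     # ~1B params, 20B tokens
large = predicted_loss(70e9, 1.4e12)  # ~70B params, 1.4T tokens
```

Note that loss falls smoothly as both N (parameters) and D (tokens) grow, which is what makes the gains from extra compute predictable in advance.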
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts, depicted above, is a popular model architecture for AI training.

What Is Post-Training Scaling?

Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use that pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model inputs and outputs.
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
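The core quantity offline distillation optimizes can be shown concretely: the student is trained to minimize the divergence between its output distribution and the teacher's temperature-softened distribution. The sketch below computes that loss in plain Python; the logit values and temperature are illustrative assumptions, not taken from any specific model.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature; a higher temperature flattens the
    distribution, exposing the teacher's 'dark knowledge'."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the
    student's - the objective offline distillation minimizes."""
    p = softmax(teacher_logits, temperature)  # soft targets
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher exactly incurs zero loss.
teacher = [2.0, 1.0, 0.1]
loss_mismatch = distillation_loss(teacher, [0.5, 0.5, 0.5])
loss_match = distillation_loss(teacher, teacher)
```

In practice this loss is minimized by gradient descent over the student's parameters, usually blended with a standard cross-entropy term on ground-truth labels.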
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
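Best-of-n sampling is simple to express in code: sample n candidates, score each with the reward model, and return the highest scorer, leaving model weights untouched. The `toy_generate` and `toy_reward` functions below are hypothetical stand-ins for a real language model and reward model.

```python
def best_of_n(generate, reward, prompt, n=4):
    """Draw n candidate outputs and return the one the reward model
    scores highest; model parameters are never updated."""
    candidates = [generate(prompt, seed=i) for i in range(n)]
    return max(candidates, key=reward)

# Hypothetical stand-ins for an LLM and a reward model.
def toy_generate(prompt, seed):
    return f"{prompt} (draft {seed})"

def toy_reward(text):
    # Pretend drafts with higher seed numbers score better.
    return int(text.split("draft ")[1].rstrip(")"))

best = best_of_n(toy_generate, toy_reward, "Summarize the report", n=4)
# best == "Summarize the report (draft 3)"
```

Because only inference is involved, the cost of best-of-n scales linearly with n - a direct example of trading extra test-time compute for output quality.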
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.
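One common search method of this kind is beam search, which keeps only the top-scoring partial paths at each step instead of committing greedily to one. The sketch below applies it to a toy sequence-building problem; the `expand` and `score` functions are illustrative assumptions standing in for a model's next-step proposals and a scoring model.

```python
def beam_search(expand, score, start, beam_width=2, depth=3):
    """Keep the beam_width highest-scoring partial paths at each step,
    exploring several decision paths before selecting a final output."""
    beam = [[start]]
    for _ in range(depth):
        candidates = [path + [nxt] for path in beam for nxt in expand(path)]
        if not candidates:
            break
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]  # prune to the best few paths
    return beam[0]

# Toy problem (hypothetical): extend a number sequence to maximize its sum.
def expand(path):
    return [path[-1] + 1, path[-1] + 2]  # two possible next steps

def score(path):
    return sum(path)

best_path = beam_search(expand, score, start=0, beam_width=2, depth=3)
# best_path == [0, 2, 4, 6]
```

Widening the beam or deepening the search explores more paths and often improves results, at a proportional increase in inference compute.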