
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires intensive amounts of computational resources to support AI reasoning, which will drive further demand for accelerated computing.
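One simple way to picture test-time scaling is self-consistency sampling: spend extra inference compute to draw several candidate answers to the same question, then take a majority vote. The sketch below is illustrative only - `sample_answer` is a hypothetical stand-in for repeated calls to a reasoning LLM, not a real model API.

```python
# Sketch of test-time scaling via self-consistency: sample many answers
# to one question and majority-vote on the result. More samples means
# more inference compute - and a more reliable final answer.
from collections import Counter
import random

def sample_answer(question, rng):
    # hypothetical stand-in for an LLM call: a noisy solver
    # that returns the right answer about 70% of the time
    return "42" if rng.random() < 0.7 else str(rng.randint(0, 9))

def self_consistency(question, n_samples=50, seed=0):
    rng = random.Random(seed)
    answers = [sample_answer(question, rng) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

answer = self_consistency("What is 6 * 7?")
```

Each individual sample can be wrong, but errors scatter while the correct answer accumulates votes - which is why accuracy improves as more compute is applied at inference time.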
What Is Pretraining Scaling?

Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
Each of these three elements - data, model size, compute - is interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed with more data, the overall performance of the models improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
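The relationship can be sketched with a Chinchilla-style functional form, in which predicted loss falls smoothly as parameter count N and token count D grow. The coefficients below are illustrative approximations of published fits, not authoritative values - exact numbers vary by study and dataset.

```python
# Sketch of a pretraining scaling law: loss L(N, D) = E + A/N^alpha + B/D^beta
# falls predictably as parameters N and training tokens D increase.
# Coefficient values are illustrative, not fitted results.

def pretraining_loss(n_params: float, n_tokens: float,
                     e: float = 1.69, a: float = 406.4, b: float = 410.7,
                     alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted training loss under a Chinchilla-style scaling law."""
    return e + a / n_params**alpha + b / n_tokens**beta

small = pretraining_loss(1e9, 2e10)     # ~1B params, ~20B tokens
large = pretraining_loss(7e10, 1.4e12)  # ~70B params, ~1.4T tokens
# larger model trained on more data -> lower predicted loss
```

The irreducible term `e` captures why gains flatten: scaling data and parameters drives loss toward a floor, and each increment of improvement demands substantially more compute.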
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts is a popular model architecture for AI training.

What Is Post-Training Scaling?

Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model inputs and outputs.
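The mechanics can be sketched in miniature: start from "pretrained" weights and continue gradient descent on a small set of domain-specific (input, output) pairs. The linear model and toy data below are stand-ins for an LLM and its fine-tuning set, chosen only to make the update loop visible.

```python
# Minimal sketch of fine-tuning: a "pretrained" model (here a single
# weight w in y = w * x) is further trained on domain-specific pairs.

def fine_tune(w, pairs, lr=0.1, epochs=100):
    """Gradient descent on squared error for the toy model y = w * x."""
    for _ in range(epochs):
        for x, y in pairs:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

w_pretrained = 1.0                        # generic behavior from pretraining
domain_pairs = [(1.0, 3.0), (2.0, 6.0)]   # target domain behaves like y = 3x
w_tuned = fine_tune(w_pretrained, domain_pairs)
# w_tuned moves from 1.0 toward 3.0, specializing to the new domain
```

The key point carries over to real models: fine-tuning reuses the pretrained starting point, so far less data and compute are needed than training from scratch.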
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
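Offline distillation can be sketched the same way: the teacher is frozen, and the student is trained to match the teacher's outputs on unlabeled inputs rather than ground-truth labels. Both "models" below are toy scalar functions standing in for large and small neural networks.

```python
# Sketch of offline distillation: a frozen teacher labels inputs and a
# smaller student (y = a*x + b) learns to mimic those outputs.

def teacher(x):
    return 2.0 * x + 1.0          # frozen, pretrained teacher model

def distill(inputs, lr=0.05, epochs=200):
    a, b = 0.0, 0.0               # student parameters, randomly-ish initialized
    for _ in range(epochs):
        for x in inputs:
            err = (a * x + b) - teacher(x)   # match the teacher, not labels
            a -= lr * 2 * err * x
            b -= lr * 2 * err
    return a, b

a, b = distill([0.0, 1.0, 2.0, 3.0])
# the student converges toward the teacher's behavior (a near 2, b near 1)
```

In practice the student matches the teacher's full output distribution (soft labels), which carries more signal than hard labels alone - but the structure of the loop is the same.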
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
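The thumbs-up example above can be sketched as a tiny policy-gradient loop: a policy over two canned responses is nudged toward whichever one earns reward. The `reward` function here is a hypothetical stand-in for aggregated human feedback, and the two-action "chatbot" is deliberately minimal.

```python
# Minimal REINFORCE-style sketch in the spirit of RLHF: actions that
# earn reward get their log-probability pushed up over time.
import math
import random

def reward(response):
    return 1.0 if response == "helpful" else 0.0   # simulated thumbs up

def softmax(logits):
    z = sum(math.exp(v) for v in logits.values())
    return {k: math.exp(v) / z for k, v in logits.items()}

def train_policy(steps=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    logits = {"helpful": 0.0, "evasive": 0.0}
    for _ in range(steps):
        probs = softmax(logits)
        action = "helpful" if rng.random() < probs["helpful"] else "evasive"
        # policy-gradient update, scaled by the reward received
        for k in logits:
            grad = (1.0 if k == action else 0.0) - probs[k]
            logits[k] += lr * reward(action) * grad
    return softmax(logits)

probs = train_policy()
# probability mass shifts heavily toward the rewarded response
```

Real RLHF replaces the hand-written reward with a learned reward model and the two-action policy with a full LLM, but the feedback loop - sample, score, reinforce - is the same shape.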
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
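The selection step is simple enough to sketch directly: draw n candidates, score each with a reward model, keep the best. The `generate` and `reward_model` functions below are hypothetical stand-ins for a real sampler and a real learned scorer.

```python
# Sketch of best-of-n sampling: generate several candidate outputs,
# score them with a reward model, and return the highest-scoring one.
# The model's parameters are never modified - only selection improves.
import random

def generate(prompt, rng):
    # stand-in for sampling one output from an LLM
    return f"{prompt} -> draft {rng.randint(0, 99)}"

def reward_model(output):
    # stand-in scorer; here, higher draft numbers score higher
    return int(output.rsplit(" ", 1)[-1])

def best_of_n(prompt, n=8, seed=0):
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=reward_model)

best = best_of_n("summarize")
```

Because no gradients flow, best-of-n trades extra inference compute for quality - which is why it is a lightweight alternative to reinforcement-learning-based fine-tuning.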
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.
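Beam search is one concrete search method: instead of committing greedily to the single best choice at each step, it keeps the top-k partial paths and expands all of them. The per-step scores below are a toy stand-in for a model's log-probabilities.

```python
# Sketch of beam search: keep the beam_width best partial decision
# paths at every step, then return the highest-scoring complete path.

def beam_search(step_scores, beam_width=2):
    """step_scores: list of dicts mapping choice -> score at each step."""
    beams = [((), 0.0)]                      # (path, cumulative score)
    for scores in step_scores:
        candidates = [(path + (choice,), total + s)
                      for path, total in beams
                      for choice, s in scores.items()]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]      # prune to the best paths
    return beams[0]

steps = [{"a": 0.1, "b": 0.9}, {"a": 1.0, "b": 0.0}]
path, score = beam_search(steps)
# best path: ("b", "a") with cumulative score 1.9
```

Widening the beam explores more paths at the cost of more compute, which is the same compute-for-quality trade-off that runs through all of these post-training and test-time techniques.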