
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires significant computational resources to support AI reasoning, which will drive further demand for accelerated computing.
What Is Pretraining Scaling? Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
Each of these three elements - data, model size, compute - is interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed with more data, the overall performance of the models improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
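The predictable relationship between data, model size and loss can be written as a power law. A minimal sketch of that idea, using the functional form from the Chinchilla work with approximate published constants (the exact fitted values depend on the study, so treat them as illustrative):

```python
# Sketch of a pretraining scaling law in the Chinchilla form:
#   L(N, D) = E + A / N**alpha + B / D**beta
# where N is parameter count and D is training tokens.
# Constants below are approximate fits from the literature, used for illustration.
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def pretrain_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for a model of n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling up both model size and dataset size lowers the predicted loss:
small = pretrain_loss(1e9, 20e9)       # 1B params, 20B tokens
large = pretrain_loss(70e9, 1.4e12)    # 70B params, 1.4T tokens
```

The key property the law captures is that loss improves smoothly and predictably as both axes scale, which is what made large training runs plannable in advance.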
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts is a popular model architecture for AI training.
What Is Post-Training Scaling? Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model inputs and outputs.
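In code, fine-tuning continues optimization from pretrained weights on a small in-domain dataset. A minimal sketch, assuming a toy linear model standing in for a real network and synthetic input/output pairs standing in for an organization's dataset:

```python
import numpy as np

# Minimal sketch of fine-tuning: a "pretrained" linear regressor is further
# trained on a small domain-specific dataset of input/output pairs.
# All weights and data are illustrative stand-ins, not a real model or corpus.
rng = np.random.default_rng(1)
w_pretrained = np.array([1.0, -0.5, 0.2])   # weights inherited from "pretraining"

# Small in-domain dataset: sample inputs paired with desired outputs.
X = rng.normal(size=(64, 3))
y = X @ np.array([1.2, -0.4, 0.5])          # the domain's true mapping

w = w_pretrained.copy()                     # start from the pretrained weights
lr = 0.05
for _ in range(300):
    grad = 2 * X.T @ (X @ w - y) / len(X)   # mean-squared-error gradient
    w -= lr * grad

error_before = np.mean((X @ w_pretrained - y) ** 2)
error_after = np.mean((X @ w - y) ** 2)
```

Starting from pretrained weights rather than random initialization is the whole point: far less data and compute are needed to reach good in-domain performance.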
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
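Offline distillation can be sketched as training the student to match the teacher's softened output distribution. Below, a frozen random linear map stands in for the pretrained teacher and a small softmax classifier plays the student; everything is illustrative, not a production recipe:

```python
import numpy as np

# Sketch of offline distillation: a small "student" (linear softmax classifier)
# learns to mimic the soft outputs of a frozen "teacher".
# The teacher here is a stand-in random model, not a real pretrained network.
rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

W_teacher = rng.normal(size=(4, 3))            # frozen teacher weights
X = rng.normal(size=(256, 4))                  # unlabeled transfer inputs
teacher_probs = softmax(X @ W_teacher / 2.0)   # temperature 2.0 softens targets

W_student = np.zeros((4, 3))                   # student trained from scratch
lr = 0.5
for _ in range(200):
    student_probs = softmax(X @ W_student)
    # Gradient of cross-entropy between soft targets and student softmax:
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= lr * grad

# KL divergence from teacher to student shrinks as the student learns to mimic.
kl = np.mean(np.sum(teacher_probs * np.log(teacher_probs / softmax(X @ W_student)), axis=1))
```

The student never sees ground-truth labels, only the teacher's probability distributions, which carry richer signal than hard labels alone.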
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
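The thumbs-up loop above can be sketched as a tiny policy-gradient update: a policy over canned replies is reinforced whenever a sampled reply earns simulated positive feedback. The reply list, feedback function and learning rate are all illustrative stand-ins:

```python
import math
import random

# Toy sketch in the RLHF spirit: REINFORCE over a fixed set of candidate
# chatbot replies, where simulated "thumbs up" feedback acts as the reward.
random.seed(0)
replies = ["helpful answer", "off-topic rant", "polite refusal"]
logits = [0.0, 0.0, 0.0]

def probs(ls):
    m = max(ls)
    e = [math.exp(x - m) for x in ls]
    s = sum(e)
    return [x / s for x in e]

def user_feedback(reply: str) -> float:
    # Stand-in for human feedback: users thumbs-up the helpful reply.
    return 1.0 if reply == "helpful answer" else 0.0

lr = 0.3
for _ in range(500):
    p = probs(logits)
    i = random.choices(range(3), weights=p)[0]  # sample a reply from the policy
    r = user_feedback(replies[i])               # collect the reward signal
    for j in range(3):                          # REINFORCE gradient step
        grad = (1.0 if j == i else 0.0) - p[j]
        logits[j] += lr * r * grad

best = replies[probs(logits).index(max(probs(logits)))]
```

A real RLHF pipeline replaces `user_feedback` with a learned reward model trained on human preference data, and the bandit with full sequence generation, but the reinforcement principle is the same.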
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
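Best-of-n is one of the simplest techniques to express in code: sample n candidates, score each, keep the argmax. In this sketch, `generate` and `reward_model` are hypothetical stand-ins for a real LLM sampler and a trained reward model:

```python
import random

# Minimal sketch of best-of-n sampling. No model parameters are modified;
# extra compute is spent purely at generation time.
random.seed(42)

def generate(prompt: str) -> str:
    # Stand-in for one stochastic sample from a language model.
    return f"{prompt} -> draft #{random.randint(1, 1000)}"

def reward_model(text: str) -> float:
    # Stand-in scorer; a real reward model would rate quality or helpfulness.
    return len(text) % 7 + random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=reward_model)  # keep the highest-reward candidate

answer = best_of_n("Explain scaling laws", n=8)
```

Because the base model is untouched, best-of-n can be layered on top of any deployed model, trading inference compute for output quality.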
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.
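Beam search is a common instance of this idea: keep only the top-scoring partial paths at each step instead of committing to a single greedy choice. A minimal sketch, where `expand` and its scores are illustrative stand-ins for a model's per-step candidate scores:

```python
import math

# Sketch of beam search over decision paths. Each partial path branches into
# scored continuations; only the best `beam_width` paths survive each step.
def expand(path):
    # Stand-in expansion: every path has the same three scored continuations.
    return [(path + [step], score) for step, score in [("a", 0.6), ("b", 0.3), ("c", 0.1)]]

def beam_search(depth: int = 3, beam_width: int = 2):
    beams = [([], 0.0)]  # (path, cumulative log-score)
    for _ in range(depth):
        candidates = []
        for path, total in beams:
            for new_path, score in expand(path):
                candidates.append((new_path, total + math.log(score)))
        # Prune to the top-scoring paths before expanding further.
        beams = sorted(candidates, key=lambda t: t[1], reverse=True)[:beam_width]
    return beams[0][0]  # highest-scoring full path

best_path = beam_search(depth=3, beam_width=2)
```

Widening the beam or deepening the search spends more compute to consider more paths, which is exactly the compute-for-quality trade these techniques make.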