
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since come to be described by three distinct laws that capture how applying compute resources in different ways affects model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved, with techniques for applying additional compute across a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires intensive amounts of computational resources to support AI reasoning, which will drive further demand for accelerated computing.
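One common test-time scaling recipe is self-consistency: run several independent reasoning passes over the same problem and keep the final answer they agree on most often. A minimal sketch of that idea (the sampled answers below are hypothetical stand-ins for model outputs):

```python
from collections import Counter

def majority_vote(answers):
    """Pick the most common final answer across independent reasoning passes."""
    return Counter(answers).most_common(1)[0][0]

# Hypothetical final answers from five reasoning passes on one problem.
passes = ["42", "41", "42", "42", "40"]
print(majority_vote(passes))  # -> "42"
```

Spending more passes per query is exactly the inference-time compute this scaling law describes: accuracy improves not by changing the model, but by sampling it more.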
What Is Pretraining Scaling?

Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
Each of these three elements - data, model size, compute - is interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed with more data, the overall performance of the models improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
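This relationship is often expressed as a power law in parameter count N and training-token count D: loss falls predictably as either grows. A sketch in the style of published Chinchilla-type fits - the coefficient values below are illustrative assumptions, not authoritative constants:

```python
def pretraining_loss(n_params, n_tokens,
                     E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Estimate pretraining loss as L = E + A / N^alpha + B / D^beta.

    E is the irreducible loss floor; the other coefficients are illustrative
    values in the style of published scaling-law fits (treat as assumptions).
    """
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling both parameters and data 10x lowers the predicted loss.
small = pretraining_loss(7e9, 1.4e11)
large = pretraining_loss(7e10, 1.4e12)
print(small, large)
```

The key property the law captures is monotone, diminishing returns: each term shrinks toward the floor E as its resource grows, which is why ever-larger training runs keep paying off, but more slowly.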
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. [Figure: Mixture of experts, a popular model architecture for AI training.]

What Is Post-Training Scaling?

Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model input and outputs.
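At its core, fine-tuning on input/output pairs is just continued gradient descent on the new data, starting from the pretrained weights. A toy sketch with a 1-D linear model standing in for the network (the "pretrained" weights and domain pairs are hypothetical):

```python
def fine_tune_step(w, b, x, y_true, lr=0.05):
    """One gradient-descent step on squared error for a single (input, output) pair."""
    err = (w * x + b) - y_true
    return w - lr * err * x, b - lr * err

# Hypothetical domain data the model should adapt to: y = 2x.
pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, b = 0.5, 0.0              # stand-in "pretrained" weights
for _ in range(200):         # a few epochs over the new data
    for x, y in pairs:
        w, b = fine_tune_step(w, b, x, y)
print(w, b)                  # weights drift toward the new domain
```

Real fine-tuning applies the same loop to billions of parameters, which is why even this "cheap" post-training path still drives meaningful compute demand.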
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
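In offline distillation, the student is typically trained to match the teacher's temperature-softened output distribution. A self-contained sketch of that loss (the logits and temperature choice are illustrative assumptions):

```python
import math

def softmax(logits, T=1.0):
    """Convert logits to a probability distribution, softened by temperature T."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """Cross-entropy between softened teacher and student distributions.

    The T*T factor keeps gradient magnitudes comparable across temperatures.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q)) * T * T
```

The loss is smallest when the student reproduces the teacher's full distribution, so the student learns not just the teacher's top answer but its relative confidence across classes - the "dark knowledge" that makes distilled models punch above their size.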
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
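The essence of that feedback loop is simple: each thumbs up or down nudges the model's estimate of how valuable a behavior is. A toy sketch with hypothetical response styles standing in for a full RLHF pipeline:

```python
def update_value(values, style, reward, lr=0.1):
    """Move a style's estimated value toward the observed reward (1 = thumbs up, 0 = down)."""
    values[style] += lr * (reward - values[style])

# Hypothetical user feedback on two response styles.
values = {"concise": 0.0, "verbose": 0.0}
feedback = [("concise", 1), ("verbose", 0), ("concise", 1),
            ("concise", 1), ("verbose", 1)]
for style, reward in feedback:
    update_value(values, style, reward)
print(values)  # "concise" ends with the higher estimated value
```

Full RLHF replaces this table with a learned reward model and updates the LLM's weights via policy optimization, but the same principle applies: behavior that earns higher cumulative reward is reinforced.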
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
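Best-of-n reduces to a single `max` over candidates once a reward model exists. A minimal sketch - the toy reward function below is a hypothetical stand-in for a trained reward model:

```python
def best_of_n(candidates, reward_model):
    """Score each sampled output with the reward model and keep the highest-scoring one."""
    return max(candidates, key=reward_model)

# Toy reward model (assumption): prefers answers with a justification, lightly
# penalizing length. A real reward model would be a trained neural network.
def toy_reward(text):
    return (1.0 if "because" in text else 0.0) - 0.01 * len(text)

samples = ["It is 4.", "It is 4 because 2+2=4.", "Maybe 5 because guessing."]
print(best_of_n(samples, toy_reward))
```

Note the model's own weights never change: all the extra compute goes into sampling and scoring at inference time, which is what makes best-of-n an alternative to reinforcement-learning-based fine-tuning.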
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.