
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires intensive amounts of computational resources to support AI reasoning, which will drive further demand for accelerated computing.
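One intuition behind test-time scaling can be sketched in a few lines: instead of trusting a single inference pass, sample several passes and aggregate their answers. The toy "model" below is an invented stand-in that is right only 70% of the time per pass, yet majority voting over many passes recovers the correct answer far more reliably - everything here (the function names, the 70% figure, the question) is illustrative, not a real LLM pipeline.

```python
import random
from collections import Counter

random.seed(1)

def noisy_model(question: str) -> str:
    # Stand-in for one inference pass: correct ~70% of the time,
    # otherwise a random single-digit wrong answer.
    return "42" if random.random() < 0.7 else str(random.randint(0, 9))

def self_consistency(question: str, n_passes: int = 15) -> str:
    # More passes = more test-time compute = a more reliable final answer.
    answers = [noisy_model(question) for _ in range(n_passes)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 6 x 7?"))
```

The cost of this reliability is exactly the extra inference-time compute the article describes: n passes cost roughly n times as much as one.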
What Is Pretraining Scaling?

Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
Each of these three elements - data, model size, compute - is interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed with more data, the overall performance of the models improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
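The "predictable improvements" the scaling law describes are often written as a loss curve of the form E + A/N^alpha + B/D^beta, where N is parameter count and D is dataset size. The sketch below plugs in made-up constants purely to illustrate the shape of the relationship - the numbers are not fitted values from any real study.

```python
# Illustrative Chinchilla-style scaling curve. All constants are invented
# for demonstration; real values come from fitting measured training runs.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Loss estimate of the form E + A / N^alpha + B / D^beta."""
    E, A, B = 1.7, 400.0, 600.0    # irreducible loss and scale factors (illustrative)
    alpha, beta = 0.34, 0.28       # diminishing-returns exponents (illustrative)
    return E + A / n_params**alpha + B / n_tokens**beta

small = predicted_loss(1e8, 1e10)   # 100M params, 10B tokens
large = predicted_loss(1e10, 1e12)  # 10B params, 1T tokens
print(f"small model loss ~ {small:.3f}, large model loss ~ {large:.3f}")
```

Note the two interlocking terms: growing the model without growing the data (or vice versa) leaves one term dominating, which is why data, model size and compute must scale together.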
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts, depicted above, is a popular model architecture for AI training.

What Is Post-Training Scaling?

Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model inputs and outputs.
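The mechanics of fine-tuning can be shown on a deliberately tiny model: start from "pretrained" weights and continue gradient descent on domain-specific pairs. Everything below - the one-weight linear model, the data and the hyperparameters - is invented to illustrate the idea, not a real fine-tuning recipe.

```python
# Toy fine-tuning sketch: a one-weight model y = w * x whose "pretraining"
# left w = 1.0, adapted to domain data that follows y = 2x.

domain_pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) examples

def mse(w: float) -> float:
    return sum((w * x - y) ** 2 for x, y in domain_pairs) / len(domain_pairs)

def fine_tune(w: float, lr: float = 0.02, steps: int = 200) -> float:
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in domain_pairs) / len(domain_pairs)
        w -= lr * grad
    return w

w_pretrained = 1.0
w_tuned = fine_tune(w_pretrained)
print(f"domain loss before: {mse(w_pretrained):.4f}, after: {mse(w_tuned):.4f}")
```

Real fine-tuning applies the same loop to billions of weights, which is where the additional accelerated-compute demand comes from.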
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
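Offline distillation boils down to nudging the student's output distribution toward the frozen teacher's. A minimal sketch, with invented logits and learning rate: the student's logits are updated with the gradient of the cross-entropy to the teacher's probabilities, which for a softmax output is simply (student probs - teacher probs).

```python
import math

def softmax(logits):
    exps = [math.exp(z - max(logits)) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl(p, q):
    # KL divergence from distribution q to reference p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher_probs = softmax([2.0, 0.5, -1.0])  # frozen pretrained teacher outputs
student_logits = [0.0, 0.0, 0.0]           # lightweight student starts uninformed

for _ in range(500):
    student_probs = softmax(student_logits)
    # Gradient of cross-entropy to the teacher w.r.t. student logits.
    grads = [sp - tp for sp, tp in zip(student_probs, teacher_probs)]
    student_logits = [z - 0.1 * g for z, g in zip(student_logits, grads)]

print(f"final KL to teacher: {kl(teacher_probs, softmax(student_logits)):.5f}")
```

In practice the student matches the teacher's soft outputs across an entire dataset, transferring much of the large model's behavior into a far cheaper one.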
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
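The thumbs-up example above can be sketched as a tiny policy-gradient loop: a toy "policy" over two canned responses gets a REINFORCE-style update whenever the simulated user reacts positively. The responses, the reward function and the learning rate are all invented stand-ins, not a real RLHF pipeline.

```python
import math, random

random.seed(0)
responses = ["helpful answer", "unhelpful answer"]
logits = [0.0, 0.0]  # policy preference for each response

def probs():
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def thumbs_up(response: str) -> float:
    # Stand-in reward model: users like the helpful answer.
    return 1.0 if response == "helpful answer" else 0.0

for _ in range(300):
    p = probs()
    i = random.choices([0, 1], weights=p)[0]  # sample a response
    reward = thumbs_up(responses[i])
    # REINFORCE update: raise the log-probability of rewarded choices.
    for j in range(2):
        grad = (1.0 if j == i else 0.0) - p[j]
        logits[j] += 0.1 * reward * grad

print(f"P(helpful answer) = {probs()[0]:.3f}")
```

Over the loop, positively reinforced choices become more likely - the same mechanism, scaled up, is what aligns a chatbot with human (or, in RLAIF, AI-generated) feedback.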
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
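Best-of-n is one of the simplest of these techniques to sketch: draw several candidates, score each with a reward model, keep the winner. The "generator" and reward function below are toy stand-ins invented for illustration - note that no model parameters change, only the selection over outputs.

```python
import random

random.seed(42)
candidate_pool = [
    "The capital of France is Paris.",
    "France's capital? Probably Paris, I think.",
    "I don't know.",
]

def generate() -> str:
    # Stand-in for sampling one output from a language model.
    return random.choice(candidate_pool)

def reward(text: str) -> float:
    # Toy reward model: prefer confident, complete answers.
    score = 0.0
    if "Paris" in text:
        score += 1.0
    if "Probably" not in text and "don't know" not in text:
        score += 1.0
    return score

def best_of_n(n: int) -> str:
    samples = [generate() for _ in range(n)]
    return max(samples, key=reward)

print(best_of_n(8))
```

Larger n raises the odds of surfacing a high-reward output, at the cost of n times the generation compute.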
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.
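Beam search is a classic instance of exploring multiple decision paths: at each step, keep the k highest-scoring partial sequences instead of committing to a single greedy continuation. The tiny vocabulary and scoring function below are invented for illustration.

```python
# Illustrative beam search over a toy three-token vocabulary.

VOCAB = ["a", "b", "c"]

def score(seq) -> float:
    # Toy scorer: reward alternation between adjacent tokens.
    return sum(1.0 for x, y in zip(seq, seq[1:]) if x != y)

def beam_search(length: int, beam_width: int = 2):
    beams = [[]]
    for _ in range(length):
        # Expand every kept path by every possible next token...
        expanded = [beam + [tok] for beam in beams for tok in VOCAB]
        expanded.sort(key=score, reverse=True)
        # ...then prune back to the best beam_width partial paths.
        beams = expanded[:beam_width]
    return max(beams, key=score)

best = beam_search(4)
print(best, score(best))
```

Widening the beam (or deepening the search) trades more compute for better outputs - the same compute-for-quality trade that runs through all of these post-training and test-time techniques.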