
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems, while describing the steps required to solve a task. Test-time scaling requires intensive amounts of computational resources to support AI reasoning, which will drive further demand for accelerated computing.
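One widely used test-time scaling recipe is self-consistency decoding: sample several reasoning chains from the model and take a majority vote over their final answers. This is only one scheme among several, and the chain answers below are hypothetical stand-ins, but it shows how spending extra inference compute can raise accuracy without touching the model's weights:

```python
from collections import Counter

def self_consistency(answers):
    """Majority vote over final answers from multiple reasoning chains.

    More chains means more inference-time compute - and better odds
    that the most common answer is the correct one.
    """
    return Counter(answers).most_common(1)[0][0]

# Stand-in: final answers parsed from five sampled reasoning chains.
chains = ["42", "41", "42", "7", "42"]
assert self_consistency(chains) == "42"
```

Each additional chain costs a full inference pass, which is why test-time scaling drives demand for accelerated computing.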
What Is Pretraining Scaling?

Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
Each of these three elements - data, model size, compute - is interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed with more data, the overall performance of the models improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
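The relationship the law describes can be sketched numerically. The function below uses the Chinchilla-style form L(N, D) = E + A/N^alpha + B/D^beta; the constants are illustrative values loosely based on published fits, not definitive numbers, and real fits depend on architecture and data:

```python
def pretraining_loss(n_params: float, n_tokens: float,
                     E=1.69, A=406.4, B=410.7,
                     alpha=0.34, beta=0.28) -> float:
    """Predicted pretraining loss from model size and dataset size.

    Chinchilla-style form: loss falls as a power law in both parameter
    count and token count. Constants are illustrative, hedged values.
    """
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling model size and data together predictably lowers the loss.
small = pretraining_loss(1e9, 20e9)     # ~1B params, ~20B tokens
large = pretraining_loss(70e9, 1.4e12)  # ~70B params, ~1.4T tokens
assert large < small
```

The irreducible term E means extra scale yields diminishing, but still predictable, returns.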
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts, depicted above, is a popular model architecture for AI training.

What Is Post-Training Scaling?

Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model inputs and outputs.
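In essence, fine-tuning applies further gradient updates on those input/output pairs. A minimal sketch, with a one-parameter toy model standing in for an LLM and hypothetical domain pairs whose label flips above x = 2:

```python
import math

def predict(w, b, x):
    """Sigmoid output of a one-parameter toy 'model'."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def fine_tune(w, b, pairs, lr=0.5, epochs=200):
    """SGD on (input, target) pairs: nudge 'pretrained' weights toward
    a domain-specific mapping - a toy stand-in for LLM fine-tuning."""
    for _ in range(epochs):
        for x, y in pairs:
            p = predict(w, b, x)
            w -= lr * (p - y) * x  # binary cross-entropy gradient
            b -= lr * (p - y)
    return w, b

# Start from 'pretrained' weights, then adapt to the domain pairs.
w, b = fine_tune(1.0, 0.0, [(1.0, 0), (1.5, 0), (2.5, 1), (3.0, 1)])
assert predict(w, b, 3.0) > 0.5 and predict(w, b, 1.0) < 0.5
```

The same principle - small, targeted updates to already-useful weights - is what makes post-training far cheaper than pretraining from scratch.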
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
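The mechanics of offline distillation can be sketched with plain logits: the teacher is frozen, and the student's temperature-softened output distribution is pulled toward the teacher's by gradient descent. The three-class logits here are hypothetical:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened probabilities; higher T flattens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_step(student_logits, teacher_probs, lr=0.5, T=2.0):
    """One offline-distillation step: the cross-entropy gradient pulls
    the student's softened outputs toward the frozen teacher's
    (the 1/T factor in the gradient is folded into lr)."""
    p = softmax(student_logits, T)
    return [z - lr * (pi - ti)
            for z, pi, ti in zip(student_logits, p, teacher_probs)]

# Frozen teacher; the student starts uniform and learns to mimic it.
teacher = softmax([3.0, 1.0, 0.2], T=2.0)
student = [0.0, 0.0, 0.0]
for _ in range(400):
    student = distill_step(student, teacher)
mimic = softmax(student, T=2.0)
assert abs(mimic[0] - teacher[0]) < 0.05
```

Softening with temperature exposes the teacher's relative confidence across classes, which carries more signal than hard labels alone.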
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
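The thumbs-up example can be sketched as a softmax policy-gradient bandit. The "like" probabilities below are invented stand-ins for human feedback, and a learned reward model would replace them in a real RLHF pipeline:

```python
import math, random

def softmax(h):
    exps = [math.exp(x) for x in h]
    total = sum(exps)
    return [e / total for e in exps]

def thumbs_up(style):
    """Stand-in for human feedback: users 'like' style 2 most often."""
    return 1.0 if random.random() < (0.1, 0.3, 0.9)[style] else 0.0

random.seed(0)
h = [0.0, 0.0, 0.0]          # preferences over three response styles
lr, baseline = 0.1, 0.5
for _ in range(3000):
    probs = softmax(h)
    a = random.choices(range(3), weights=probs)[0]
    advantage = thumbs_up(a) - baseline
    for i in range(3):  # softmax policy-gradient update
        grad = (1.0 - probs[i]) if i == a else -probs[i]
        h[i] += lr * advantage * grad
assert max(range(3), key=h.__getitem__) == 2  # best-rated style wins
```

Over many interactions, the policy shifts probability mass toward the response style that maximizes cumulative reward.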
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
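The mechanism is simple enough to sketch directly. The candidate texts and the concision-based reward model below are toy stand-ins; in practice the candidates come from sampling the LLM with temperature, and the reward model is itself a learned network:

```python
def best_of_n(candidates, reward_model):
    """Return the candidate the reward model scores highest.

    No model parameters change; quality improves purely by sampling
    more candidates and keeping the best-scored one.
    """
    return max(candidates, key=reward_model)

# Stand-ins: candidates an LLM might sample, plus a toy reward model
# that prefers concise answers (real reward models are learned).
candidates = [
    "The answer is basically like 42.",
    "The answer is 42.",
    "The answer is, you know, 42 I think.",
]

def concision_reward(text):
    return -len(text)

assert best_of_n(candidates, concision_reward) == "The answer is 42."
```

Raising n trades more inference compute for a better expected reward, which is why best-of-n is also a form of test-time scaling.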
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.