
Just as there are widely understood empirical laws of nature - for example, what goes up must come down, or every action has an equal and opposite reaction - the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.
However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws - pretraining scaling, post-training scaling and test-time scaling, also called long thinking - reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.
The recent rise of test-time scaling - applying more compute at inference time to improve accuracy - has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems while describing the steps required to solve a task. Test-time scaling requires significant computational resources to support AI reasoning, which will drive further demand for accelerated computing.
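One simple way to picture multiple inference passes is majority voting: run the model several times on the same problem and keep the most common answer, trading extra inference compute for accuracy. The sketch below is a toy illustration, not a real reasoning model - `model_answer` is a hypothetical stub that is right only 60% of the time.

```python
import random
from collections import Counter

# Test-time scaling sketch: instead of one inference pass, run the "model"
# several times and take a majority vote over the answers. The model here
# is a stub that answers correctly only 60% of the time; the vote makes
# the aggregate answer far more reliable than any single pass.

random.seed(7)

def model_answer(question: str) -> int:
    """Hypothetical stub model: returns the right answer (42) 60% of the time."""
    return 42 if random.random() < 0.6 else random.choice([41, 43, 44])

def majority_vote(question: str, n_passes: int) -> int:
    """Run n_passes independent inference passes and keep the modal answer."""
    votes = Counter(model_answer(question) for _ in range(n_passes))
    return votes.most_common(1)[0][0]

print(majority_vote("What is 6 x 7?", n_passes=25))
```

The single-pass model is wrong 40% of the time, but the 25-pass vote is almost always right - the accuracy gain is paid for entirely with extra inference-time compute, which is the essence of test-time scaling.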
What Is Pretraining Scaling?
Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.
Each of these three elements - data, model size, compute - is interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed with more data, the overall performance of the models improves. To make this feasible, developers must scale up their compute - creating the need for powerful accelerated computing resources to run those larger training workloads.
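The predictability is the point: pretraining scaling laws are often written as a power law relating test loss to model size, roughly L(N) = (N_c / N)^alpha. The sketch below uses constants in the spirit of early scaling-law papers, but treat both the constants and the formula as purely illustrative:

```python
# Illustrative pretraining scaling law: test loss falls as a power law in
# parameter count, L(N) = (N_c / N) ** alpha. The constants below are
# placeholders for illustration, not measured values for any real model.

def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Predicted test loss for a model with n_params parameters."""
    return (n_c / n_params) ** alpha

# Bigger models yield predictably lower loss under the same law.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.3f}")
```

The practical value of such a fit is planning: given a compute budget, a developer can estimate the loss a larger model should reach before committing to the training run.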
This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques - all demanding significant compute.
And the relevance of the pretraining scaling law continues - as humans continue to produce growing amounts of multimodal data, this trove of text, images, audio, video and sensor information will be used to train powerful future AI models.
Pretraining scaling is the foundational principle of AI development, linking the size of models, datasets and compute to AI gains. Mixture of experts, depicted above, is a popular model architecture for AI training.

What Is Post-Training Scaling?
Pretraining a large foundation model isn't for everyone - it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, it lowers the barrier to AI adoption by enabling others to use the pretrained model as a foundation to adapt for their own applications.
This post-training process drives additional cumulative demand for accelerated computing across enterprises and the broader developer community. Popular open-source models can have hundreds or thousands of derivative models, trained across numerous domains.
Developing this ecosystem of derivative models for a variety of use cases could take around 30x more compute than pretraining the original foundation model.
Post-training techniques can further improve a model's specificity and relevance for an organization's desired use case. While pretraining is like sending an AI model to school to learn foundational skills, post-training enhances the model with skills applicable to its intended job. An LLM, for example, could be post-trained to tackle a task like sentiment analysis or translation - or understand the jargon of a specific domain, like healthcare or law.
The post-training scaling law posits that a pretrained model's performance can further improve - in computational efficiency, accuracy or domain specificity - using techniques including fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
Fine-tuning uses additional training data to tailor an AI model for specific domains and applications. This can be done using an organization's internal datasets, or with pairs of sample model inputs and outputs.
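As a toy illustration of the idea - not a real LLM workflow - the sketch below takes a "pretrained" one-parameter model and fine-tunes it on a handful of hypothetical domain-specific input/output pairs with plain gradient descent:

```python
# Minimal fine-tuning sketch (toy one-parameter model, not a real LLM):
# start from "pretrained" weights and take gradient steps on a small,
# domain-specific set of (input, output) pairs.

pretrained_w = 1.0  # weight learned during "pretraining"
domain_pairs = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # in-domain examples, y ~ 2x

def fine_tune(w: float, pairs, lr: float = 0.01, steps: int = 200) -> float:
    """Gradient descent on mean squared error over the domain pairs."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in pairs) / len(pairs)
        w -= lr * grad
    return w

tuned_w = fine_tune(pretrained_w, domain_pairs)
print(f"weight moved from {pretrained_w} to {tuned_w:.2f}")  # moves toward ~2
```

The pretrained weight is a reasonable starting point, and a small amount of in-domain data pulls it toward the behavior the target application needs - the same intuition, at a vastly smaller scale, as fine-tuning a foundation model.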
Distillation requires a pair of AI models: a large, complex teacher model and a lightweight student model. In the most common distillation technique, called offline distillation, the student model learns to mimic the outputs of a pretrained teacher model.
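A minimal sketch of offline distillation, with toy models standing in for real networks: the frozen "teacher" emits soft outputs, and a smaller "student" is fitted to mimic them on unlabeled inputs. All functions and constants here are illustrative assumptions.

```python
import math

# Offline-distillation sketch (toy models, not real networks): a frozen
# teacher produces soft probabilities, and a one-parameter student is
# trained to reproduce them on unlabeled inputs.

def teacher(x: float) -> float:
    """Frozen pretrained teacher: soft probability for input x."""
    return 1.0 / (1.0 + math.exp(-2.0 * x))

def student(w: float, x: float) -> float:
    """Lightweight student with a single learnable weight w."""
    return 1.0 / (1.0 + math.exp(-w * x))

def distill(inputs, w: float = 0.0, lr: float = 1.0, steps: int = 2000) -> float:
    """Fit the student to the teacher's soft outputs by gradient descent on
    squared error (cross-entropy is more common; squared error keeps the
    math short)."""
    for _ in range(steps):
        grad = 0.0
        for x in inputs:
            p, q = teacher(x), student(w, x)
            grad += 2 * (q - p) * q * (1 - q) * x  # d/dw of (q - p)^2
        w -= lr * grad / len(inputs)
    return w

student_w = distill([-2.0, -1.0, 0.5, 1.0, 2.0])
print(f"student weight after distillation: {student_w:.2f}")
```

Note that the student never sees ground-truth labels - only the teacher's soft outputs - which is what distinguishes distillation from ordinary supervised fine-tuning.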
Reinforcement learning, or RL, is a machine learning technique that uses a reward model to train an agent to make decisions that align with a specific use case. The agent aims to make decisions that maximize cumulative rewards over time as it interacts with an environment - for example, a chatbot LLM that is positively reinforced by thumbs up reactions from users. This technique is known as reinforcement learning from human feedback (RLHF). Another, newer technique, reinforcement learning from AI feedback (RLAIF), instead uses feedback from AI models to guide the learning process, streamlining post-training efforts.
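The cumulative-reward loop can be illustrated with a multi-armed bandit, which is far simpler than a full RLHF pipeline but shows the same exploration/exploitation dynamic. Everything below - the reply styles and their hidden thumbs-up probabilities - is invented for illustration:

```python
import random

# Toy reward-maximization sketch: the "chatbot" picks one of three reply
# styles, users give a thumbs-up with a hidden per-style probability, and
# an epsilon-greedy agent learns which style maximizes cumulative reward.

random.seed(0)
thumbs_up_prob = {"terse": 0.2, "friendly": 0.8, "formal": 0.5}  # hidden from agent

value = {style: 0.0 for style in thumbs_up_prob}  # estimated reward per style
counts = {style: 0 for style in thumbs_up_prob}

for step in range(2000):
    # epsilon-greedy: mostly exploit the best-known style, sometimes explore
    if random.random() < 0.1:
        style = random.choice(list(thumbs_up_prob))
    else:
        style = max(value, key=value.get)
    reward = 1.0 if random.random() < thumbs_up_prob[style] else 0.0
    counts[style] += 1
    value[style] += (reward - value[style]) / counts[style]  # running mean

best = max(value, key=value.get)
print(f"learned best style: {best}")
```

Real RLHF replaces the thumbs-up table with a learned reward model and the bandit update with policy-gradient training of the LLM itself, but the objective - maximize cumulative reward from feedback - is the same.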
Best-of-n sampling generates multiple outputs from a language model and selects the one with the highest reward score based on a reward model. It's often used to improve an AI's outputs without modifying model parameters, offering an alternative to fine-tuning with reinforcement learning.
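The procedure itself is short enough to sketch directly. Both the language model and the reward model below are hypothetical stubs (a random candidate picker and a length-based scorer), standing in for real models:

```python
import random

# Best-of-n sampling sketch: draw n candidate outputs from a (stubbed)
# language model, score each with a (stubbed) reward model, and return
# the highest-scoring one. No model parameters are modified.

random.seed(42)
CANDIDATE_POOL = [
    "Sure, here's a short answer.",
    "I don't know.",
    "Here is a detailed, step-by-step answer with sources.",
    "lol",
]

def sample_model(prompt: str) -> str:
    """Stub LLM: returns a random candidate completion."""
    return random.choice(CANDIDATE_POOL)

def reward_model(prompt: str, completion: str) -> float:
    """Stub reward model: longer, more detailed completions score higher."""
    return float(len(completion))

def best_of_n(prompt: str, n: int = 8) -> str:
    """Sample n completions and keep the one the reward model likes most."""
    candidates = [sample_model(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: reward_model(prompt, c))

print(best_of_n("Explain scaling laws."))
```

Because only sampling and scoring are involved, best-of-n improves output quality at inference time without any gradient updates - the trade-off is n times the generation cost per response.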
Search methods explore a range of potential decision paths before selecting a final output. This post-training technique can iteratively improve the model's responses.
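Beam search is one concrete example of such a method: it keeps several partial decision paths alive at each step and prunes all but the top-scoring few. The token table and scores below are invented for illustration:

```python
# Tiny beam search sketch: explore several decision paths at once,
# keeping only the top-k partial paths at each step. The vocabulary
# and log-probabilities below are made up for illustration.

# log-probability of each next token given the previous token (toy table)
STEP_SCORES = {
    None: {"the": -0.4, "a": -1.1},
    "the": {"cat": -0.3, "dog": -0.9},
    "a": {"cat": -0.5, "dog": -0.6},
    "cat": {"sat": -0.2, "ran": -1.0},
    "dog": {"sat": -0.8, "ran": -0.4},
}

def beam_search(steps: int = 3, beam_width: int = 2):
    """Expand every surviving path by one token, then prune to beam_width."""
    beams = [([], 0.0)]  # (path, cumulative log-prob)
    for _ in range(steps):
        candidates = []
        for path, score in beams:
            last = path[-1] if path else None
            for token, logp in STEP_SCORES[last].items():
                candidates.append((path + [token], score + logp))
        # keep only the highest-scoring partial paths
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0]

path, score = beam_search()
print(" ".join(path), round(score, 2))  # "the cat sat" with score -0.9
```

A beam width of 1 collapses to greedy decoding; widening the beam spends more compute to consider paths a greedy choice would have discarded, which is the general shape of search-based improvement.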