
As AI models evolve and adoption grows, enterprises must perform a delicate balancing act to achieve maximum value.
That's because inference - the process of running data through a model to get an output - presents a different computational challenge than training a model.
Pretraining a model - the process of ingesting data, breaking it down into tokens and finding patterns - is essentially a one-time cost. But in inference, every prompt to a model generates tokens, each of which incurs a cost.
That means that as AI model performance and use increase, so do the number of tokens generated and their associated computational costs. For companies looking to build AI capabilities, the key is generating as many tokens as possible - with maximum speed, accuracy and quality of service - without sending computational costs skyrocketing.
As such, the AI ecosystem has been working to make inference cheaper and more efficient. Inference costs have been trending down over the past year thanks to major leaps in model optimization and to increasingly advanced, energy-efficient accelerated computing infrastructure and full-stack solutions.
According to the Stanford University Institute for Human-Centered AI's 2025 AI Index Report, the inference cost for a system performing at the level of GPT-3.5 dropped over 280-fold between November 2022 and October 2024. At the hardware level, costs have declined by 30% annually, while energy efficiency has improved by 40% each year. Open-weight models are also closing the gap with closed models, reducing the performance difference from 8% to just 1.7% on some benchmarks in a single year. Together, these trends are rapidly lowering the barriers to advanced AI.
As models evolve, drive more demand and generate more tokens, enterprises need to scale their accelerated computing resources to deliver the next generation of AI reasoning tools - or risk rising costs and energy consumption.
What follows is a primer on the core concepts of the economics of inference, so enterprises can position themselves to achieve efficient, cost-effective and profitable AI solutions at scale.
Key Terminology for the Economics of AI Inference

Knowing key terms of the economics of inference helps set the foundation for understanding its importance.
Tokens are the fundamental unit of data in an AI model. They're derived during training from data such as text, images, audio clips and videos. Through a process called tokenization, each piece of data is broken down into smaller constituent units. During training, the model learns the relationships between tokens so it can perform inference and generate an accurate, relevant output.
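To make this concrete, here is a minimal tokenization sketch in Python. It uses the open-source tiktoken library as an illustrative tokenizer - the article doesn't name a specific one - to show how a sentence breaks down into the token IDs a model actually processes.

```python
# A minimal tokenization sketch using the open-source tiktoken library
# (an illustrative choice; the article does not name a specific tokenizer).
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # a common BPE vocabulary

text = "Inference turns prompts into tokens."
token_ids = encoding.encode(text)                    # text -> integer token IDs
tokens = [encoding.decode([t]) for t in token_ids]   # inspect each token's text

print(token_ids)  # a short list of integers
print(tokens)     # the constituent sub-word pieces the model actually sees
```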
Throughput refers to the amount of data - typically measured in tokens - that the model can output in a specific amount of time, which itself is a function of the infrastructure running the model. Throughput is often measured in tokens per second, with higher throughput meaning greater return on infrastructure.
Latency is a measure of the amount of time between inputting a prompt and the start of the model's response. Lower latency means faster responses. The two main ways of measuring latency are:
Time to First Token: A measurement of the initial processing time required by the model to generate its first output token after a user prompt.
Time per Output Token: The average time between consecutive tokens - or the time it takes to generate a completion token for each user querying the model at the same time. It's also known as inter-token latency or token-to-token latency. A minimal measurement sketch for both metrics, along with throughput, follows this list.
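The sketch below assumes a hypothetical stream_tokens generator that yields output tokens as they're produced; from the arrival timestamps it computes time to first token, time per output token and throughput for a single request.

```python
import time

def measure_inference_metrics(stream_tokens, prompt):
    """Measure time to first token (TTFT), time per output token (TPOT)
    and throughput for one streamed request. `stream_tokens` is a
    hypothetical generator that yields output tokens one at a time."""
    start = time.perf_counter()
    # Record the wall-clock time at which each output token arrives.
    arrival_times = [time.perf_counter() for _ in stream_tokens(prompt)]

    if not arrival_times:
        return None

    total_tokens = len(arrival_times)
    ttft = arrival_times[0] - start                                # time to first token
    tpot = (arrival_times[-1] - arrival_times[0]) / max(total_tokens - 1, 1)  # inter-token latency
    throughput = total_tokens / (arrival_times[-1] - start)        # tokens per second

    return {
        "total_tokens": total_tokens,
        "ttft_s": ttft,
        "tpot_s": tpot,
        "tokens_per_s": throughput,
    }
```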
Time to first token and time per output token are helpful benchmarks, but they're just two pieces of a larger equation. Focusing solely on them can still lead to degraded performance or rising costs.
To account for other interdependencies, IT leaders are starting to measure goodput, which is defined as the throughput achieved by a system while maintaining target time to first token and time per output token levels. This metric allows organizations to evaluate performance in a more holistic manner, ensuring that throughput, latency and cost are aligned to support both operational efficiency and an exceptional user experience.
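A minimal sketch of how goodput might be computed from per-request measurements follows; the latency targets are illustrative placeholders, not recommended service levels.

```python
# A minimal goodput sketch, assuming per-request metrics were collected as in
# the measurement example above. The latency targets below are illustrative.
TTFT_TARGET_S = 0.5    # target time to first token, in seconds
TPOT_TARGET_S = 0.05   # target time per output token, in seconds

def goodput(request_metrics, window_seconds):
    """Tokens per second delivered only by requests that met both latency
    targets, over a measurement window of `window_seconds`."""
    good_tokens = sum(
        m["total_tokens"]
        for m in request_metrics
        if m["ttft_s"] <= TTFT_TARGET_S and m["tpot_s"] <= TPOT_TARGET_S
    )
    return good_tokens / window_seconds
```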
Energy efficiency is the measure of how effectively an AI system converts power into computational output, expressed as performance per watt. By using accelerated computing platforms, organizations can maximize tokens per watt while minimizing energy consumption.
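As a simple illustration, performance per watt can be computed directly from measured throughput and average power draw; the figures below are placeholders, not benchmark results.

```python
# Performance per watt, assuming measured throughput and average power draw.
# The numbers are placeholders, not benchmark results.
tokens_per_second = 12_000.0   # measured system throughput
average_power_watts = 700.0    # measured power draw of the serving node

tokens_per_second_per_watt = tokens_per_second / average_power_watts
print(f"{tokens_per_second_per_watt:.1f} tokens per second per watt")
```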
How the Scaling Laws Apply to Inference Cost

The three AI scaling laws are also core to understanding the economics of inference:
Pretraining scaling: The original scaling law that demonstrated that by increasing training dataset size, model parameter count and computational resources, models can achieve predictable improvements in intelligence and accuracy.
Post-training: A process where models are fine-tuned for accuracy and specificity so they can be applied to application development. Techniques like retrieval-augmented generation can be used to return more relevant answers from an enterprise database.
Test-time scaling (aka long thinking or reasoning): A technique by which models allocate additional computational resources during inference to evaluate multiple possible outcomes before arriving at the best answer. One common pattern is sketched after this list.
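As a concrete illustration, here is a minimal best-of-N sampling sketch - one common test-time scaling pattern, not a method prescribed by the article - in which generate and score are hypothetical stand-ins for a model's sampling call and an answer verifier.

```python
# A minimal best-of-N sketch of test-time scaling. `generate` and `score`
# are hypothetical stand-ins for a model's sampling call and an answer
# verifier; the article does not prescribe a specific technique.
def best_of_n(generate, score, prompt, n=8):
    """Spend extra inference-time compute by sampling n candidate answers
    and returning the highest-scoring one."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)
```

Spending n times the inference compute per query in this way is exactly the trade-off the article describes: more tokens per answer in exchange for better answers.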
Even as AI evolves and post-training and test-time scaling techniques become more sophisticated, pretraining isn't disappearing - it remains an important way to scale models and will still be needed to support post-training and test-time scaling.
Profitable AI Takes a Full-Stack Approach

Compared with inference from a model that's only gone through pretraining and post-training, models that harness test-time scaling generate multiple tokens to solve a complex problem. This results in more accurate and relevant model outputs - but it also consumes significantly more compute, driving up inference costs.