
As AI models evolve and adoption grows, enterprises must perform a delicate balancing act to achieve maximum value.
That's because inference - the process of running data through a model to get an output - presents a different computational challenge than training a model.
Pretraining a model - the process of ingesting data, breaking it down into tokens and finding patterns - is essentially a one-time cost. But in inference, every prompt to a model generates tokens, each of which incurs a cost.
That means that as AI model performance and use increase, so do the number of tokens generated and their associated computational costs. For companies looking to build AI capabilities, the key is generating as many tokens as possible - with maximum speed, accuracy and quality of service - without sending computational costs skyrocketing.
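The relationship between tokens and cost can be made concrete with a small sketch. The per-1,000-token prices below are illustrative placeholders, not real provider rates:

```python
# Hypothetical per-token pricing: inference cost scales linearly with
# the number of tokens processed and generated.
def inference_cost(prompt_tokens: int, output_tokens: int,
                   price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Total cost of one request under illustrative per-1k-token prices."""
    return ((prompt_tokens / 1000) * price_in_per_1k
            + (output_tokens / 1000) * price_out_per_1k)

# A 500-token prompt producing 1,500 output tokens, priced at
# $0.50 per 1k input tokens and $1.50 per 1k output tokens:
cost = inference_cost(500, 1500, 0.50, 1.50)  # 0.25 + 2.25 = 2.50
```

Because output tokens typically dominate, longer responses - and reasoning models that generate many intermediate tokens - drive cost up fastest.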
As such, the AI ecosystem has been working to make inference cheaper and more efficient. Inference costs have been trending down for the past year thanks to major leaps in model optimization and to increasingly advanced, energy-efficient accelerated computing infrastructure and full-stack solutions.
According to the Stanford University Institute for Human-Centered AI's 2025 AI Index Report, the inference cost for a system performing at the level of GPT-3.5 dropped over 280-fold between November 2022 and October 2024. At the hardware level, costs have declined by 30% annually, while energy efficiency has improved by 40% each year. Open-weight models are also closing the gap with closed models, reducing the performance difference from 8% to just 1.7% on some benchmarks in a single year. Together, these trends are rapidly lowering the barriers to advanced AI.
As models evolve, generating more demand and creating more tokens, enterprises need to scale their accelerated computing resources to deliver the next generation of AI reasoning tools or risk rising costs and energy consumption.
What follows is a primer on the core concepts of the economics of inference. By understanding them, enterprises can position themselves to achieve efficient, cost-effective and profitable AI solutions at scale.
Key Terminology for the Economics of AI Inference

Knowing key terms of the economics of inference helps set the foundation for understanding its importance.
Tokens are the fundamental unit of data in an AI model. They're derived from data during training as text, images, audio clips and videos. Through a process called tokenization, each piece of data is broken down into smaller constituent units. During training, the model learns the relationships between tokens so it can perform inference and generate an accurate, relevant output.
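Tokenization can be sketched with a toy greedy longest-match tokenizer. The tiny vocabulary below is purely illustrative; production models use learned BPE or SentencePiece vocabularies with tens to hundreds of thousands of entries:

```python
# Toy vocabulary for illustration only - real tokenizers learn theirs from data.
VOCAB = {"infer", "ence", "token", "ization", "ai", " ", "s"}

def tokenize(text: str) -> list[str]:
    """Greedy longest-match tokenization over a fixed vocabulary."""
    tokens, i = [], 0
    text = text.lower()
    while i < len(text):
        # Try the longest vocabulary entry that matches at position i.
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in VOCAB:
                tokens.append(piece)
                i += length
                break
        else:
            # No vocabulary match: fall back to a single character.
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("inference tokenization"))
# → ['infer', 'ence', ' ', 'token', 'ization']
```

Note that one word often becomes several tokens - which is why token counts, not word counts, determine inference cost.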
Throughput refers to the amount of data - typically measured in tokens - that the model can output in a specific amount of time, which itself is a function of the infrastructure running the model. Throughput is often measured in tokens per second, with higher throughput meaning greater return on infrastructure.
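The link between throughput and return on infrastructure can be sketched directly: at a fixed hourly infrastructure cost, every extra token per second lowers the cost per token. The figures below are illustrative assumptions, not real prices:

```python
def tokens_per_sec(total_output_tokens: int, wall_clock_seconds: float) -> float:
    """Throughput: tokens generated per second across all concurrent requests."""
    return total_output_tokens / wall_clock_seconds

def cost_per_million_tokens(throughput_tps: float, infra_cost_per_hour: float) -> float:
    """Serving cost per million tokens at a given throughput and hourly infra cost."""
    tokens_per_hour = throughput_tps * 3600
    return infra_cost_per_hour / tokens_per_hour * 1_000_000

# Hypothetical system: 800 tokens/s on infrastructure costing $40/hour.
print(cost_per_million_tokens(800, 40.0))  # ≈ $13.89 per million tokens
```

Doubling throughput on the same hardware halves the cost per token - which is why throughput optimization is central to inference economics.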
Latency is a measure of the amount of time between inputting a prompt and the start of the model's response. Lower latency means faster responses. The two main ways of measuring latency are:
Time to First Token: A measurement of the initial processing time required by the model to generate its first output token after a user prompt.
Time per Output Token: The average time between consecutive tokens - or the time it takes to generate a completion token for each user querying the model at the same time. It's also known as inter-token latency or token-to-token latency.
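Both metrics fall out of per-token timestamps. A minimal sketch, assuming a server records the request arrival time and the emission time of each output token:

```python
def latency_metrics(request_time: float, token_times: list[float]) -> tuple[float, float]:
    """Return (time to first token, mean time per output token) in seconds.

    request_time: when the prompt arrived; token_times: when each output
    token was emitted, in order.
    """
    ttft = token_times[0] - request_time
    # Inter-token latency: average gap between consecutive output tokens.
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    tpot = sum(gaps) / len(gaps)
    return ttft, tpot

# Prompt arrives at t=0; four tokens emitted at 0.35 s, 0.40 s, 0.45 s, 0.50 s.
ttft, tpot = latency_metrics(0.0, [0.35, 0.40, 0.45, 0.50])
# ttft = 0.35 s, tpot = 0.05 s (i.e. 20 tokens/s per stream once generation starts)
```

Time to first token is dominated by prompt processing (prefill), while time per output token reflects the speed of the generation (decode) phase.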
Time to first token and time per output token are helpful benchmarks, but they're just two pieces of a larger equation. Focusing solely on them can still lead to degraded performance or rising costs.
To account for other interdependencies, IT leaders are starting to measure goodput, which is defined as the throughput achieved by a system while maintaining target time to first token and time per output token levels. This metric allows organizations to evaluate performance in a more holistic manner, ensuring that throughput, latency and cost are aligned to support both operational efficiency and an exceptional user experience.
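Goodput can be sketched as throughput counted only over requests that met their latency targets. The request records and service-level targets below are hypothetical:

```python
def goodput(requests: list[dict], ttft_slo: float, tpot_slo: float,
            window_seconds: float) -> float:
    """Tokens/s counting only requests that met both latency targets.

    Each request dict records its output token count, observed time to
    first token (ttft) and time per output token (tpot), in seconds.
    """
    good_tokens = sum(r["output_tokens"] for r in requests
                      if r["ttft"] <= ttft_slo and r["tpot"] <= tpot_slo)
    return good_tokens / window_seconds

reqs = [
    {"output_tokens": 600, "ttft": 0.30, "tpot": 0.04},  # meets both targets
    {"output_tokens": 900, "ttft": 0.80, "tpot": 0.04},  # TTFT too slow: excluded
]
print(goodput(reqs, ttft_slo=0.5, tpot_slo=0.05, window_seconds=10.0))  # → 60.0
```

In this example raw throughput is 150 tokens/s, but goodput is only 60 tokens/s - the 900 tokens delivered too slowly don't count toward the user experience the system promised.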
Energy efficiency is the measure of how effectively an AI system converts power into computational output, expressed as performance per watt. By using accelerated computing platforms, organizations can maximize tokens per watt while minimizing energy consumption.
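Performance per watt ties directly back to cost: at a fixed power draw, more tokens per second means less energy per token. A small sketch with illustrative numbers:

```python
def tokens_per_watt(throughput_tps: float, power_watts: float) -> float:
    """Performance per watt: sustained token throughput per watt drawn."""
    return throughput_tps / power_watts

def kwh_per_million_tokens(power_watts: float, throughput_tps: float) -> float:
    """Energy consumed per million tokens generated."""
    joules_per_token = power_watts / throughput_tps  # W / (tokens/s) = J/token
    return joules_per_token * 1_000_000 / 3.6e6      # 1 kWh = 3.6e6 J

# Hypothetical accelerator: 800 tokens/s at 700 W sustained draw.
print(tokens_per_watt(800, 700))         # ≈ 1.14 tokens/s per watt
print(kwh_per_million_tokens(700, 800))  # ≈ 0.24 kWh per million tokens
```

Improving either throughput at constant power or power at constant throughput lowers the energy bill per token generated.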
How the Scaling Laws Apply to Inference Cost

The three AI scaling laws are also core to understanding the economics of inference:
Pretraining scaling: The original scaling law that demonstrated that by increasing training dataset size, model parameter count and computational resources, models can achieve predictable improvements in intelligence and accuracy.
Post-training: A process where models are fine-tuned for accuracy and specificity so they can be applied to application development. Techniques like retrieval-augmented generation can be used to return more relevant answers from an enterprise database.
Test-time scaling (aka long thinking or reasoning): A technique by which models allocate additional computational resources during inference to evaluate multiple possible outcomes before arriving at the best answer.
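One common form of test-time scaling is best-of-n sampling: generate several candidate answers, then keep the one a scoring function ranks highest. In the sketch below, `generate` and `score` are hypothetical stand-ins for a real model and a verifier or reward model:

```python
import random

def generate(prompt: str, rng: random.Random) -> str:
    """Hypothetical model call: returns one sampled candidate answer."""
    return f"answer-{rng.randint(0, 9)}"

def score(prompt: str, answer: str) -> float:
    """Hypothetical verifier: scores a candidate answer for the prompt."""
    return float(answer.split("-")[1])

def best_of_n(prompt: str, n: int, seed: int = 0) -> str:
    """Sample n candidates and return the highest-scoring one.

    This spends roughly n times the tokens (and cost) of a single
    completion in exchange for a better final answer - the core
    trade-off of test-time scaling.
    """
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))
```

The economics follow directly: each extra candidate adds output tokens, so quality gains from test-time scaling come at a linear (or worse) token cost.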
Even as AI evolves and post-training and test-time scaling techniques become more sophisticated, pretraining isn't disappearing and remains an important way to scale models. Pretraining will still be needed to support post-training and test-time scaling.
Profitable AI Takes a Full-Stack Approach

In comparison to inference from a model that's only gone through pretraining and post-training, models that harness test-time scaling generate multiple tokens to solve a complex problem. This results in more accurate and relevant model outputs - but at a significantly higher computational cost.