
Traditional data centers only stored, retrieved and processed data. In the generative and agentic AI era, these facilities have evolved into AI token factories. With AI inference becoming their primary workload, their primary output is intelligence manufactured in the form of tokens.
This transformation demands a corresponding shift in how the economics of AI infrastructure, including total cost of ownership (TCO), is assessed. Enterprises evaluating AI infrastructure still too often focus on peak chip specifications, compute cost or floating point operations per second for every dollar spent, aka FLOPS per dollar.
The distinction that matters is this:
Compute cost is what enterprises pay for AI infrastructure, whether rented from cloud providers or owned on premises.
FLOPS per dollar is how much raw computing power an enterprise gets for every dollar spent, but raw compute and real-world token output are not the same thing.
Cost per token is an enterprise's all-in cost to produce each delivered token, usually represented as cost per million tokens.
The first two are merely input metrics. Optimizing for inputs while the business runs on output is a fundamental mismatch.
Cost per token determines whether enterprises can profitably scale AI. It's the one TCO metric that directly accounts for hardware performance, software optimization, ecosystem support and real-world utilization - and NVIDIA delivers the lowest cost per token in the industry.
What Are the Factors That Lower Token Cost?
Understanding how to optimize token cost requires looking at the equation for calculating cost per million tokens.
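Written out, the equation takes the following form (a plain reconstruction based on the numerator and denominator discussed below, not necessarily NVIDIA's exact published formulation):

    Cost per Million Tokens = Cost per GPU per Hour / (Delivered Tokens per Second per GPU × 3,600) × 1,000,000

The factor of 3,600 converts the hourly GPU cost into a per-second figure, so the fraction is the cost of each delivered token, scaled up to a million tokens.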
In this equation, many enterprises evaluating AI infrastructure focus on the numerator: the cost per GPU per hour. For cloud deployments, this is the hourly rate paid to a cloud provider; for on-premises deployments, it's the effective hourly cost derived from amortizing owned infrastructure. The real key to reducing token cost, however, lies in the denominator: maximizing the delivered token output.
That denominator carries two business implications.
Minimize token cost: When an increase in token output flows through the cost equation, it drives down cost per token, which grows the profit margin on every interaction served.
Maximize revenue: More tokens delivered per second also translates to more tokens per megawatt, which means more intelligence to use in AI-powered products and services, generating more revenue from the same infrastructure investment.
So focusing only on the numerator means missing what drives the denominator. Think of it as an inference iceberg: the numerator sits above the surface, visible and easy to compare, while the denominator is everything beneath the surface, the key factors that determine real-world token output. Accurately evaluating AI infrastructure starts with asking what lies beneath.
Surface-level inquiry:
What is the cost per GPU hour?
What are the peak petaflops and high-bandwidth memory capacity?
What are the FLOPS per dollar?
In-depth cost analysis:
What is the cost per million tokens? Specifically, what is the cost per million tokens for large-scale mixture-of-experts (MoE) reasoning models, which represent the most widely deployed class of AI models?
What is the delivered token output per megawatt? For on-premises deployments especially, where capital commitment to land, power and infrastructure is substantial, maximizing intelligence produced per megawatt is critical.
Can the scale-up interconnect handle the all-to-all traffic of MoE models?
Is FP4 precision supported? Can the inference stack make use of FP4 while maintaining high accuracy?
Does the inference runtime support speculative decoding or multi-token prediction to increase user interactivity?
Does the serving layer support disaggregated serving, KV-aware routing, KV-cache offloading and other optimizations?
Does the platform support the unique workload requirements of agentic AI - including ultralow latency, high throughput and large input sequence lengths?
Does the platform support the full lifecycle, from training and post-training to high-scale inference, across all model architectures, to ensure infrastructure fungibility and high utilization?
Every one of these algorithmic, hardware and software optimizations must be active and integrated, or the denominator collapses. A cheaper GPU that delivers significantly fewer tokens per second results in a much higher cost per token. AI infrastructure that gets it right across the full stack ensures that every optimization enhances the others.
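To make that concrete with hypothetical numbers: a GPU priced at $1 per hour that delivers 100 tokens per second works out to about $2.78 per million tokens, while a GPU priced at $2 per hour that delivers 1,000 tokens per second works out to about $0.56 per million tokens. The "cheaper" chip ends up roughly 5x more expensive per delivered token.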
Why Does Cost per Token Matter Much More Than FLOPS per Dollar?
The following data for the DeepSeek-R1 AI model demonstrates the difference between theoretical specifications and actual business outcomes.
Looking at compute cost alone, the NVIDIA Blackwell platform appears to cost roughly 2x more than NVIDIA Hopper - but compute cost says nothing about the output that investment buys. A FLOPS-per-dollar analysis likewise suggests only a 2x NVIDIA Blackwell advantage compared with the NVIDIA Hopper architecture. The actual outcome is far larger: Blackwell delivers more than 50x greater token output per megawatt than Hopper, resulting in nearly 35x lower cost per million tokens.
Metric | NVIDIA Hopper (HGX H200) | NVIDIA Blackwell (GB300 NVL72) | Blackwell Relative to Hopper
Cost per GPU per Hour | $1.41 | $2.65 | 2x
FLOPS per Dollar (PFLOPS) | 2.8 | 5.6 | 2x
Token Output per GPU | 90 | 6K | 65x
Token Output per MW | 54K | 2.8M | 50x
Cost per Million Tokens | $4.20 | $0.12 | 35x lower
Note: Data is sourced from NVIDIA analysis and the SemiAnalysis InferenceX v2 benchmark.
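As a sanity check on the table's arithmetic, here is a minimal sketch in Python, assuming the per-GPU token output figures are tokens per second and using the simple cost-per-token formula above rather than the full NVIDIA or SemiAnalysis methodology:

    # Minimal sketch, not the benchmark's methodology: recompute cost per million
    # tokens from the table's hourly GPU cost and per-GPU token throughput,
    # assuming throughput is measured in tokens per second.
    def cost_per_million_tokens(cost_per_gpu_hour: float, tokens_per_sec: float) -> float:
        tokens_per_hour = tokens_per_sec * 3_600
        return cost_per_gpu_hour / tokens_per_hour * 1_000_000

    print(cost_per_million_tokens(1.41, 90))     # ~4.35, in line with the table's $4.20 for Hopper
    print(cost_per_million_tokens(2.65, 6_000))  # ~0.12, matching the table's Blackwell figure

The small gap on the Hopper row is expected, since the published figures come from a detailed benchmark rather than this back-of-the-envelope formula.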
This divergence shows that NVIDIA Blackwell delivers a leap in business value over the earlier Hopper generation that far outpaces any increase in compute cost.