
NVIDIA Blackwell swept the new SemiAnalysis InferenceMAX v1 benchmarks, delivering the highest performance and best overall efficiency.
InferenceMAX v1 is the first independent benchmark to measure total cost of compute across diverse models and real-world scenarios.
Best return on investment: NVIDIA GB200 NVL72 delivers unmatched AI factory economics - a $5 million investment generates $75 million in DeepSeek R1 (DSR1) token revenue, a 15x return on investment.
Lowest total cost of ownership: NVIDIA B200 software optimizations achieve two cents per million tokens on gpt-oss, delivering 5x lower cost per token in just 2 months.
Best throughput and interactivity: NVIDIA B200 sets the pace with 60,000 tokens per second per GPU and 1,000 tokens per second per user on gpt-oss with the latest NVIDIA TensorRT-LLM stack.
As AI shifts from one-shot answers to complex reasoning, the demand for inference - and the economics behind it - is exploding.
The new independent InferenceMAX v1 benchmarks are the first to measure total cost of compute across real-world scenarios. The results? The NVIDIA Blackwell platform swept the field - delivering unmatched performance and best overall efficiency for AI factories.
A $5 million investment in an NVIDIA GB200 NVL72 system can generate $75 million in token revenue. That's a 15x return on investment (ROI) - the new economics of inference.
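The headline claim is straightforward arithmetic; as a quick sketch (the helper name is our own, the dollar figures are the ones quoted above):

```python
# Back-of-the-envelope ROI from the figures quoted above.

def roi_multiple(investment_usd: float, revenue_usd: float) -> float:
    """Return revenue as a multiple of the initial investment."""
    return revenue_usd / investment_usd

investment = 5_000_000       # GB200 NVL72 system investment (article figure)
token_revenue = 75_000_000   # projected DSR1 token revenue (article figure)

print(roi_multiple(investment, token_revenue))  # -> 15.0
```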
"Inference is where AI delivers value every day," said Ian Buck, vice president of hyperscale and high-performance computing at NVIDIA. "These results show that NVIDIA's full-stack approach gives customers the performance and efficiency they need to deploy AI at scale."
Enter InferenceMAX v1

InferenceMAX v1, a new benchmark from SemiAnalysis released Monday, is the latest to highlight Blackwell's inference leadership. It runs popular models across leading platforms, measures performance for a wide range of use cases and publishes results anyone can verify.
Why do benchmarks like this matter?
Because modern AI isn't just about raw speed - it's about efficiency and economics at scale. As models shift from one-shot replies to multistep reasoning and tool use, they generate far more tokens per query, dramatically increasing compute demands.
NVIDIA's open-source collaborations with OpenAI (gpt-oss 120B), Meta (Llama 3.3 70B) and DeepSeek AI (DeepSeek R1) highlight how community-driven models are advancing state-of-the-art reasoning and efficiency.
Partnering with these leading model builders and the open-source community, NVIDIA ensures the latest models are optimized for the world's largest AI inference infrastructure. These efforts reflect a broader commitment to open ecosystems - where shared innovation accelerates progress for everyone.
Deep collaborations with the FlashInfer, SGLang and vLLM communities enable codeveloped kernel and runtime enhancements that power these models at scale.
Software Optimizations Deliver Continued Performance Gains

NVIDIA continuously improves performance through hardware and software codesign. Initial gpt-oss-120b performance on an NVIDIA DGX Blackwell B200 system with the NVIDIA TensorRT-LLM library was already market-leading, and NVIDIA's teams and the community have since significantly optimized TensorRT-LLM for open-source large language models.
The TensorRT-LLM v1.0 release is a major breakthrough in making large AI models faster and more responsive for everyone.
Through advanced parallelization techniques, it uses the B200 system and NVIDIA NVLink Switch's 1,800 GB/s bidirectional bandwidth to dramatically improve the performance of the gpt-oss-120b model.
The innovation doesn't stop there. The newly released gpt-oss-120b-Eagle3-v2 model introduces speculative decoding, a clever method that predicts multiple tokens at a time.
This reduces lag and delivers even quicker results, boosting per-GPU throughput from 6,000 to 30,000 tokens per second at 100 tokens per second per user (TPS/user).
Dense AI models like Llama 3.3 70B demand significant computational resources because of their large parameter count and because every parameter is used on each inference pass. For these models, NVIDIA Blackwell B200 sets a new performance standard in the InferenceMAX v1 benchmarks.
Blackwell delivers over 10,000 TPS per GPU at 50 TPS per user interactivity - 4x higher per-GPU throughput compared with the NVIDIA H200 GPU.
Performance Efficiency Drives Value

Metrics like tokens per watt, cost per million tokens and TPS/user matter as much as throughput. In fact, for power-limited AI factories, Blackwell delivers 10x throughput per megawatt compared with the previous generation, which translates into higher token revenue.
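For a power-limited facility, tokens per megawatt is a simple product; a sketch with one hypothetical input (the GPUs-per-megawatt density below is made up; the per-GPU rate is the article's gpt-oss figure):

```python
# Tokens served per megawatt-hour, given per-GPU throughput and how many
# GPUs one megawatt of capacity can power.

def tokens_per_mw_hour(tokens_per_second_per_gpu: int, gpus_per_mw: int) -> int:
    return tokens_per_second_per_gpu * gpus_per_mw * 3600

# 60,000 tok/s per GPU (article figure); 700 GPUs per MW is hypothetical.
print(tokens_per_mw_hour(60_000, 700))  # -> 151200000000
```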
The cost per token is crucial for evaluating AI model efficiency, directly impacting operational expenses. The NVIDIA Blackwell architecture lowered cost per million tokens by 15x versus the previous generation, leading to substantial savings and fostering wider AI deployment and innovation.
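Cost per million tokens follows directly from hourly GPU cost and sustained throughput; a minimal sketch (the $/GPU-hour figure is a hypothetical input, chosen only so the result lands near the two-cent mark cited earlier):

```python
# Cost per million tokens from an hourly GPU price and sustained throughput.

def cost_per_million_tokens(gpu_hour_cost_usd: float,
                            tokens_per_second: float) -> float:
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hour_cost_usd / tokens_per_hour * 1_000_000

# Hypothetical $4.32/GPU-hour at 60,000 tok/s (the article's gpt-oss rate).
print(round(cost_per_million_tokens(4.32, 60_000), 4))  # -> 0.02
```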
Multidimensional Performance

InferenceMAX uses the Pareto frontier - a curve showing the best achievable trade-offs between factors such as data center throughput and responsiveness - to map performance.
But it's more than a chart. It reflects how NVIDIA Blackwell balances the full spectrum of production priorities: cost, energy efficiency, throughput and responsiveness. That balance enables the highest ROI across real-world workloads.
Systems that optimize for just one mode or scenario may show peak performance in isolation, but those economics don't scale. Blackwell's full-stack design delivers efficiency and value where it matters most: in production.
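As a minimal sketch of how a Pareto frontier is extracted from benchmark measurements - the data points below are made up, not InferenceMAX results:

```python
# Keep only the configurations that no other configuration beats on both
# axes (here both interactivity and throughput are better when higher).

def pareto_frontier(points):
    """Return the points not dominated on both coordinates."""
    return [p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                       for q in points)]

# (tokens/s per user, tokens/s per GPU) for made-up configurations.
configs = [(50, 10_000), (100, 6_000), (100, 4_000), (30, 9_000)]
print(pareto_frontier(configs))  # -> [(50, 10000), (100, 6000)]
```

Each surviving point is the best available throughput at its interactivity level; dominated configurations never make economic sense to deploy.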
For a deeper look at how these curves are built - and why they matter for total cost of ownership and service-level agreement planning - check out this technical deep dive.