
NVIDIA Blackwell swept the new SemiAnalysis InferenceMAX v1 benchmarks, delivering the highest performance and best overall efficiency.
InferenceMAX v1 is the first independent benchmark to measure total cost of compute across diverse models and real-world scenarios.
Best return on investment: NVIDIA GB200 NVL72 delivers unmatched AI factory economics - a $5 million investment generates $75 million in DeepSeek R1 (DSR1) token revenue, a 15x return on investment.
Lowest total cost of ownership: NVIDIA B200 software optimizations achieve two cents per million tokens on gpt-oss, delivering 5x lower cost per token in just 2 months.
Best throughput and interactivity: NVIDIA B200 sets the pace with 60,000 tokens per second per GPU and 1,000 tokens per second per user on gpt-oss with the latest NVIDIA TensorRT-LLM stack.
As AI shifts from one-shot answers to complex reasoning, the demand for inference - and the economics behind it - is exploding.
The new independent InferenceMAX v1 benchmarks are the first to measure total cost of compute across real-world scenarios. The results? The NVIDIA Blackwell platform swept the field - delivering unmatched performance and best overall efficiency for AI factories.
A $5 million investment in an NVIDIA GB200 NVL72 system can generate $75 million in token revenue. That's a 15x return on investment (ROI) - the new economics of inference.
"Inference is where AI delivers value every day," said Ian Buck, vice president of hyperscale and high-performance computing at NVIDIA. "These results show that NVIDIA's full-stack approach gives customers the performance and efficiency they need to deploy AI at scale."
Enter InferenceMAX v1

InferenceMAX v1, a new benchmark from SemiAnalysis released Monday, is the latest to highlight Blackwell's inference leadership. It runs popular models across leading platforms, measures performance for a wide range of use cases and publishes results anyone can verify.
Why do benchmarks like this matter?
Because modern AI isn't just about raw speed - it's about efficiency and economics at scale. As models shift from one-shot replies to multistep reasoning and tool use, they generate far more tokens per query, dramatically increasing compute demands.
NVIDIA's open-source collaborations with OpenAI (gpt-oss 120B), Meta (Llama 3.3 70B), and DeepSeek AI (DeepSeek R1) highlight how community-driven models are advancing state-of-the-art reasoning and efficiency.
Partnering with these leading model builders and the open-source community, NVIDIA ensures the latest models are optimized for the world's largest AI inference infrastructure. These efforts reflect a broader commitment to open ecosystems - where shared innovation accelerates progress for everyone.
Deep collaborations with the FlashInfer, SGLang and vLLM communities enable codeveloped kernel and runtime enhancements that power these models at scale.
Software Optimizations Deliver Continued Performance Gains

NVIDIA continuously improves performance through hardware and software codesign optimizations. Initial gpt-oss-120b performance on an NVIDIA DGX Blackwell B200 system with the NVIDIA TensorRT-LLM library was already market-leading, and NVIDIA's teams and the community have since significantly optimized TensorRT-LLM for open-source large language models.
The TensorRT-LLM v1.0 release is a major breakthrough in making large AI models faster and more responsive for everyone.
Through advanced parallelization techniques, it uses the B200 system and NVIDIA NVLink Switch's 1,800 GB/s bidirectional bandwidth to dramatically improve the performance of the gpt-oss-120b model.
The innovation doesn't stop there. The newly released gpt-oss-120b-Eagle3-v2 model introduces speculative decoding, a clever method that predicts multiple tokens at a time.
This reduces lag and delivers even quicker results, boosting throughput 5x at 100 tokens per second per user (TPS/user) - lifting per-GPU speeds from 6,000 to 30,000 tokens per second.
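The core mechanic of speculative decoding can be sketched in a few lines: a small draft model proposes several tokens at once, and the large target model verifies them together, so the output is identical to standard decoding but fewer sequential target-model steps are needed. The toy Python below uses hypothetical stand-in "models" (simple arithmetic functions, not the actual Eagle3 implementation) purely to illustrate the idea:

```python
# Toy stand-ins: each maps a context (tuple of tokens) to the next token.
# In real speculative decoding these are a small draft LLM and a large target LLM.
def draft_model(context):
    return (sum(context) + 1) % 5       # cheap, sometimes-wrong guesser

def target_model(context):
    return (sum(context) * 2 + 1) % 5   # authoritative model the output must match

def speculative_decode(prompt, num_tokens, k=4):
    """Generate num_tokens tokens; the draft proposes k tokens per round and
    the target verifies them. Output matches greedy target decoding exactly,
    but usually needs fewer target rounds."""
    out = list(prompt)
    target_passes = 0
    while len(out) - len(prompt) < num_tokens:
        # 1) the cheap draft model proposes k tokens autoregressively
        proposal, ctx = [], list(out)
        for _ in range(k):
            t = draft_model(tuple(ctx))
            proposal.append(t)
            ctx.append(t)
        # 2) the target checks all k positions; in a real system this is a
        #    single batched forward pass, which is where the speedup comes from
        target_passes += 1
        ctx = list(out)
        for t in proposal:
            correct = target_model(tuple(ctx))
            if t == correct:
                ctx.append(t)        # draft guessed right: keep the token
            else:
                ctx.append(correct)  # mismatch: take the target's token, stop
                break
        out = ctx
    return out[len(prompt):len(prompt) + num_tokens], target_passes
```

Because every accepted token is one the target model would have produced anyway, the speedup is "free" in quality terms - the gain depends only on how often the draft guesses correctly.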
For dense AI models like Llama 3.3 70B, which demand significant computational resources due to their large parameter count and the fact that all parameters are utilized simultaneously during inference, NVIDIA Blackwell B200 sets a new performance standard in InferenceMAX v1 benchmarks.
Blackwell delivers over 10,000 TPS per GPU at 50 TPS per user interactivity - 4x higher per-GPU throughput compared with the NVIDIA H200 GPU.
Performance Efficiency Drives Value

Metrics like tokens per watt, cost per million tokens and TPS/user matter as much as throughput. In fact, for power-limited AI factories, Blackwell delivers 10x throughput per megawatt compared with the previous generation, which translates into higher token revenue.
The cost per token is crucial for evaluating AI model efficiency, directly impacting operational expenses. The NVIDIA Blackwell architecture lowered cost per million tokens by 15x versus the previous generation, leading to substantial savings and fostering wider AI deployment and innovation.
Multidimensional Performance

InferenceMAX uses the Pareto frontier - a curve that shows the best trade-offs between different factors, such as data center throughput and responsiveness - to map performance.
But it's more than a chart. It reflects how NVIDIA Blackwell balances the full spectrum of production priorities: cost, energy efficiency, throughput and responsiveness. That balance enables the highest ROI across real-world workloads.
Systems that optimize for just one mode or scenario may show peak performance in isolation, but those economics don't scale. Blackwell's full-stack design delivers efficiency and value where it matters most: in production.
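A Pareto frontier like the one InferenceMAX plots can be sketched in a few lines: an operating point sits on the frontier if no other point beats it on both per-GPU throughput and per-user interactivity at once. The configurations below are hypothetical, purely for illustration:

```python
def pareto_frontier(points):
    """Return the points not dominated by any other, where each point is
    (tokens/s per GPU, tokens/s per user) and higher is better on both axes."""
    frontier = []
    for p in points:
        dominated = any(
            q != p and q[0] >= p[0] and q[1] >= p[1]
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return sorted(frontier)

# Hypothetical operating points: (throughput per GPU, interactivity per user)
configs = [(60_000, 20), (30_000, 100), (10_000, 50), (45_000, 60)]
print(pareto_frontier(configs))  # (10_000, 50) is dominated by (45_000, 60)
```

Sweeping a deployment knob such as batch size traces out exactly this curve: larger batches push throughput up and interactivity down, and only the non-dominated settings are worth operating at.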
For a deeper look at how these curves are built - and why they matter for total cost of ownership and service-level agreement planning - check out this technical deep dive.