
It's official: NVIDIA delivered the world's fastest platform in industry-standard tests for inference on generative AI.
In the latest MLPerf benchmarks, NVIDIA TensorRT-LLM - software that speeds and simplifies the complex job of inference on large language models - boosted the performance of NVIDIA Hopper architecture GPUs on the GPT-J LLM nearly 3x over their results just six months ago.
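To make the role of TensorRT-LLM concrete, here is a minimal sketch of serving a model through its high-level LLM API. The model name is only a placeholder and the exact class names and arguments vary by TensorRT-LLM release, so treat this as an outline rather than the configuration NVIDIA used for its MLPerf submissions.

```python
# Minimal sketch of the TensorRT-LLM high-level LLM API (names may differ by release).
from tensorrt_llm import LLM, SamplingParams

# Building the LLM compiles/loads a TensorRT engine for the given checkpoint.
llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")   # placeholder model id
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

outputs = llm.generate(["What does MLPerf measure?"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```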
The dramatic speedup demonstrates the power of NVIDIA's full-stack platform of chips, systems and software to handle the demanding requirements of running generative AI.
Leading companies are using TensorRT-LLM to optimize their models. And NVIDIA NIM - a set of inference microservices that includes inferencing engines like TensorRT-LLM - makes it easier than ever for businesses to deploy NVIDIA's inference platform.
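NIM containers are documented to expose OpenAI-compatible endpoints, so a deployed service can be called with standard client libraries. The sketch below assumes a locally running NIM container; the port, model id and API key are placeholders for illustration only.

```python
# Hedged sketch: querying a locally deployed NIM microservice through its
# OpenAI-compatible endpoint. Port, model id and key are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

resp = client.chat.completions.create(
    model="meta/llama2-70b",   # placeholder model id
    messages=[{"role": "user", "content": "Summarize the latest MLPerf inference results."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```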
Raising the Bar in Generative AI
TensorRT-LLM running on NVIDIA H200 Tensor Core GPUs - the latest, memory-enhanced Hopper GPUs - delivered the fastest performance running inference in MLPerf's biggest test of generative AI to date.
The new benchmark uses the largest version of Llama 2, a state-of-the-art large language model packing 70 billion parameters. The model is more than 10x larger than the GPT-J LLM first used in the September benchmarks.
The memory-enhanced H200 GPUs, in their MLPerf debut, used TensorRT-LLM to produce up to 31,000 tokens/second, a record on MLPerf's Llama 2 benchmark.
The H200 GPU results include up to 14% gains from a custom thermal solution. It's one example of innovations beyond standard air cooling that system builders are applying to their NVIDIA MGX designs to take the performance of Hopper GPUs to new heights.
Memory Boost for NVIDIA Hopper GPUs
NVIDIA is sampling H200 GPUs to customers today and shipping in the second quarter. They'll be available soon from nearly 20 leading system builders and cloud service providers.
H200 GPUs pack 141GB of HBM3e running at 4.8TB/s. That's 76% more memory flying 43% faster compared to H100 GPUs. These accelerators plug into the same boards and systems and use the same software as H100 GPUs.
With HBM3e memory, a single H200 GPU can run an entire Llama 2 70B model with the highest throughput, simplifying and speeding inference.
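The percentage gains follow directly from the published specifications. The quick check below assumes the common H100 SXM figures of 80GB of HBM3 and roughly 3.35TB/s of bandwidth; the H200 numbers come from the article itself.

```python
# Verifying the stated H200-vs-H100 memory gains (H100 figures are assumed).
h100_mem, h100_bw = 80, 3.35    # GB, TB/s (assumed H100 SXM specs)
h200_mem, h200_bw = 141, 4.8    # GB, TB/s (from the article)

print(f"memory:    {100 * (h200_mem / h100_mem - 1):.0f}% more")    # ~76%
print(f"bandwidth: {100 * (h200_bw / h100_bw - 1):.0f}% faster")    # ~43%
```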
GH200 Packs Even More Memory
Even more memory - up to 624GB of fast memory, including 144GB of HBM3e - is packed in NVIDIA GH200 Superchips, which combine on one module a Hopper architecture GPU and a power-efficient NVIDIA Grace CPU. NVIDIA accelerators are the first to use HBM3e memory technology.
With nearly 5 TB/second memory bandwidth, GH200 Superchips delivered standout performance, including on memory-intensive MLPerf tests such as recommender systems.
Sweeping Every MLPerf Test
On a per-accelerator basis, Hopper GPUs swept every test of AI inference in the latest round of the MLPerf industry benchmarks.
In addition, NVIDIA Jetson Orin remains at the forefront in MLPerf's edge category. In the last two inference rounds, Orin ran the most diverse set of models in the category, including GPT-J and Stable Diffusion XL.
The MLPerf benchmarks cover today's most popular AI workloads and scenarios, including generative AI, recommendation systems, natural language processing, speech and computer vision. NVIDIA was the only company to submit results on every workload in the latest round and every round since MLPerf's data center inference benchmarks began in October 2020.
Continued performance gains translate into lower costs for inference, a large and growing part of the daily work for the millions of NVIDIA GPUs deployed worldwide.
Advancing What's Possible
Pushing the boundaries of what's possible, NVIDIA demonstrated three innovative techniques in a special section of the benchmarks called the open division, created for testing advanced AI methods.
NVIDIA engineers used a technique called structured sparsity - a way of reducing calculations, first introduced with NVIDIA A100 Tensor Core GPUs - to deliver up to 33% speedups on inference with Llama 2.
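Structured sparsity on NVIDIA GPUs follows a 2:4 pattern: in every group of four consecutive weights, two are zeroed so the hardware can skip them. The NumPy toy below only illustrates that pattern; it is not the TensorRT-LLM or cuSPARSELt implementation used in the submission.

```python
# Illustrative 2:4 structured sparsity: zero the two smallest-magnitude weights
# in every group of four. A NumPy toy, not the production code path.
import numpy as np

def prune_2_of_4(weights: np.ndarray) -> np.ndarray:
    w = weights.reshape(-1, 4).copy()              # groups of 4 weights
    idx = np.argsort(np.abs(w), axis=1)[:, :2]     # two smallest magnitudes per group
    np.put_along_axis(w, idx, 0.0, axis=1)         # zero them out
    return w.reshape(weights.shape)

w = np.random.randn(2, 8).astype(np.float32)
print(prune_2_of_4(w))   # exactly two zeros in every group of four
```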
A second open division test found inference speedups of up to 40% using pruning, a way of simplifying an AI model - in this case, an LLM - to increase inference throughput.
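As a generic illustration of what pruning an LLM can look like (and not NVIDIA's specific open-division method), the sketch below drops a subset of a small decoder's transformer layers; real pruning scores layer importance and fine-tunes afterwards to recover accuracy.

```python
# Hypothetical depth-pruning sketch on a small Hugging Face model; illustrative only.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")   # stand-in model
layers = model.transformer.h                           # list of decoder blocks

# Crude heuristic: keep every other block.
kept = torch.nn.ModuleList(block for i, block in enumerate(layers) if i % 2 == 0)
model.transformer.h = kept
model.config.n_layer = len(kept)

print(f"Pruned model keeps {len(kept)} of {len(layers)} layers")
```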
Finally, an optimization called DeepCache reduced the math required for inference with the Stable Diffusion XL model, accelerating performance by a whopping 74%.
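The intuition behind DeepCache is that the deepest feature maps of a diffusion U-Net change slowly between adjacent denoising steps, so they can be cached and reused for several steps while only the shallow layers are recomputed. The functions below are trivial stand-ins for the real Stable Diffusion XL U-Net blocks, sketched purely to show the caching pattern.

```python
# Conceptual sketch of the DeepCache idea; the block functions are stand-ins,
# not the Stable Diffusion XL API.
import numpy as np

def shallow_pass(latents, t):
    return latents * 0.99          # stand-in for the cheap, shallow layers

def deep_pass(features, t):
    return features * 0.95         # stand-in for the expensive, deep layers

def merge(shallow, deep, t):
    return 0.5 * (shallow + deep)  # stand-in for the U-Net skip connections

def denoise(latents, steps=50, cache_interval=3):
    deep_cache = None
    for t in range(steps):
        shallow = shallow_pass(latents, t)
        if deep_cache is None or t % cache_interval == 0:
            deep_cache = deep_pass(shallow, t)   # recompute deep features only every few steps
        latents = merge(shallow, deep_cache, t)
    return latents

print(denoise(np.ones((4, 4))).mean())
```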
All these results were run on NVIDIA H100 Tensor Core GPUs.
A Trusted Source for Users
MLPerf's tests are transparent and objective, so users can rely on the results to make informed buying decisions.
NVIDIA's partners participate in MLPerf because they know it's a valuable tool for customers evaluating AI systems and services. Partners submitting results on the NVIDIA AI platform in this round included ASUS, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Google, Hewlett Packard Enterprise, Lenovo, Microsoft Azure, Oracle, QCT, Supermicro, VMware (recently acquired by Broadcom) and Wiwynn.
All the software NVIDIA used in the tests is available in the MLPerf repository. These optimizations are continuously folded into containers available on NGC, NVIDIA's software hub for GPU applications, as well as NVIDIA AI Enterprise - a secure, supported platform that includes NIM inference microservices.
The Next Big Thing
The use cases, model sizes and datasets for generative AI continue to expand. That's why MLPerf continues to evolve, adding real-world tests with popular models like Llama 2 70B and Stable Diffusion XL.
Keeping pace with the explosion in LLM model sizes, NVIDIA founder and CEO Jensen Huang announced last week at GTC that NVIDIA Blackwell architecture GPUs will deliver the new levels of performance required for multitrillion-parameter AI models.
Inference for large language models is difficult, requiring both expertise and the full-stack platform of chips, systems and software that NVIDIA demonstrated in these benchmarks.