
In the age of AI reasoning, training smarter, more capable models is critical to scaling intelligence. Delivering the massive performance this new age demands requires breakthroughs across GPUs, CPUs, NICs, scale-up and scale-out networking, system architectures and mountains of software and algorithms.
In MLPerf Training v5.1 - the latest round in a long-running series of industry-standard tests of AI training performance - NVIDIA swept all seven tests, delivering the fastest time to train across large language models (LLMs), image generation, recommender systems, computer vision and graph neural networks.
NVIDIA was also the only platform to submit results on every test, underscoring the rich programmability of NVIDIA GPUs, and the maturity and versatility of its CUDA software stack.
NVIDIA Blackwell Ultra Doubles Down

The GB300 NVL72 rack-scale system, powered by the NVIDIA Blackwell Ultra GPU architecture, made its debut in MLPerf Training this round, following a record-setting showing in the most recent MLPerf Inference round.
Compared with the prior-generation Hopper architecture, the Blackwell Ultra-based GB300 NVL72 delivered more than 4x the Llama 3.1 405B pretraining and nearly 5x the Llama 2 70B LoRA fine-tuning performance using the same number of GPUs.
These gains were fueled by Blackwell Ultra's architectural improvements - including new Tensor Cores that offer 15 petaflops of NVFP4 AI compute, twice the attention-layer compute and 279 GB of HBM3e memory - as well as new training methods that tapped into the architecture's enormous NVFP4 compute performance.
Connecting multiple GB300 NVL72 systems, the NVIDIA Quantum-X800 InfiniBand platform - the industry's first end-to-end 800 Gb/s networking platform - also made its MLPerf debut, doubling scale-out networking bandwidth compared with the prior generation.
Performance Unlocked: NVFP4 Accelerates LLM Training

Key to the outstanding results this round was performing calculations using NVFP4 precision - a first in the history of MLPerf Training.
One way to increase compute performance is to build an architecture capable of performing computations on data represented with fewer bits, and then to perform those calculations at a faster rate. However, lower precision means each calculation carries less information, so using low-precision calculations during training calls for careful design decisions to keep results accurate.
NVIDIA teams innovated at every layer of the stack to adopt FP4 precision for LLM training. The NVIDIA Blackwell GPU can perform FP4 calculations - including the NVIDIA-designed NVFP4 format as well as other FP4 variants - at double the rate of FP8. Blackwell Ultra boosts that to 3x, enabling the GPUs to deliver substantially greater AI compute performance.
NVIDIA is the only platform to date that has submitted MLPerf Training results with calculations performed using FP4 precision while meeting the benchmark's strict accuracy requirements.
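To make the idea of block-scaled low-precision training more concrete, below is a minimal NumPy sketch of 4-bit, block-scaled quantization. It is an illustration only: the 16-element block size, the E2M1 value grid and the per-block scale handling are assumptions for the sketch, not NVIDIA's NVFP4 implementation, which runs inside Blackwell's Tensor Cores and pairs the low-precision values with higher-precision scaling and accumulation.

```python
# Minimal sketch of block-scaled 4-bit quantization (illustrative only, not
# NVIDIA's NVFP4 kernels). Assumptions: 16-element blocks, an E2M1 value grid
# (max magnitude 6), and one scale factor per block.
import numpy as np

E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # positive FP4 magnitudes

def quantize_block_fp4(x, block=16):
    """Quantize a 1-D tensor to signed 4-bit values with one scale per block."""
    x = x.reshape(-1, block)
    # Map each block's largest magnitude onto the largest representable FP4 value (6).
    scale = np.abs(x).max(axis=1, keepdims=True) / E2M1_GRID[-1]
    scale = np.where(scale == 0, 1.0, scale)
    scaled = x / scale
    # Snap each value to the nearest representable FP4 magnitude, keeping its sign.
    idx = np.abs(np.abs(scaled)[..., None] - E2M1_GRID).argmin(axis=-1)
    q = np.sign(scaled) * E2M1_GRID[idx]
    return q, scale  # real formats pack q into 4 bits and store scales in FP8

def dequantize(q, scale):
    return (q * scale).reshape(-1)

x = np.random.randn(64).astype(np.float32)
q, s = quantize_block_fp4(x)
print("max abs reconstruction error:", np.abs(dequantize(q, s) - x).max())
```

The per-block scale is what keeps the tiny 4-bit value range usable: each block of weights or activations is rescaled into the representable range before rounding, which is why accuracy can be preserved despite so few bits per value.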
NVIDIA Blackwell Scales to New Heights

NVIDIA set a new Llama 3.1 405B time-to-train record of just 10 minutes, powered by more than 5,000 Blackwell GPUs working together efficiently. This entry was 2.7x faster than the best Blackwell-based result submitted in the prior round, resulting from efficient scaling to more than twice the number of GPUs, as well as the use of NVFP4 precision to dramatically increase the effective performance of each Blackwell GPU.
To illustrate the performance increase per GPU, NVIDIA submitted results this round using 2,560 Blackwell GPUs, achieving a time to train of 18.79 minutes - 45% faster than the submission last round using 2,496 GPUs.
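As a rough way to read those two data points, the sketch below compares the submissions on GPU-minutes (time to train multiplied by GPU count). The prior-round time is not stated here, so it is back-calculated from the 45% figure and should be treated as an assumption rather than a published result.

```python
# Back-of-the-envelope comparison of the two Llama 3.1 405B submissions cited
# above, using GPU-minutes (time to train x GPU count) as a rough proxy for
# per-GPU efficiency. The prior-round time is inferred from "45% faster" and
# is an assumption, not a figure quoted in this article.
new_gpus, new_minutes = 2560, 18.79
prev_gpus = 2496
prev_minutes = new_minutes * 1.45   # implied prior-round time, roughly 27 minutes

per_gpu_speedup = (prev_minutes * prev_gpus) / (new_minutes * new_gpus)
print(f"implied prior-round time: {prev_minutes:.1f} min")
print(f"effective per-GPU speedup: {per_gpu_speedup:.2f}x")
```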
New Benchmarks, New Records

NVIDIA also set performance records on the two new benchmarks added this round: Llama 3.1 8B and FLUX.1.
Llama 3.1 8B - a compact yet highly capable LLM - replaced the long-running BERT-large model, adding a modern, smaller LLM to the benchmark suite. NVIDIA submitted results with up to 512 Blackwell Ultra GPUs, setting the bar at 5.2 minutes to train.
In addition, FLUX.1 - a state-of-the-art image generation model - replaced Stable Diffusion v2, with only the NVIDIA platform submitting results on the benchmark. NVIDIA submitted results using 1,152 Blackwell GPUs, setting a record time to train of 12.5 minutes.
NVIDIA continued to hold records on the existing graph neural network, object detection and recommender system tests.
A Broad and Deep Partner Ecosystem

The NVIDIA ecosystem participated extensively this round, with compelling submissions from 15 organizations, including ASUS, Dell Technologies, Giga Computing, Hewlett Packard Enterprise, Krai, Lambda, Lenovo, Nebius, Quanta Cloud Technology, Supermicro, University of Florida, Verda (formerly DataCrunch) and Wiwynn.
NVIDIA is innovating at a one-year rhythm, driving significant and rapid performance increases across pretraining, post-training and inference - paving the way to new levels of intelligence and accelerating AI adoption.
See more NVIDIA performance data on the Data Center Deep Learning Product Performance Hub and Performance Explorer pages.