
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Both on premises and in the cloud, applications that are expected to run in real time place significant demands on data center infrastructure to simultaneously deliver high throughput and low latency with one platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library that delivers state-of-the-art performance on the latest LLMs. With improvements to the open-source Llama 70B model, which delivers very high accuracy, we've already improved minimum latency performance by 3.5x in less than a year.
We're constantly improving our platform performance and regularly publish performance updates. Each week, improvements to NVIDIA software libraries are published, allowing customers to get more from the very same GPUs.
In the most recent round of MLPerf Inference v4.1, we made our first-ever submission with the Blackwell platform. It delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats like FP4 reduce memory footprint and memory traffic, and also boost computational throughput. The submission takes advantage of Blackwell's second-generation Transformer Engine, and, with advanced quantization techniques that are part of TensorRT Model Optimizer, it met the strict accuracy targets of the MLPerf benchmark.
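To give a feel for how narrow-precision quantization trades precision for memory, here is a toy sketch of symmetric 4-bit integer quantization. Note the hedge: NVIDIA's FP4 is a floating-point (E2M1) format with hardware support and calibrated scaling inside TensorRT Model Optimizer; this simple integer scheme only illustrates the general idea of mapping values onto a coarse grid.

```python
import numpy as np

def quantize_int4_symmetric(x: np.ndarray):
    """Illustrative symmetric 4-bit quantization onto the signed grid [-7, 7].

    Not NVIDIA's FP4 (E2M1) format -- just a sketch of the concept:
    store coarse integer codes plus one scale factor per tensor.
    """
    scale = np.max(np.abs(x)) / 7.0          # map the largest magnitude to the grid edge
    q = np.clip(np.round(x / scale), -7, 7)  # 4-bit signed integer codes
    return q.astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.array([0.9, -0.31, 0.07, -1.4], dtype=np.float32)
q, s = quantize_int4_symmetric(weights)
approx = dequantize(q, s)
# Each weight is now a 4-bit code; the reconstruction error is at most
# half a grid step (scale / 2), which is why calibration of the scale matters.
```

The memory saving is what drives the throughput gains described above: 4-bit weights occupy a quarter of the space of FP16, so the same memory bandwidth moves 4x more parameters per second.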
Blackwell B200 delivers up to 4x more performance versus previous generation on MLPerf Inference v4.1's Llama 2 70B workload. Improvements in Blackwell haven't stopped the continued acceleration of Hopper. In the last year, Hopper performance has increased 3.4x in MLPerf on H100 thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year. Our ongoing work is incorporated into TensorRT-LLM, a purpose-built library to accelerate LLMs that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT Deep Learning Inference library and leverages much of TensorRT's deep learning optimizations with additional LLM-specific improvements.
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2 as well as the 70B and 405B model sizes. These optimizations include custom quantization recipes and efficient use of parallelization techniques to split the model across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
Many LLM deployments use parallelism rather than keeping the workload on a single GPU, which can become a compute bottleneck. LLM serving seeks to balance low latency and high throughput, with the optimal parallelization technique depending on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
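The core idea of tensor parallelism can be sketched in a few lines: a layer's weight matrix is split column-wise across devices, each device computes its slice of the output, and a gather step reassembles the result. This is a toy illustration using NumPy array shards in place of GPUs, not TensorRT-LLM's actual implementation:

```python
import numpy as np

# Toy sketch of tensor parallelism: one linear layer y = x @ W is split
# column-wise across n_devices "GPUs" (here, just array shards).
np.random.seed(0)
x = np.random.randn(1, 8)        # one token's activations
W = np.random.randn(8, 16)       # full weight matrix

n_devices = 4
shards = np.split(W, n_devices, axis=1)      # one column block per device
partial = [x @ shard for shard in shards]    # computed concurrently on real hardware
y_tp = np.concatenate(partial, axis=1)       # gather step (an all-gather over NVLink)

assert np.allclose(y_tp, x @ W)  # identical to the unsharded layer
```

The gather step is why interconnect bandwidth matters so much for tensor parallelism: it happens at every layer. Pipeline parallelism, by contrast, assigns contiguous groups of layers to each device and streams micro-batches through them, communicating only at stage boundaries, which favors throughput over per-token latency.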
The table below shows that tensor parallelism can deliver over 5x more throughput in minimum latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum throughput use cases.
For production deployments that seek to maximize throughput within a given latency budget, a platform needs to be able to combine both techniques effectively, as TensorRT-LLM does.
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and parallelism techniques bring optimal performance for each of them.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.