
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Both on premises and in the cloud, applications expected to run in real time place significant demands on data center infrastructure, which must deliver high throughput and low latency simultaneously with a single platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi, and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements
Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library that delivers state-of-the-art performance on the latest LLMs. With optimizations for the open-source Llama 70B model, which delivers very high accuracy, we've already improved minimum-latency performance by 3.5x in less than a year.
We're constantly improving our platform performance and regularly publish updates. Each week, improvements to NVIDIA software libraries are published, allowing customers to get more from the very same GPUs.
In the most recent round of MLPerf Inference (v4.1), we made our first-ever submission with the Blackwell platform. It delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats, like FP4, reduce memory footprint and memory traffic, and also boost computational throughput. The submission takes advantage of Blackwell's second-generation Transformer Engine, and with advanced quantization techniques that are part of TensorRT Model Optimizer, it met the strict accuracy targets of the MLPerf benchmark.
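To make the footprint-versus-accuracy tradeoff concrete, here is a minimal, hypothetical sketch of block-wise 4-bit quantization in NumPy. It uses uniform signed integer levels with one scale per block rather than the actual FP4 (E2M1) format, and it is not the TensorRT Model Optimizer recipe, just an illustration of why per-block scales keep the rounding error small:

```python
import numpy as np

def quantize_4bit_like(x, block_size=16):
    """Quantize to 15 uniform signed levels (-7..7) per block, with one
    float scale per block -- a simplified stand-in for 4-bit formats."""
    blocks = x.reshape(-1, block_size)
    # Pick each block's scale so its largest magnitude maps to level 7
    scales = np.abs(blocks).max(axis=1, keepdims=True) / 7.0
    scales[scales == 0] = 1.0
    q = np.clip(np.round(blocks / scales), -7, 7).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    return (q.astype(np.float32) * scales).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(64).astype(np.float32)
q, s = quantize_4bit_like(w)
w_hat = dequantize(q, s)
# Per-element error is bounded by half a quantization step for its block
assert np.all(np.abs(w - w_hat) <= s.repeat(16) / 2 + 1e-6)
```

The per-block scale is what keeps accuracy workable at such narrow widths: outliers in one block don't inflate the quantization step for the rest of the tensor.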
Blackwell B200 delivers up to 4x more performance versus the previous generation on MLPerf Inference v4.1's Llama 2 70B workload. Blackwell's arrival hasn't stopped the continued acceleration of Hopper. In the last year, H100 performance in MLPerf has increased 3.4x thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year. Our ongoing work is incorporated into TensorRT-LLM, which contains state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT deep learning inference library and leverages much of TensorRT's optimizations while adding LLM-specific improvements.
Improving Llama in Leaps and Bounds
More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2, and model sizes from 70B up to the largest, 405B. These optimizations include custom quantization recipes, as well as efficient use of parallelization techniques to split the model across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
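Tensor parallelism can be illustrated with a column-parallel linear layer, the building block used to split transformer matmuls across GPUs. This single-process NumPy sketch simulates the sharding; on real hardware each shard lives on its own GPU and the final concatenation is an all-gather over NVLink:

```python
import numpy as np

def tensor_parallel_matmul(x, w, num_gpus=4):
    """Column-parallel linear layer: each simulated 'GPU' holds a slice of
    the weight's output columns, computes its shard independently, and the
    partial outputs are concatenated (an all-gather on real hardware)."""
    shards = np.split(w, num_gpus, axis=1)      # shard weights column-wise
    partials = [x @ shard for shard in shards]  # independent per-GPU matmuls
    return np.concatenate(partials, axis=1)     # gather the outputs

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8)).astype(np.float32)   # (batch, hidden)
w = rng.standard_normal((8, 16)).astype(np.float32)  # (hidden, 4*hidden)
# Sharded result matches the single-device matmul
assert np.allclose(tensor_parallel_matmul(x, w), x @ w, atol=1e-5)
```

Because every shard participates in every token, this scheme cuts per-token latency, but it also makes the gather step bandwidth-sensitive, which is why the interconnect fabric matters so much.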
Many LLM deployments use parallelism rather than keeping the workload on a single GPU, where compute can become a bottleneck. LLM serving must balance low latency and high throughput, with the optimal parallelization technique depending on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
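The throughput advantage of pipeline parallelism can be sketched with an idealized step-count model (ignoring communication cost and assuming equal per-stage time, both simplifying assumptions):

```python
def pipeline_steps(num_stages, num_microbatches):
    """With the model split into sequential stages and the batch split into
    microbatches, one full pass takes (stages + microbatches - 1) stage-times:
    the pipeline fills, then streams out one microbatch result per step."""
    return num_stages + num_microbatches - 1

# One batch on a 4-stage pipeline: 4 steps for 1 result -- pure latency.
assert pipeline_steps(4, 1) == 4
# 16 microbatches: 19 steps for 16 results, approaching 1 result per step
# as the microbatch count grows, which is where the throughput gain comes from.
assert pipeline_steps(4, 16) == 19
```

In this idealized model, tensor parallelism spends all GPUs on one token to finish it sooner, while pipeline parallelism keeps all stages busy on different microbatches to finish more tokens per unit time.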
The table below shows that tensor parallelism can deliver over 5x more throughput in minimum latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum throughput use cases.
For production deployments that seek to maximize throughput within a given latency budget, a platform needs the ability to combine both techniques effectively, as TensorRT-LLM does.
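In practice, combining the techniques means choosing how to factor a fixed GPU pool between tensor-parallel and pipeline-parallel degrees. A small, hypothetical helper (not a TensorRT-LLM API) shows the search space a serving stack can sweep to find the split that meets a latency budget at the highest throughput:

```python
def parallel_configs(num_gpus):
    """Enumerate (tensor_parallel_degree, pipeline_parallel_degree) pairs
    whose product uses the whole GPU pool; each pair is a candidate
    deployment configuration to benchmark against the latency budget."""
    return [(tp, num_gpus // tp)
            for tp in range(1, num_gpus + 1)
            if num_gpus % tp == 0]

# An 8-GPU HGX node admits four clean splits, from throughput-leaning
# (1, 8) to latency-leaning (8, 1).
assert parallel_configs(8) == [(1, 8), (2, 4), (4, 2), (8, 1)]
```

Higher tensor-parallel degrees favor the minimum-latency end of the spectrum; higher pipeline-parallel degrees favor peak server throughput, matching the tradeoff described above.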
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and parallelism techniques bring optimal performance for each of these scenarios.
The Virtuous Cycle
Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.