
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Both on premises and in the cloud, applications that are expected to run in real time place significant demands on data center infrastructure to simultaneously deliver high throughput and low latency with one platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes the state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library that delivers state-of-the-art performance on the latest LLMs. With improvements to the open-source Llama 70B model, which delivers very high accuracy, we've already improved minimum-latency performance by 3.5x in less than a year.
We're constantly improving our platform's performance and regularly publish updates. Improvements to NVIDIA software libraries ship each week, allowing customers to get more from the very same GPUs.
NVIDIA has increased performance on the Llama 70B model by 3.5x. In the most recent round of MLPerf Inference (v4.1), we made our first-ever submission with the Blackwell platform. It delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats, like FP4, reduce memory footprint and memory traffic while boosting computational throughput. The submission takes advantage of Blackwell's second-generation Transformer Engine, and with advanced quantization techniques that are part of TensorRT Model Optimizer, it met the strict accuracy targets of the MLPerf benchmark.
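To illustrate why narrower precision formats matter, here is a back-of-the-envelope sketch (illustrative arithmetic only, not NVIDIA's methodology) of the weight-memory footprint of a 70B-parameter model at different precisions:

```python
# Rough weight-memory footprint of an LLM at different precisions.
# Illustrative only; real deployments also hold the KV cache,
# activations, and some higher-precision tensors (e.g. embeddings).

BYTES_PER_PARAM = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}

def weight_memory_gb(num_params: float, precision: str) -> float:
    """Gigabytes needed just to store the model weights."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

params_70b = 70e9
for fmt in ("FP16", "FP8", "FP4"):
    print(f"{fmt}: {weight_memory_gb(params_70b, fmt):.0f} GB")
# FP16: 140 GB, FP8: 70 GB, FP4: 35 GB
```

Halving the bits per weight halves both the memory footprint and the bytes that must stream through the memory system per token, which is where the bandwidth and throughput gains come from.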
Blackwell B200 delivers up to 4x more performance versus the previous generation on MLPerf Inference v4.1's Llama 2 70B workload. Improvements in Blackwell haven't stopped the continued acceleration of Hopper. In the last year, H100 performance has increased 3.4x in MLPerf thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year. Our ongoing work is incorporated into TensorRT-LLM, a purpose-built library to accelerate LLMs that contains state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT Deep Learning Inference library and leverages much of TensorRT's deep learning optimizations with additional LLM-specific improvements.
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2 as well as the 70B model size and the biggest model, 405B. These optimizations include custom quantization recipes, as well as parallelization techniques that efficiently split the model across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
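As a rough intuition for why interconnect bandwidth matters, the sketch below estimates the tensor-parallel all-reduce traffic per token on an eight-GPU system at NVLink's 900GB/s. The hidden size, layer count, and two-all-reduces-per-layer pattern are hypothetical illustration values, not measured figures:

```python
# Toy estimate of per-token tensor-parallel communication cost.
# Assumes a ring all-reduce, which moves roughly 2*(N-1)/N of the
# tensor per GPU, over NVLink's 900 GB/s GPU-to-GPU bandwidth.

def allreduce_seconds(tensor_bytes: float, num_gpus: int,
                      link_bytes_per_s: float = 900e9) -> float:
    traffic = 2 * (num_gpus - 1) / num_gpus * tensor_bytes
    return traffic / link_bytes_per_s

hidden = 8192        # hypothetical hidden size
layers = 80          # hypothetical transformer layer count
bytes_fp16 = 2
# two all-reduces per transformer layer (attention + MLP) is typical
per_token = 2 * layers * allreduce_seconds(hidden * bytes_fp16, 8)
print(f"~{per_token * 1e6:.1f} microseconds of NVLink traffic per token")
```

Even a few microseconds of communication per token per layer adds up at scale, which is why a slower interconnect would quickly become the bottleneck for tensor parallelism.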
Many LLM deployments parallelize the model across multiple GPUs rather than keeping the workload on a single GPU, which can become compute-bound. LLM serving seeks to balance low latency and high throughput, with the optimal parallelization technique depending on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
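The trade-off above can be sketched with a toy single-number model (all constants are made up for illustration): tensor parallelism divides each layer's compute across GPUs, shrinking per-request latency, while pipeline parallelism splits the layers into stages so multiple requests are in flight at once:

```python
# Toy model contrasting tensor parallelism (TP) and pipeline
# parallelism (PP). All constants are arbitrary illustration values.

LAYER_TIME = 1.0   # time for one GPU to run one layer (arbitrary units)
NUM_LAYERS = 32
NUM_GPUS = 8

# TP: every GPU works on every layer, so each layer finishes ~NUM_GPUS
# times faster (ignoring the all-reduce overhead, which in practice
# eats into this speedup).
tp_latency = NUM_LAYERS * LAYER_TIME / NUM_GPUS

# PP: layers are split into NUM_GPUS stages. One request still
# traverses all layers, but once the pipeline is full a result emerges
# every stage-time, raising steady-state throughput with far less
# per-layer communication than TP.
pp_latency = NUM_LAYERS * LAYER_TIME
pp_stage_time = NUM_LAYERS * LAYER_TIME / NUM_GPUS
pp_throughput = 1 / pp_stage_time   # requests per time unit, pipeline full

print(f"TP latency per request: {tp_latency}")
print(f"PP latency per request: {pp_latency}")
print(f"PP steady-state throughput: {pp_throughput}")
```

The toy model makes the latency difference obvious; the throughput advantage of pipeline parallelism comes from the communication terms the model omits, since PP only passes activations between adjacent stages instead of all-reducing across every GPU at every layer.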
The table below shows that tensor parallelism can deliver over 5x more throughput in minimum latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum throughput use cases.
For production deployments that seek to maximize throughput within a given latency budget, a platform needs the ability to effectively combine both techniques, as TensorRT-LLM does.
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and parallelism techniques bring optimal performance for each of them.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.