
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Whether on premises or in the cloud, applications expected to run in real time place significant demands on data center infrastructure, which must deliver both high throughput and low latency from a single platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library that delivers state-of-the-art performance on the latest LLMs. With improvements to the open-source Llama 70B model, which delivers very high accuracy, we've already improved minimum latency performance by 3.5x in less than a year.
We're constantly improving our platform performance and regularly publish updates. Each week, improvements to NVIDIA software libraries are published, letting customers get more from the very same GPUs.

In the most recent round of MLPerf Inference v4.1, we made our first-ever submission with the Blackwell platform. It delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats like FP4 reduce memory footprint and memory traffic, and also boost computational throughput. The submission takes advantage of Blackwell's second-generation Transformer Engine, and with the advanced quantization techniques in TensorRT Model Optimizer, it met the strict accuracy targets of the MLPerf benchmark.
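The memory savings from narrower formats are easy to see with back-of-the-envelope arithmetic. The sketch below only counts weight storage (it ignores quantization scale metadata, activations and the KV cache, which all add overhead in practice), but it shows why FP4 matters for a 70B-parameter model:

```python
# Approximate weight storage for a model at different precisions.
# Bytes per parameter: FP16 = 2, FP8 = 1, FP4 = 0.5.

def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Weight storage in gigabytes (1 GB = 1e9 bytes), weights only."""
    return num_params * bits_per_param / 8 / 1e9

llama_70b = 70e9  # parameter count

for bits in (16, 8, 4):
    print(f"FP{bits}: {weight_memory_gb(llama_70b, bits):.1f} GB")
# FP16: 140.0 GB, FP8: 70.0 GB, FP4: 35.0 GB
```

Halving the bytes moved per parameter also halves memory traffic, which is why narrower formats raise effective bandwidth as well as capacity.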
Blackwell B200 delivers up to 4x more performance versus the previous generation on MLPerf Inference v4.1's Llama 2 70B workload. Blackwell's arrival hasn't stopped the continued acceleration of Hopper: in the last year, H100 performance in MLPerf has increased 3.4x thanks to regular software advancements. That means NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year. Our ongoing work is incorporated into TensorRT-LLM, a purpose-built library that accelerates LLMs and contains state-of-the-art optimizations for performing inference efficiently on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT deep learning inference library and leverages much of TensorRT's optimization work, with additional LLM-specific improvements.
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2 as well as the 70B and largest 405B model sizes. These optimizations include custom quantization recipes, as well as parallelization techniques that split the model across multiple GPUs more efficiently, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
Many LLM deployments parallelize the model across multiple GPUs rather than keeping the workload on a single GPU, which can become compute-bound. LLM serving seeks to balance low latency and high throughput, and the optimal parallelization technique depends on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
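To make tensor parallelism concrete, here is a toy sketch in plain Python: a linear layer's weight matrix is split column-wise across "GPUs" (here, just list slices), each shard computes its piece of the output, and the pieces are concatenated. Real frameworks do the same thing with sharded kernels and NCCL collectives over NVLink; this is only the arithmetic skeleton:

```python
# Toy column-wise tensor parallelism for a linear layer.

def matmul(x, w):
    """Naive matrix multiply: x is (m, k), w is (k, n)."""
    m, k, n = len(x), len(w), len(w[0])
    return [[sum(x[i][p] * w[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

def split_columns(w, shards):
    """Split weight matrix w column-wise into `shards` equal pieces."""
    step = len(w[0]) // shards
    return [[row[s * step:(s + 1) * step] for row in w] for s in range(shards)]

x = [[1.0, 2.0]]                 # one token's activations (1 x 2)
w = [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0]]       # full weight matrix (2 x 4)

# Each "GPU" multiplies against its own column shard...
partials = [matmul(x, shard) for shard in split_columns(w, 2)]
# ...and an all-gather concatenates the partial outputs.
out = [sum((p[0] for p in partials), [])]
assert out == matmul(x, w)       # matches the unsharded result
```

Because every shard works on the same token at the same time, the combined FLOPs of all GPUs shorten each step, which is exactly why tensor parallelism helps minimum latency.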
The table below shows that tensor parallelism can deliver over 5x more throughput in minimum latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum throughput use cases.
For production deployments that seek to maximize throughput within a given latency budget, a platform needs to combine both techniques effectively, as TensorRT-LLM does.
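Choosing a configuration under a latency budget can be framed as a simple constrained maximization. The sketch below is purely illustrative: the configuration names and the (latency, throughput) numbers are hypothetical placeholders, not published benchmark results, and real deployments would profile each candidate:

```python
# Hypothetical selection of a parallelism config under a latency budget.
# All numbers below are illustrative placeholders.

configs = {
    # name: (time-to-first-token in ms, tokens/s per server)
    "TP8":     (120, 3000),  # tensor parallel across 8 GPUs
    "PP8":     (450, 4500),  # pipeline parallel across 8 GPUs
    "TP4xPP2": (200, 4000),  # combined tensor + pipeline parallelism
}

def pick(latency_budget_ms: float) -> str:
    """Return the highest-throughput config that meets the latency budget."""
    eligible = {k: v for k, v in configs.items() if v[0] <= latency_budget_ms}
    if not eligible:
        raise ValueError("no configuration meets the latency budget")
    return max(eligible, key=lambda k: eligible[k][1])

print(pick(130))   # tight budget: only pure tensor parallelism qualifies
print(pick(250))   # moderate budget: the combined config wins
print(pick(1000))  # loose budget: pure pipeline parallelism wins
```

The point is structural: the combined configuration only wins in the middle of the latency range, which is why a serving stack needs all three options rather than one fixed strategy.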
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and parallelism techniques bring optimal performance for each of them.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.