
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Both on premises and in the cloud, applications that are expected to run in real time place significant demands on data center infrastructure to simultaneously deliver high throughput and low latency with one platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes the state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library to deliver state-of-the-art performance on the latest LLMs. With improvements to the open-source Llama 70B model, which delivers very high accuracy, we've already improved minimum latency performance by 3.5x in less than a year.
We're constantly improving our platform performance and regularly publish performance updates. Each week, improvements to NVIDIA software libraries are published, allowing customers to get more from the very same GPUs.
Chart: NVIDIA has increased performance on the Llama 70B model by 3.5x.

In the most recent round of MLPerf Inference v4.1, we made our first-ever submission with the Blackwell platform. It delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats, like FP4, reduce memory footprint and memory traffic, and also boost computational throughput. This is made possible by Blackwell's second-generation Transformer Engine; combined with the advanced quantization techniques in TensorRT Model Optimizer, the Blackwell submission met the strict accuracy targets of the MLPerf benchmark.
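To build intuition for how a narrower format like FP4 works, the toy sketch below quantizes a block of values to the E2M1 grid with one shared per-block scale factor. This is a pedagogical illustration only, not the Transformer Engine or TensorRT Model Optimizer implementation, and the block size and scaling scheme are assumptions for the example.

```python
# The eight non-negative magnitudes representable in the FP4 E2M1 format.
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(values):
    """Quantize a block of floats to FP4 codes plus one shared scale factor."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 6.0 if max_abs > 0 else 1.0  # map the largest magnitude onto 6.0
    codes = []
    for v in values:
        target = abs(v) / scale
        # Round to the nearest representable FP4 magnitude.
        mag = min(FP4_GRID, key=lambda g: abs(g - target))
        codes.append(mag if v >= 0 else -mag)
    return codes, scale

def dequantize_block(codes, scale):
    """Recover approximate full-precision values from codes and the block scale."""
    return [c * scale for c in codes]

block = [0.1, -0.4, 1.5, -3.0, 6.0]
codes, scale = quantize_block(block)
approx = dequantize_block(codes, scale)
```

Each value now needs only 4 bits plus a small amortized cost for the shared scale, which is where the memory-footprint and memory-traffic savings come from; the quantization error introduced by the coarse grid is what accuracy-preserving recipes must keep in check.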
Chart: Blackwell B200 delivers up to 4x more performance versus the previous generation on MLPerf Inference v4.1's Llama 2 70B workload.

Improvements in Blackwell haven't stopped the continued acceleration of Hopper. In the last year, Hopper performance has increased 3.4x in MLPerf on H100 thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
Chart: These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year.

Our ongoing work is incorporated into TensorRT-LLM, a purpose-built library to accelerate LLMs that contains state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT Deep Learning Inference library and leverages much of TensorRT's deep learning optimizations with additional LLM-specific improvements.
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2 as well as the 70B model size and the biggest model, 405B. These optimizations include custom quantization recipes, as well as efficient use of parallelization techniques to more efficiently split the model across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
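To see why interconnect bandwidth matters so much, a quick back-of-the-envelope calculation helps: tensor parallelism requires collective operations such as all-reduce between GPUs at every layer. The sketch below estimates the ideal time for one ring all-reduce over a 900GB/s link, the figure cited above. The message size and the ring algorithm are illustrative assumptions; real performance depends on topology, kernel overlap, and the collective library (e.g., NCCL).

```python
def ring_allreduce_time_us(message_bytes, num_gpus, link_gbps=900.0):
    """Ideal ring all-reduce: each GPU sends and receives 2*(N-1)/N of the message."""
    traffic = 2.0 * (num_gpus - 1) / num_gpus * message_bytes
    seconds = traffic / (link_gbps * 1e9)  # convert GB/s to bytes/s
    return seconds * 1e6

# Hypothetical activation tensor for one layer:
# batch 8, sequence 1024, hidden 8192, FP16 (2 bytes per element).
msg = 8 * 1024 * 8192 * 2  # 128 MiB
t = ring_allreduce_time_us(msg, num_gpus=8)
```

Even under these ideal assumptions, one 128 MiB all-reduce across eight GPUs costs a few hundred microseconds; multiplied across all the layers of a large model, a slower fabric would quickly dominate end-to-end latency.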
Many LLM deployments use parallelism rather than keeping the workload on a single GPU, which can create compute bottlenecks. LLM serving must balance low latency with high throughput, and the optimal parallelization technique depends on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
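The core idea of tensor parallelism can be sketched in a few lines: a layer's weight matrix is split column-wise across GPUs, each GPU computes its shard of the output in parallel, and the shards are combined. The pure-Python "GPUs" below are simulated for illustration only; they stand in for the real sharded kernels.

```python
def matmul(x, w):
    """Multiply x (list of rows) by w (list of rows): returns x @ w."""
    cols = len(w[0])
    return [[sum(xi[k] * w[k][j] for k in range(len(w))) for j in range(cols)]
            for xi in x]

def split_columns(w, parts):
    """Split w column-wise into `parts` shards, one per simulated GPU."""
    n = len(w[0]) // parts
    return [[row[i * n:(i + 1) * n] for row in w] for i in range(parts)]

x = [[1.0, 2.0]]                    # one activation row
w = [[1.0, 2.0, 3.0, 4.0],          # 2x4 weight matrix
     [5.0, 6.0, 7.0, 8.0]]

shards = split_columns(w, parts=2)  # "GPU 0" and "GPU 1" each hold 2 columns
partials = [matmul(x, shard) for shard in shards]
# Concatenate each GPU's output columns to recover the full result.
y = [sum((p[i] for p in partials), []) for i in range(len(x))]
assert y == matmul(x, w)
```

Because every GPU works on the same token at the same time, the latency of each layer shrinks with the number of GPUs, which is exactly why tensor parallelism wins in minimum-latency scenarios; the price is the inter-GPU communication needed to combine the shards.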
The table below shows that tensor parallelism can deliver over 5x more throughput in minimum latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum throughput use cases.
For production deployments that seek to maximize throughput within a given latency budget, a platform needs to be able to combine both techniques effectively, as TensorRT-LLM does.
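One way to picture combining the two techniques is as a 2D layout over the available GPUs: contiguous blocks of layers are assigned to pipeline stages, and each stage's layers are then sharded across a tensor-parallel group. The mapping below is an illustrative sketch, not TensorRT-LLM's actual parallelism mapping.

```python
def build_layout(num_layers, pp, tp):
    """Assign contiguous layer blocks to pipeline stages; each stage spans tp GPUs."""
    per_stage = num_layers // pp
    layout = {}
    for stage in range(pp):
        layers = list(range(stage * per_stage, (stage + 1) * per_stage))
        gpus = [stage * tp + r for r in range(tp)]  # tensor-parallel ranks in this stage
        layout[stage] = {"layers": layers, "gpus": gpus}
    return layout

# An 80-layer model (a Llama-70B-class network) on 8 GPUs: PP=4 stages x TP=2 ranks.
layout = build_layout(num_layers=80, pp=4, tp=2)
```

Tuning the PP-by-TP split trades latency against throughput: a larger TP group accelerates each token, while more pipeline stages keep more concurrent requests in flight for higher aggregate throughput.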
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Table: Different scenarios have different requirements, and parallelism techniques bring optimal performance for each of these scenarios.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.