
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Both on premises and in the cloud, applications expected to run in real time place significant demands on data center infrastructure, which must simultaneously deliver high throughput and low latency from a single platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi, and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library for delivering state-of-the-art performance on the latest LLMs. With improvements to the open-source Llama 70B model, which delivers very high accuracy, we've already improved minimum-latency performance by 3.5x in less than a year.
We're constantly improving our platform performance and regularly publish performance updates. Improvements to NVIDIA software libraries ship each week, allowing customers to get more from the very same GPUs.
In the most recent round of MLPerf Inference v4.1, we made our first-ever submission with the Blackwell platform. It delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats, like FP4, reduce memory footprint and memory traffic and also boost computational throughput. The submission takes advantage of Blackwell's second-generation Transformer Engine, and with the advanced quantization techniques that are part of TensorRT Model Optimizer, it met the strict accuracy targets of the MLPerf benchmark.
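To make the idea of narrow-precision inference concrete, here is a minimal, illustrative sketch of block-scaled 4-bit (FP4 E2M1) quantization in pure Python. This is not the TensorRT Model Optimizer implementation; the function names and block size are assumptions chosen for illustration.

```python
# The eight non-negative magnitudes representable in FP4 E2M1.
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_dequantize(weights, block_size=16):
    """Quantize a flat list of floats to FP4 with one scale per block,
    then dequantize, returning the values the model would actually use."""
    out, scales = [], []
    for start in range(0, len(weights), block_size):
        block = weights[start:start + block_size]
        # One higher-precision scale per block maps the largest magnitude to 6.0.
        scale = max(abs(w) for w in block) / 6.0 or 1.0
        scales.append(scale)
        for w in block:
            # Snap the scaled magnitude to the nearest FP4 grid point.
            mag = min(FP4_GRID, key=lambda g: abs(abs(w) / scale - g))
            out.append(mag * scale * (1.0 if w >= 0 else -1.0))
    return out, scales

weights = [0.12, -0.55, 0.30, 0.02, -0.91, 0.44, 0.08, -0.27]
deq, scales = quantize_dequantize(weights, block_size=8)
```

The per-block scale is what lets 4-bit values cover weights of very different magnitudes: the largest weight in each block is represented exactly, and the rest land on the nearest of 16 grid points.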
Blackwell B200 delivers up to 4x more performance versus the previous generation on MLPerf Inference v4.1's Llama 2 70B workload. The arrival of Blackwell hasn't stopped the continued acceleration of Hopper: in the last year, H100 performance in MLPerf has increased 3.4x thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year. Our ongoing work is incorporated into TensorRT-LLM, which contains state-of-the-art optimizations to perform LLM inference efficiently on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT deep learning inference library and leverages much of TensorRT's deep learning optimizations with additional LLM-specific improvements.
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2, in both the 70B size and the largest model, 405B. These optimizations include custom quantization recipes as well as parallelization techniques that split the model more efficiently across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
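A back-of-the-envelope model shows why that GPU-to-GPU bandwidth matters. In tensor parallelism, every layer ends with an all-reduce across GPUs; a ring all-reduce moves roughly 2*(n-1)/n of the buffer through each GPU's links. The sketch below is a simplified lower bound under assumed conditions (it ignores link latency, protocol overhead, and compute/communication overlap).

```python
def allreduce_time_us(bytes_per_gpu: int, n_gpus: int,
                      link_gb_s: float = 900.0) -> float:
    """Lower-bound ring all-reduce time in microseconds, given each GPU's
    interconnect bandwidth in GB/s (900 GB/s for fourth-gen NVLink)."""
    # Each GPU sends and receives about 2*(n-1)/n of the buffer.
    traffic = 2 * (n_gpus - 1) / n_gpus * bytes_per_gpu
    return traffic / (link_gb_s * 1e9) * 1e6

# Example: all-reducing 1 MiB of FP16 activations across 8 GPUs stays
# in the low-microsecond range at NVLink speeds.
t = allreduce_time_us(1 << 20, 8)
```

On a fabric an order of magnitude slower, the same collective would take an order of magnitude longer per layer, which is exactly the communication bottleneck a robust interconnect avoids.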
Many LLM deployments use parallelism rather than keeping the workload on a single GPU, which can become compute-bound. LLM deployments seek to balance low latency and high throughput, with the optimal parallelization technique depending on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
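The reason tensor parallelism can use multiple GPUs on a single token is that a weight matrix can be column-sharded so each GPU computes an independent slice of the output, which is then concatenated. The pure-Python toy below illustrates that equivalence; it is a sketch, not a real multi-GPU kernel (which would use NCCL collectives over NVLink).

```python
def matmul(x, w):
    """Multiply x (list of rows) by w (list of rows): returns x @ w."""
    cols = len(w[0])
    return [[sum(xi[k] * w[k][j] for k in range(len(w)))
             for j in range(cols)] for xi in x]

def split_cols(w, parts):
    """Shard w column-wise into `parts` contiguous slices, one per GPU."""
    n = len(w[0]) // parts
    return [[row[i * n:(i + 1) * n] for row in w] for i in range(parts)]

x = [[1.0, 2.0]]                  # one token, hidden size 2
w = [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0]]        # projects hidden size 2 -> 4

full = matmul(x, w)               # single-GPU reference result
shards = [matmul(x, ws) for ws in split_cols(w, 2)]   # 2-way tensor parallel
# Concatenating the per-shard outputs reproduces the full result exactly.
combined = [[v for s in shards for v in s[r]] for r in range(len(x))]
```

Because each shard's matmul is smaller, the per-token latency drops roughly with the number of GPUs, at the cost of inter-GPU communication after each layer.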
The table below shows that tensor parallelism can deliver over 5x more throughput in minimum latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum throughput use cases.
For production deployments that seek to maximize throughput within a given latency budget, a platform needs to provide the ability to effectively combine both techniques like in TensorRT-LLM.
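Pipeline parallelism's throughput advantage can be seen with a simple fill-and-drain model: with S stages and M in-flight microbatches, an idealized pipeline takes S + M - 1 stage-steps, so per-stage utilization M / (S + M - 1) approaches 100% as M grows. This is a sketch under idealized assumptions (uniform stage times, no bubbles beyond fill and drain).

```python
def pipeline_steps(stages: int, microbatches: int) -> int:
    """Stage-steps to push M microbatches through an S-stage pipeline:
    S - 1 steps to fill, then one microbatch completes per step."""
    return stages + microbatches - 1

def pipeline_utilization(stages: int, microbatches: int) -> float:
    """Fraction of stage-steps in which each stage does useful work."""
    return microbatches / pipeline_steps(stages, microbatches)

# Example: 4 stages, 16 microbatches -> 19 steps, ~84% utilization;
# a single microbatch would leave each stage idle 75% of the time.
```

This is why pipeline parallelism favors throughput-oriented serving with many concurrent requests, while latency-critical serving of a single request leans on tensor parallelism instead.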
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and parallelism techniques bring optimal performance for each of them.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and to deploy their existing models using less infrastructure, enhancing their return on investment.