
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Both on premises and in the cloud, applications that are expected to run in real time place significant demands on data center infrastructure to simultaneously deliver high throughput and low latency with one platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes the state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library that delivers state-of-the-art performance on the latest LLMs. With improvements to the open-source Llama 70B model, which delivers very high accuracy, we've already improved minimum latency performance by 3.5x in less than a year.
We're constantly improving our platform performance and regularly publish performance updates. Each week, improvements to NVIDIA software libraries are published, allowing customers to get more from the very same GPUs.
In the most recent round of MLPerf Inference v4.1, we made our first-ever submission with the Blackwell platform. It delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats like FP4 reduce memory footprint and memory traffic, and also boost computational throughput. The submission takes advantage of Blackwell's second-generation Transformer Engine, and with the advanced quantization techniques in TensorRT Model Optimizer, it met the strict accuracy targets of the MLPerf benchmark.
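To make the memory argument concrete, here is a toy illustration of block-scaled FP4 (E2M1) quantization, the kind of narrow format described above. The block size, helper names, and example weights are all hypothetical; this is not the TensorRT Model Optimizer API, just a sketch of how 4-bit values plus a shared scale approximate higher-precision weights.

```python
# Toy sketch of block-scaled FP4 (E2M1) quantization.
# The 8 non-negative magnitudes representable in E2M1
# (2 exponent bits, 1 mantissa bit); a sign bit doubles
# this to 16 code points.
FP4_E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block, max_mag=6.0):
    """Quantize a block of floats to FP4 with one shared scale."""
    scale = max(abs(x) for x in block) / max_mag or 1.0
    quantized = []
    for x in block:
        # Snap the scaled magnitude to the nearest representable value.
        mag = min(FP4_E2M1_VALUES, key=lambda v: abs(v - abs(x) / scale))
        quantized.append(mag if x >= 0 else -mag)
    return quantized, scale

def dequantize_block(quantized, scale):
    return [q * scale for q in quantized]

# Hypothetical block of 8 weights.
weights = [0.12, -0.95, 0.33, 2.4, -1.7, 0.02, 0.6, -0.4]
q, s = quantize_block(weights)
approx = dequantize_block(q, s)
# Each weight now occupies 4 bits plus one shared scale per block,
# cutting memory footprint and traffic versus 16-bit storage.
```

The per-block scale is what lets such a coarse grid track real weight distributions; production recipes add calibration and per-tensor choices on top of this idea.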
Blackwell B200 delivers up to 4x more performance versus the previous generation on MLPerf Inference v4.1's Llama 2 70B workload. The arrival of Blackwell hasn't stopped the continued acceleration of Hopper. In the last year, Hopper performance has increased 3.4x in MLPerf on H100 thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year. Our ongoing work is incorporated into TensorRT-LLM, a purpose-built library to accelerate LLMs that contains state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT deep learning inference library and leverages much of TensorRT's deep learning optimizations while adding LLM-specific improvements.
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2, as well as the 70B model and the biggest model, 405B. These optimizations include custom quantization recipes, as well as efficient use of parallelization techniques to split the model across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
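To see why that fabric bandwidth matters, here is a back-of-envelope estimate of one tensor-parallel all-reduce over NVLink, using the standard bandwidth-bound ring all-reduce cost model. The 900GB/s figure comes from the text; the hidden size, token count, and precision are hypothetical.

```python
# Back-of-envelope cost of one tensor-parallel all-reduce over NVLink.

def ring_allreduce_seconds(message_bytes, num_gpus, link_bw_bytes_per_s):
    """Bandwidth-bound ring all-reduce: each GPU sends and receives
    2*(N-1)/N of the message over its link."""
    traffic = 2 * (num_gpus - 1) / num_gpus * message_bytes
    return traffic / link_bw_bytes_per_s

NVLINK_BW = 900e9          # 900 GB/s per H200 GPU (from the text)
hidden = 8192              # hypothetical hidden dimension
batch_tokens = 64          # hypothetical tokens in flight
bytes_per_activation = 2   # FP16 activations

msg = batch_tokens * hidden * bytes_per_activation
t = ring_allreduce_seconds(msg, num_gpus=8, link_bw_bytes_per_s=NVLINK_BW)
# t lands in the low microseconds; a transformer forward pass performs
# hundreds of these, so a slower fabric multiplies the cost every layer.
```

With these illustrative numbers the all-reduce takes a couple of microseconds; over a PCIe-class link it would be an order of magnitude slower, which is the communication bottleneck the text warns about.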
Many LLM deployments parallelize the workload across multiple GPUs rather than keeping it on a single GPU, which can become compute-bound. LLM serving seeks to balance low latency and high throughput, with the optimal parallelization technique depending on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
Tensor parallelism can deliver over 5x more throughput in minimum-latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum-throughput use cases.
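The trade-off above can be sketched with a toy analytic model. All constants here are hypothetical, and real behavior depends on the model, batch size, and interconnect; the point is only the shape of the trade-off: tensor parallelism divides per-token compute time, while pipeline parallelism keeps latency fixed but multiplies tokens in flight.

```python
# Toy analytic model of the tensor- vs pipeline-parallelism trade-off.

def tensor_parallel(per_token_compute_s, num_gpus, comm_overhead_s):
    """TP splits each layer: per-token latency drops ~N-fold, at the
    price of a per-token communication cost."""
    latency = per_token_compute_s / num_gpus + comm_overhead_s
    return latency, 1.0 / latency

def pipeline_parallel(per_token_compute_s, num_gpus):
    """PP chains groups of layers: a token still traverses all stages
    (latency unchanged), but N tokens are in flight at once."""
    latency = per_token_compute_s
    throughput = num_gpus / per_token_compute_s
    return latency, throughput

COMPUTE = 40e-3   # hypothetical 40 ms of per-token compute on one GPU
COMM = 1e-3       # hypothetical 1 ms of TP communication per token

tp_lat, tp_tput = tensor_parallel(COMPUTE, 8, COMM)
pp_lat, pp_tput = pipeline_parallel(COMPUTE, 8)
# TP wins on latency (6 ms vs 40 ms); PP wins on throughput
# (200 tok/s vs ~167 tok/s) because it avoids per-layer communication.
```

Even this crude model reproduces the qualitative result: pick tensor parallelism when time-to-first-token matters most, pipeline parallelism when aggregate server throughput does.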
For production deployments that seek to maximize throughput within a given latency budget, a platform needs to be able to combine both techniques effectively, as TensorRT-LLM does.
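Combining the two in TensorRT-LLM is a matter of build-time configuration. The sketch below assumes the example checkpoint-conversion script shipped in the TensorRT-LLM repository; script names and flags vary by release, so treat it as illustrative rather than a definitive recipe.

```shell
# Illustrative only: paths, script names, and flags depend on your
# TensorRT-LLM release. --tp_size splits each layer across 8 GPUs
# (tensor parallelism); --pp_size chains two groups of layers
# (pipeline parallelism), for 16 GPUs total.
python convert_checkpoint.py \
    --model_dir ./llama-3.1-405b \
    --output_dir ./ckpt_tp8_pp2 \
    --dtype float16 \
    --tp_size 8 \
    --pp_size 2

trtllm-build \
    --checkpoint_dir ./ckpt_tp8_pp2 \
    --output_dir ./engine_tp8_pp2
```

Sweeping the TP/PP split against a latency budget is how deployments find the throughput-maximizing configuration the text describes.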
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and parallelism techniques bring optimal performance for each of them.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.