
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Both on premises and in the cloud, applications that are expected to run in real time place significant demands on data center infrastructure to simultaneously deliver high throughput and low latency with one platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes the state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library that delivers state-of-the-art performance on the latest LLMs. With improvements to the open-source Llama 70B model, which delivers very high accuracy, we've improved minimum-latency performance by 3.5x in less than a year.
We're constantly improving our platform performance and regularly publish performance updates. Each week, improvements to NVIDIA software libraries are published, allowing customers to get more from the very same GPUs.
NVIDIA has increased performance on the Llama 70B model by 3.5x.

In the most recent round of MLPerf Inference, v4.1, we made our first-ever submission with the Blackwell platform. It delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats, like FP4, reduce memory footprint and memory traffic and also boost computational throughput. This takes advantage of Blackwell's second-generation Transformer Engine, and with the advanced quantization techniques that are part of TensorRT Model Optimizer, the Blackwell submission met the strict accuracy targets of the MLPerf benchmark.
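To make the idea concrete, here is a minimal, self-contained sketch of block-scaled 4-bit quantization in plain Python. It is purely illustrative: the block size, the signed range of -7..7, and the per-block absolute-max scaling are our own simplifying assumptions, not the actual FP4 format or the TensorRT Model Optimizer recipe.

```python
# Illustrative sketch of block-scaled 4-bit quantization.
# Not the real FP4 format or the TensorRT Model Optimizer recipe.

def quantize_block_4bit(values, block_size=16):
    """Quantize floats to signed 4-bit integers (-7..7) with one
    float scale per block; returns (int blocks, scales)."""
    ints, scales = [], []
    for start in range(0, len(values), block_size):
        block = values[start:start + block_size]
        amax = max(abs(v) for v in block) or 1.0
        scale = amax / 7.0  # map the block's largest magnitude to 7
        scales.append(scale)
        ints.append([max(-7, min(7, round(v / scale))) for v in block])
    return ints, scales

def dequantize_block(ints, scales):
    """Reconstruct approximate floats from 4-bit ints and block scales."""
    out = []
    for block, scale in zip(ints, scales):
        out.extend(q * scale for q in block)
    return out

weights = [0.02, -0.51, 0.33, 1.20, -0.75, 0.10, 0.05, -1.00]
q, s = quantize_block_4bit(weights, block_size=4)
approx = dequantize_block(q, s)
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print(max_err)  # error stays within half a quantization step per block
```

The point of the per-block scale is that an outlier in one block doesn't destroy precision everywhere else, which is why narrow formats can hold accuracy when paired with careful calibration.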
Blackwell B200 delivers up to 4x more performance versus the previous generation on MLPerf Inference v4.1's Llama 2 70B workload.

The arrival of Blackwell hasn't stopped the continued acceleration of Hopper. In the last year, Hopper performance in MLPerf has increased 3.4x on H100 thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year.

Our ongoing work is incorporated into TensorRT-LLM, a purpose-built library to accelerate LLMs that contains state-of-the-art optimizations for efficient inference on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT deep learning inference library and leverages many of TensorRT's optimizations while adding LLM-specific improvements.
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2 as well as model sizes 70B and the largest model, 405B. These optimizations include custom quantization recipes and efficient use of parallelization techniques that split the model across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
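That interconnect bandwidth matters because tensor parallelism forces an all-reduce of activations between GPUs at every layer. A quick back-of-envelope sketch, assuming a ring all-reduce (where each GPU moves roughly 2(N-1)/N of the tensor) and the 900GB/s NVLink figure above; the tensor size is an invented example, not a measurement:

```python
# Back-of-envelope estimate of ring all-reduce time over NVLink.
# Illustrative assumptions only; real latency includes launch and
# synchronization overheads not modeled here.

def ring_allreduce_seconds(tensor_bytes, num_gpus, link_gbps=900):
    """In a ring all-reduce, each GPU sends and receives about
    2*(N-1)/N of the tensor; divide that traffic by link bandwidth."""
    traffic = 2 * (num_gpus - 1) / num_gpus * tensor_bytes
    return traffic / (link_gbps * 1e9)

# Example: all-reduce 16 MiB of activations across 8 GPUs at 900 GB/s.
t = ring_allreduce_seconds(16 * 2**20, num_gpus=8)
print(f"{t * 1e6:.1f} microseconds per all-reduce")
```

Multiply that per-layer cost by the number of transformer layers and it becomes clear why a slower GPU-to-GPU fabric would quickly dominate end-to-end token latency.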
Many LLM deployments use parallelism rather than keeping the workload on a single GPU, which can become a compute bottleneck. LLM serving seeks to balance low latency and high throughput, and the optimal parallelization technique depends on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
The table below shows that tensor parallelism can deliver over 5x more throughput in minimum latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum throughput use cases.
For production deployments that seek to maximize throughput within a given latency budget, a platform must be able to combine both techniques effectively, as TensorRT-LLM does.
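One way to picture that selection process is a small search: enumerate the (tensor, pipeline) splits for a fixed GPU count and keep the highest-throughput configuration whose latency fits the budget. Every number in this sketch (base latency, base throughput, overhead terms) is invented for illustration; it is not a TensorRT-LLM scheduler or measured data.

```python
# Toy search for a (tensor, pipeline) parallel split under a latency budget.
# All model numbers are invented for illustration, not measurements.

BASE_LATENCY_MS = 80.0   # per-token latency on one hypothetical GPU
BASE_THROUGHPUT = 100.0  # tokens/s for one pipeline replica at TP=1
NUM_GPUS = 8

def candidate_configs(num_gpus):
    """Yield (tp, pp) pairs whose product uses all GPUs."""
    for tp in (1, 2, 4, 8):
        if num_gpus % tp == 0:
            yield tp, num_gpus // tp

def estimate(tp, pp):
    """Crude model: TP divides latency but adds communication overhead;
    PP multiplies server throughput via concurrent pipeline stages."""
    latency_ms = BASE_LATENCY_MS / tp + 2.0 * (tp - 1)
    throughput = BASE_THROUGHPUT * tp * 0.9 ** (tp - 1) * pp
    return latency_ms, throughput

def best_config(budget_ms):
    """Return the (tp, pp, throughput) maximizing throughput
    among configurations that meet the latency budget."""
    best = None
    for tp, pp in candidate_configs(NUM_GPUS):
        latency, throughput = estimate(tp, pp)
        if latency <= budget_ms and (best is None or throughput > best[2]):
            best = (tp, pp, throughput)
    return best

print(best_config(budget_ms=30.0))  # a tight budget favors more TP
print(best_config(budget_ms=90.0))  # a loose budget favors more PP
```

Even with these made-up numbers, the sketch reproduces the qualitative trade-off above: tight latency budgets push toward tensor parallelism, while relaxed budgets let pipeline parallelism maximize aggregate throughput.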
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and parallelism techniques bring optimal performance for each of them.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.