
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Both on premises and in the cloud, applications that are expected to run in real time place significant demands on data center infrastructure to simultaneously deliver high throughput and low latency with one platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library that delivers state-of-the-art performance on the latest LLMs. With improvements to the open-source Llama 70B model, which delivers very high accuracy, we've already improved minimum latency performance by 3.5x in less than a year.
We're constantly improving our platform performance and regularly publish updates. Each week, improvements to NVIDIA software libraries are published, allowing customers to get more from the very same GPUs.
In the most recent round of MLPerf Inference v4.1, we made our first-ever submission with the Blackwell platform, which delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats like FP4 reduce memory footprint and memory traffic and boost computational throughput. The approach takes advantage of Blackwell's second-generation Transformer Engine, and with advanced quantization techniques that are part of TensorRT Model Optimizer, the Blackwell submission met the strict accuracy targets of the MLPerf benchmark.
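To make the idea of narrow-precision quantization concrete, here is a minimal, self-contained sketch of block-wise round-to-nearest quantization to FP4 (E2M1) values in NumPy. This is an illustration of the general technique only; it is not the recipe used in TensorRT Model Optimizer, and the block size and test tensor are arbitrary assumptions.

```python
import numpy as np

# Representable magnitudes of the FP4 E2M1 format (plus sign and zero).
FP4_VALUES = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_fp4_blockwise(weights, block_size=32):
    """Round-to-nearest FP4 quantization with one scale per block.

    Returns the dequantized weights so the approximation error is easy
    to inspect. Assumes the number of elements divides block_size.
    """
    flat = weights.reshape(-1, block_size)
    # Map each block's max magnitude onto FP4's max value (6.0).
    scales = np.abs(flat).max(axis=1, keepdims=True) / FP4_VALUES[-1]
    scales[scales == 0] = 1.0  # avoid division by zero for all-zero blocks
    normalized = flat / scales
    # Snap each magnitude to the nearest representable FP4 value.
    idx = np.abs(np.abs(normalized)[..., None] - FP4_VALUES).argmin(axis=-1)
    quantized = np.sign(normalized) * FP4_VALUES[idx]
    return (quantized * scales).reshape(weights.shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 64)).astype(np.float32)
w_q = quantize_fp4_blockwise(w)
rel_err = np.abs(w - w_q).mean() / np.abs(w).mean()
```

Because each 4-bit value is paired with a higher-precision per-block scale, the memory footprint shrinks dramatically while the relative error on typical weight distributions stays small; production recipes add calibration and per-layer sensitivity analysis on top of this basic scheme.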
Blackwell B200 delivers up to 4x more performance than the previous generation on MLPerf Inference v4.1's Llama 2 70B workload. The arrival of Blackwell hasn't stopped the continued acceleration of Hopper. In the last year, H100 performance in MLPerf has increased 3.4x thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year. Our ongoing work is incorporated into TensorRT-LLM, a purpose-built library to accelerate LLMs that contains state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT deep learning inference library and leverages much of TensorRT's deep learning optimizations, with additional LLM-specific improvements.
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2 as well as model sizes 70B and the largest model, 405B. These optimizations include custom quantization recipes as well as efficient use of parallelization techniques to split the model across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
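To see why interconnect bandwidth matters so much for tensor parallelism, a rough back-of-envelope estimate helps: each transformer layer requires all-reduce operations over the activation tensors. The sketch below estimates one such all-reduce over NVLink using a ring-style cost model; the hidden size, token count, and precision are illustrative assumptions, not figures from the MLPerf submissions above.

```python
# Back-of-envelope: per-layer all-reduce time over NVLink under tensor
# parallelism. All workload numbers below are illustrative assumptions.

NVLINK_BW = 900e9          # bytes/s of GPU-to-GPU bandwidth per H200 (900 GB/s)
N_GPUS = 8                 # one HGX H200 node
HIDDEN = 8192              # Llama 2 70B hidden dimension
TOKENS = 256               # tokens in flight (batch x sequence chunk)
BYTES_PER_ELEM = 2         # FP16/BF16 activations

payload = HIDDEN * TOKENS * BYTES_PER_ELEM          # bytes reduced per all-reduce
# A ring all-reduce moves roughly 2*(N-1)/N of the payload through each link.
traffic_per_gpu = 2 * (N_GPUS - 1) / N_GPUS * payload
time_us = traffic_per_gpu / NVLINK_BW * 1e6          # microseconds
```

Under these assumptions the all-reduce completes in single-digit microseconds; over a slower fabric, the same communication step would be an order of magnitude longer and could dominate per-layer latency, which is exactly the bottleneck a robust GPU-to-GPU interconnect avoids.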
Many LLM deployments parallelize the workload across GPUs rather than keeping it on a single GPU, which can become a compute bottleneck. LLM serving seeks to balance low latency and high throughput, with the optimal parallelization technique depending on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
The table below shows that tensor parallelism can deliver over 5x more throughput in minimum latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum throughput use cases.
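The two techniques can be illustrated with a toy NumPy sketch: tensor parallelism splits a single weight matrix across devices so they cooperate on one matmul, while pipeline parallelism assigns whole layers to different devices so microbatches flow through them as stages. The "GPUs" and matrix sizes here are stand-ins for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 16))      # activations: batch x hidden
w1 = rng.standard_normal((16, 32))    # layer 1 weights
w2 = rng.standard_normal((32, 16))    # layer 2 weights

# Tensor parallelism: split w1's columns across two "GPUs"; each computes a
# slice of the output in parallel, and the slices are then gathered.
y_part0 = x @ w1[:, :16]              # GPU 0's column shard
y_part1 = x @ w1[:, 16:]              # GPU 1's column shard
y_tp = np.concatenate([y_part0, y_part1], axis=1)

# Pipeline parallelism: each "GPU" owns an entire layer; activations flow
# between stages, so different microbatches occupy different stages at once.
y_stage0 = x @ w1                     # GPU 0 runs layer 1
y_pp = y_stage0 @ w2                  # GPU 1 runs layer 2

# The tensor-parallel shards reproduce the full matmul exactly.
assert np.allclose(y_tp, x @ w1)
```

The trade-off follows from the structure: tensor parallelism puts every GPU to work on each token, cutting latency but requiring fast collectives after every sharded layer, while pipeline parallelism exchanges only stage-boundary activations, making it cheaper on communication and better for aggregate throughput.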
For production deployments that seek to maximize throughput within a given latency budget, a platform needs to be able to combine both techniques effectively, as TensorRT-LLM does.
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and parallelism techniques bring optimal performance for each of these scenarios.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.