
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Both on premises and in the cloud, applications that are expected to run in real time place significant demands on data center infrastructure, which must deliver high throughput and low latency simultaneously from a single platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library to deliver state-of-the-art performance on the latest LLMs. With improvements to the open-source Llama 70B model, which delivers very high accuracy, we've already improved minimum latency performance by 3.5x in less than a year.
We're constantly improving our platform performance and regularly publish performance updates. Improvements to NVIDIA software libraries ship regularly, allowing customers to get more from the very same GPUs.
NVIDIA has increased performance on the Llama 70B model by 3.5x. In the most recent round of MLPerf Inference v4.1, we made our first-ever submission with the Blackwell platform, which delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats, like FP4, reduce memory footprint and memory traffic, and also boost computational throughput. The approach takes advantage of Blackwell's second-generation Transformer Engine, and, with advanced quantization techniques that are part of TensorRT Model Optimizer, the Blackwell submission met the strict accuracy targets of the MLPerf benchmark.
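As a back-of-the-envelope illustration (not NVIDIA code, and illustrative numbers only), the weight footprint of a 70B-parameter model shrinks linearly with precision, which is why FP4 cuts memory traffic so sharply relative to FP16:

```python
# Sketch: weight memory footprint of a 70B-parameter model at
# different precisions. Counts weights only (no KV cache/activations).

def weight_footprint_gb(num_params: float, bits_per_param: int) -> float:
    """Model weight size in gigabytes at a given precision."""
    return num_params * bits_per_param / 8 / 1e9

params = 70e9  # Llama 70B
for name, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
    print(f"{name}: {weight_footprint_gb(params, bits):.0f} GB")
# FP16: 140 GB
# FP8: 70 GB
# FP4: 35 GB
```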
Blackwell B200 delivers up to 4x more performance versus the previous generation on MLPerf Inference v4.1's Llama 2 70B workload. The arrival of Blackwell hasn't stopped the continued acceleration of Hopper. In the last year, H100 performance in MLPerf has increased 3.4x thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year. Our ongoing work is incorporated into TensorRT-LLM, a purpose-built library to accelerate LLMs that contains state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT deep learning inference library and leverages many of TensorRT's deep learning optimizations with additional LLM-specific improvements.
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2, in sizes from 70B up to the largest, 405B. These optimizations include custom quantization recipes, as well as parallelization techniques that split the model more efficiently across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
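A rough sketch makes the multi-GPU requirement concrete (illustrative only; it counts weights alone and ignores KV cache and activations, so real deployments need even more memory). The 141 GB figure is the H200's HBM3e capacity:

```python
# Sketch: minimum GPU count just to hold a model's weights.
import math

def min_gpus_for_weights(num_params: float, bytes_per_param: float,
                         gpu_mem_gb: float) -> int:
    """Smallest number of GPUs whose combined memory fits the weights."""
    weights_gb = num_params * bytes_per_param / 1e9
    return math.ceil(weights_gb / gpu_mem_gb)

# 405B parameters at FP16 (2 bytes each) -> 810 GB of weights,
# which won't fit on any single GPU.
print(min_gpus_for_weights(405e9, 2, 141))  # -> 6
```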
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
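To get a feel for why interconnect bandwidth matters, here is a rough estimate (an assumed textbook formula, not a benchmark) of the time a ring all-reduce, the collective used to combine tensor-parallel partial results, spends on the wire at NVLink speeds:

```python
# Sketch: ring all-reduce wire time. A ring all-reduce moves roughly
# 2*(n-1)/n of the message size per GPU; this ignores latency and
# overlap, so it is a lower bound on communication time.

def allreduce_time_us(message_mb: float, n_gpus: int, bw_gb_s: float) -> float:
    """Approximate all-reduce transfer time in microseconds."""
    traffic_gb = (message_mb / 1e3) * 2 * (n_gpus - 1) / n_gpus
    return traffic_gb / bw_gb_s * 1e6

# A 64 MB activation tensor across 8 GPUs at 900 GB/s per GPU:
print(f"{allreduce_time_us(64, 8, 900):.0f} us")  # prints "124 us"
```

At lower bandwidths this cost grows proportionally, which is how a weak interconnect turns tensor parallelism into a communication bottleneck.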
Many LLM deployments use parallelism rather than keeping the workload on a single GPU, which can become a compute bottleneck. LLM serving seeks to balance low latency and high throughput, with the optimal parallelization technique depending on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
The table below shows that tensor parallelism can deliver over 5x more throughput in minimum latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum throughput use cases.
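The two strategies can be sketched with a toy example in plain Python (no GPUs; the worker split is simulated with list slicing). Tensor parallelism splits a single layer's matrix multiply across workers and sums the partial results, which is the all-reduce step; pipeline parallelism instead gives each worker a contiguous block of layers and only passes activations point-to-point:

```python
# Toy illustration of tensor vs. pipeline parallelism.

def matvec(W, x):
    """Dense matrix-vector product on plain lists."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

W = [[1, 2, 3, 4], [5, 6, 7, 8]]
x = [1, 1, 1, 1]

# Tensor parallelism: split W's columns (and x) across 2 "workers",
# then sum partial outputs (the all-reduce step).
part0 = matvec([row[:2] for row in W], x[:2])   # worker 0
part1 = matvec([row[2:] for row in W], x[2:])   # worker 1
tp_out = [a + b for a, b in zip(part0, part1)]

# Pipeline parallelism: worker 0 runs layer 1, worker 1 runs layer 2;
# only the intermediate activations travel between workers.
L1 = [[1, 0, 0, 0], [0, 1, 0, 0]]  # layer 1: 4 -> 2
L2 = [[1, 1]]                      # layer 2: 2 -> 1
pp_out = matvec(L2, matvec(L1, x))

print(tp_out, pp_out)  # tp_out matches the unsplit matvec(W, x)
```

The tensor-parallel path pays an all-reduce on every layer but brings all workers' compute to bear on each token, favoring latency; the pipeline path only ships activations between stages, favoring aggregate throughput.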
For production deployments that seek to maximize throughput within a given latency budget, a platform needs to provide the ability to effectively combine both techniques, as TensorRT-LLM does.
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and the right parallelism technique delivers optimal performance for each of them.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.