
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Whether on premises or in the cloud, applications expected to run in real time place significant demands on data center infrastructure, which must deliver high throughput and low latency simultaneously from a single platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library that delivers state-of-the-art performance on the latest LLMs. With improvements to the open-source Llama 70B model, which delivers very high accuracy, we've already improved minimum-latency performance by 3.5x in less than a year.
We're constantly improving our platform performance and regularly publish performance updates. Each week, improvements to NVIDIA software libraries are published, allowing customers to get more from the very same GPUs.
In the most recent round of MLPerf Inference v4.1, we made our first-ever submission with the Blackwell platform. It delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats like FP4 reduce memory footprint and memory traffic while also boosting computational throughput. The process takes advantage of Blackwell's second-generation Transformer Engine, and with advanced quantization techniques that are part of TensorRT Model Optimizer, the Blackwell submission met the strict accuracy targets of the MLPerf benchmark.
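To make the mechanics concrete, here is a toy, pure-Python sketch of block-wise FP4 (E2M1) quantization. The value grid matches E2M1's representable magnitudes, but the block size and scaling recipe are simplified assumptions for illustration; TensorRT Model Optimizer's actual recipes are considerably more sophisticated.

```python
# Toy block-wise FP4 (E2M1) quantization sketch. The grid below lists the
# positive magnitudes representable in E2M1; the per-block max-abs scaling
# is an illustrative assumption, not a production recipe.

FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block):
    """Scale a block into FP4 range, then snap each value to the nearest grid point."""
    scale = max(abs(x) for x in block) / 6.0 or 1.0
    quantized = []
    for x in block:
        mag = min(FP4_GRID, key=lambda g: abs(abs(x) / scale - g))
        quantized.append(mag * scale * (1 if x >= 0 else -1))
    return quantized, scale

weights = [0.11, -0.42, 0.75, -1.30, 0.02, 0.58, -0.90, 1.10]
deq, scale = quantize_block(weights)
print(deq)

# Memory footprint: 4 bits per weight plus one shared FP16 scale per block,
# versus 16 bits per weight in FP16.
fp16_bits = len(weights) * 16
fp4_bits = len(weights) * 4 + 16
print(f"{fp16_bits / fp4_bits:.1f}x smaller")
```

For realistic block sizes (e.g. dozens of weights sharing one scale), the footprint reduction approaches the full 4x over FP16; the accuracy question is whether the snapped values stay close enough to the originals, which is what the MLPerf accuracy targets check.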
Blackwell B200 delivers up to 4x more performance versus the previous generation on MLPerf Inference v4.1's Llama 2 70B workload. Blackwell's arrival hasn't stopped the continued acceleration of Hopper: in the last year, H100 performance in MLPerf has increased 3.4x thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year. Our ongoing work is incorporated into TensorRT-LLM, a purpose-built library to accelerate LLMs that contains state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT Deep Learning Inference library and leverages much of TensorRT's deep learning optimizations with additional LLM-specific improvements.
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2, at both the 70B size and the largest model, 405B. These optimizations include custom quantization recipes as well as parallelization techniques that efficiently split the model across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
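As a back-of-the-envelope illustration of what 900GB/s of GPU-to-GPU bandwidth means for inference, here's a small transfer-time calculation. The activation payload sizes are invented for illustration; only the NVLink bandwidth figure comes from the text above.

```python
# Back-of-the-envelope GPU-to-GPU transfer time over fourth-gen NVLink.
# The 900 GB/s figure is NVLink's per-GPU bandwidth; the payload below
# (hidden size, dtype width, batch size) is a hypothetical example.

NVLINK_BW = 900e9  # bytes per second

def transfer_time_us(num_bytes, bandwidth=NVLINK_BW):
    """Idealized transfer time in microseconds, ignoring latency and protocol overhead."""
    return num_bytes / bandwidth * 1e6

# Hypothetical payload: one step's activations for a 16384-wide hidden state,
# FP16 (2 bytes/element), batch of 64 concurrent requests.
hidden, bytes_per_elem, batch = 16384, 2, 64
payload = hidden * bytes_per_elem * batch  # 2 MiB
print(f"{transfer_time_us(payload):.2f} us")  # ≈ 2.33 us
```

The point of the sketch: at these bandwidths, the per-step communication that tensor parallelism requires can stay in the low microseconds, which is why a fast interconnect fabric is a prerequisite for splitting a model across GPUs without communication becoming the bottleneck.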
Many LLM deployments use parallelism rather than keeping the workload on a single GPU, which can become a compute bottleneck. LLM serving seeks to balance low latency and high throughput, and the optimal parallelization technique depends on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
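A toy cost model can illustrate this tradeoff. All numbers below are invented assumptions rather than measurements; the point is only that splitting each layer's math across GPUs (tensor parallelism) divides per-request latency, while splitting layers into stages (pipeline parallelism) raises steady-state server throughput:

```python
# Toy latency/throughput model contrasting tensor and pipeline parallelism.
# GPU count, layer count, and per-layer time are illustrative assumptions.

GPUS = 4
LAYERS = 8
LAYER_TIME = 1.0  # time for one GPU to run one layer, arbitrary units

def tensor_parallel_latency():
    # Each layer's math is split across all GPUs, so per-layer time shrinks
    # (communication overhead is ignored in this sketch).
    return LAYERS * LAYER_TIME / GPUS

def pipeline_parallel_throughput():
    # Layers are grouped into stages, one per GPU; once the pipeline is full,
    # a request completes every stage-time interval.
    stage_time = (LAYERS / GPUS) * LAYER_TIME
    return 1.0 / stage_time

print(tensor_parallel_latency())       # 2.0 vs. 8.0 on a single GPU
print(pipeline_parallel_throughput())  # 0.5 vs. 0.125 requests per time unit
```

In this idealized model, tensor parallelism cuts a single request's latency 4x, while pipeline parallelism quadruples how many requests complete per unit time once the pipeline is full; real systems pay communication and pipeline-bubble costs that this sketch omits.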
In our measurements, tensor parallelism can deliver over 5x more throughput in minimum-latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum-throughput use cases.
For production deployments that seek to maximize throughput within a given latency budget, a platform needs the ability to combine both techniques effectively, as TensorRT-LLM does.
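One way to picture combining the two is a small search over (tensor-parallel, pipeline-parallel) splits of a fixed GPU budget. Everything here (the base latency, the communication penalty, and the cost formulas) is a deliberately crude hypothetical sketch, not TensorRT-LLM's actual scheduling logic:

```python
# Crude search for a TP/PP split that maximizes throughput under a latency
# budget. All constants and cost formulas are invented assumptions.

TOTAL_GPUS = 8
BASE_LATENCY = 8.0   # single-GPU latency per request, arbitrary units
COMM_PENALTY = 0.2   # assumed extra latency per additional tensor-parallel rank

def config_metrics(tp, pp):
    # TP divides compute time but adds communication; a pp-stage pipeline
    # completes roughly pp requests per end-to-end latency once it is full.
    latency = BASE_LATENCY / tp + COMM_PENALTY * (tp - 1)
    throughput = pp / latency
    return latency, throughput

def best_config(latency_budget):
    candidates = [(tp, TOTAL_GPUS // tp) for tp in (1, 2, 4, 8)]
    feasible = [(tp, pp) for tp, pp in candidates
                if config_metrics(tp, pp)[0] <= latency_budget]
    return max(feasible, key=lambda c: config_metrics(*c)[1], default=None)

print(best_config(latency_budget=3.0))   # tight budget favors more TP
print(best_config(latency_budget=10.0))  # loose budget favors more PP
```

Under this toy model, a tight latency budget forces tensor parallelism (paying its communication tax), while a loose budget lets pipeline parallelism dominate for throughput, which mirrors the tradeoff described above.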
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and parallelism techniques bring optimal performance to each of them.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.