
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Both on premises and in the cloud, applications that are expected to run in real time place significant demands on data center infrastructure to simultaneously deliver high throughput and low latency with one platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes the state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library that delivers state-of-the-art performance on the latest LLMs. We regularly publish these software updates, allowing customers to get more from the very same GPUs: in less than a year, we've improved minimum-latency performance on the open-source Llama 70B model, which delivers very high accuracy, by 3.5x.

In the most recent round of MLPerf Inference v4.1, we made our first-ever submission with the Blackwell platform. It delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats like FP4 reduce memory footprint and memory traffic, and also boost computational throughput. The submission takes advantage of Blackwell's second-generation Transformer Engine, and with the advanced quantization techniques in TensorRT Model Optimizer, it met the strict accuracy targets of the MLPerf benchmark.
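To illustrate the idea behind block-scaled FP4 quantization, here is a minimal pure-Python sketch. It is not the TensorRT Model Optimizer API; it only shows the core mechanism: each block of weights shares one scale factor, and scaled values are snapped to the small grid of magnitudes representable in FP4 (E2M1). All weight values below are made-up examples.

```python
# Conceptual sketch of block-scaled FP4-style quantization (NOT the
# TensorRT Model Optimizer API). Each block of weights shares one
# scale so values fit the tiny FP4 (E2M1) magnitude grid.

FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # E2M1 magnitudes

def quantize_block(block):
    """Quantize one block: scale so max |w| maps to 6.0, snap to grid."""
    amax = max(abs(w) for w in block) or 1.0
    scale = amax / 6.0  # per-block scale, stored alongside 4-bit codes
    dequantized = []
    for w in block:
        mag = min(FP4_GRID, key=lambda g: abs(abs(w) / scale - g))
        dequantized.append(mag * scale * (1 if w >= 0 else -1))
    return dequantized, scale

weights = [0.12, -0.4, 0.33, 0.9, -0.05, 0.6, -0.77, 0.21]
deq, scale = quantize_block(weights)
err = max(abs(a - b) for a, b in zip(weights, deq))
print(f"scale={scale:.4f}, max abs error={err:.4f}")
```

Storing 4-bit codes plus one scale per block cuts weight memory roughly 4x versus FP16, which is where the memory-traffic and footprint savings come from; real recipes add calibration to keep accuracy within benchmark targets.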
Blackwell B200 delivers up to 4x more performance than the previous generation on the MLPerf Inference v4.1 Llama 2 70B workload. Blackwell's arrival hasn't stopped the continued acceleration of Hopper. In the last year, H100 performance in MLPerf has increased 3.4x thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year. Our ongoing work is incorporated into TensorRT-LLM, a purpose-built library that contains state-of-the-art optimizations to perform LLM inference efficiently on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT deep learning inference library and leverages much of TensorRT's optimizations while adding LLM-specific improvements.
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2 and model sizes from 70B up to the largest, 405B. These optimizations include custom quantization recipes, as well as efficient use of parallelization techniques to split the model across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
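To see why interconnect bandwidth matters, here is a back-of-envelope sketch of the time a ring all-reduce (the collective tensor parallelism relies on per layer) spends on the wire, using the 900GB/s NVLink figure from the text. The message size (batch 64, hidden size 8192, FP16) is a hypothetical example, and real systems add launch latency and protocol overhead on top of this lower bound.

```python
# Back-of-envelope estimate of ring all-reduce time over NVLink.
# In a ring all-reduce, each GPU sends and receives 2*(N-1)/N of the
# message, so per-GPU wire time is nearly independent of GPU count.

def allreduce_time_us(message_bytes, n_gpus, link_bytes_per_s=900e9):
    """Lower-bound wire time (microseconds) for one ring all-reduce."""
    traffic = 2 * (n_gpus - 1) / n_gpus * message_bytes
    return traffic / link_bytes_per_s * 1e6

# Hypothetical activation tensor: batch 64 x hidden 8192 x 2 bytes (FP16)
msg = 64 * 8192 * 2
for n in (2, 4, 8):
    print(f"{n} GPUs: {allreduce_time_us(msg, n):.2f} us per all-reduce")
```

Because this cost recurs in every transformer layer, a slower fabric multiplies the overhead by the layer count; that is why tensor parallelism benefits so directly from NVLink and NVSwitch bandwidth.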
Many LLM deployments parallelize across multiple GPUs rather than keeping the workload on a single GPU, which can become a compute bottleneck. LLM serving seeks to balance low latency and high throughput, with the optimal parallelization technique depending on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
The table below shows that tensor parallelism can deliver over 5x more throughput in minimum latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum throughput use cases.
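The trade-off above can be sketched with a toy cost model. The numbers here (8ms of per-token compute on one GPU, 0.1ms of communication overhead per extra GPU) are illustrative assumptions, not measurements: tensor parallelism divides per-token compute across GPUs at the price of per-layer communication, while pipeline parallelism leaves per-token latency near single-GPU speed but overlaps many requests across stages.

```python
# Toy model (illustrative numbers only) of the tensor- vs pipeline-
# parallelism trade-off: TP cuts per-token latency, PP raises
# aggregate throughput by keeping all stages busy with in-flight work.

def tensor_parallel(t_compute_ms, n_gpus, t_comm_ms=0.1):
    """Per-token latency: compute split N ways plus comm overhead."""
    latency = t_compute_ms / n_gpus + t_comm_ms * (n_gpus - 1)
    return latency, 1.0 / latency  # (latency ms, tokens/ms for one stream)

def pipeline_parallel(t_compute_ms, n_stages):
    """A token still traverses every stage, but stages overlap requests."""
    latency = t_compute_ms
    throughput = n_stages / t_compute_ms  # n_stages requests in flight
    return latency, throughput

t = 8.0  # hypothetical per-token compute time on one GPU (ms)
tp_lat, tp_tput = tensor_parallel(t, 4)
pp_lat, pp_tput = pipeline_parallel(t, 4)
print(f"TP4: latency {tp_lat:.2f} ms | PP4: throughput {pp_tput:.2f} tok/ms")
```

Under these assumptions, four-way tensor parallelism wins on latency while four-way pipeline parallelism wins on total throughput, matching the qualitative split described above; combining the two lets a deployment pick a point between the extremes.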
For production deployments that seek to maximize throughput within a given latency budget, a platform needs to effectively combine both techniques, as TensorRT-LLM does.
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and parallelism techniques bring optimal performance for each of them.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.