
Large language models and the applications they power give organizations unprecedented opportunities to extract deeper insights from their data and to build entirely new classes of applications.
But with opportunities often come challenges.
Both on premises and in the cloud, applications expected to run in real time place significant demands on data center infrastructure, which must deliver high throughput and low latency simultaneously with a single platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library that delivers state-of-the-art performance on the latest LLMs. Thanks to improvements to our software serving the open-source Llama 70B model, which delivers very high accuracy, we've already improved minimum-latency performance by 3.5x in less than a year.
We're constantly improving our platform performance and regularly publish updates. Each week, improvements to NVIDIA software libraries are published, allowing customers to get more from the very same GPUs.

In the most recent round of MLPerf Inference v4.1, we made our first-ever submission with the Blackwell platform. It delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats, like FP4, reduce memory footprint and memory traffic, and also boost computational throughput. The submission takes advantage of Blackwell's second-generation Transformer Engine, and with advanced quantization techniques that are part of TensorRT Model Optimizer, it met the strict accuracy targets of the MLPerf benchmark.
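To make the numerics concrete, here is a toy sketch of 4-bit floating-point (E2M1) block quantization. This is not the TensorRT Model Optimizer API; it only illustrates the idea behind FP4 formats: a small grid of representable magnitudes plus a per-block scale factor.

```python
# Toy illustration of FP4 (E2M1) block quantization. Assumption: the
# E2M1 grid below (1 sign, 2 exponent, 1 mantissa bit) and a single
# shared scale per block, as in typical block-scaled 4-bit schemes.

# The 8 non-negative magnitudes representable in E2M1
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(values, grid=FP4_GRID):
    """Quantize a block of floats to FP4 with one shared scale factor."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / grid[-1]  # map the largest magnitude onto 6.0
    quantized = []
    for v in values:
        # snap the scaled magnitude to the nearest representable value
        mag = min(grid, key=lambda g: abs(abs(v) / scale - g))
        quantized.append(mag * scale * (1 if v >= 0 else -1))
    return quantized, scale

weights = [0.02, -0.11, 0.35, -0.6, 0.09, 0.47, -0.25, 0.6]
deq, scale = quantize_block(weights)
for w, q in zip(weights, deq):
    print(f"{w:+.2f} -> {q:+.3f}")
```

Each weight is stored as 4 bits plus the shared scale, roughly quartering memory traffic versus FP16; the real pipeline additionally calibrates scales to meet accuracy targets.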
Blackwell B200 delivers up to 4x more performance than the previous generation on MLPerf Inference v4.1's Llama 2 70B workload. The arrival of Blackwell hasn't stopped the continued acceleration of Hopper. In the past year, H100 performance in MLPerf has increased 3.4x thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year. Our ongoing work is incorporated into TensorRT-LLM, a purpose-built library that accelerates LLMs and contains state-of-the-art optimizations for efficient inference on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT deep learning inference library and leverages much of TensorRT's optimization machinery, with additional LLM-specific improvements.
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2 as well as model sizes 70B and 405B, the largest. These optimizations include custom quantization recipes, as well as efficient use of parallelization techniques to split the model across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
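A back-of-envelope calculation shows why that interconnect bandwidth matters. The sketch below estimates the time for a ring all-reduce, the collective that tensor parallelism runs after sharded matrix multiplies, given NVLink's 900GB/s per-GPU bandwidth. The message size and the simple cost model are illustrative assumptions, not measured numbers.

```python
# Illustrative bandwidth model: time for a ring all-reduce across
# tensor-parallel GPUs at NVLink's 900 GB/s per-GPU bandwidth.
# Ignores link latency, protocol overhead, and compute/comm overlap.

def ring_allreduce_time(message_bytes, num_gpus, link_bps=900e9):
    """Each GPU sends/receives ~2*(N-1)/N of the message in a ring all-reduce."""
    traffic = 2 * (num_gpus - 1) / num_gpus * message_bytes
    return traffic / link_bps  # seconds

# e.g. a hypothetical 16 MiB activation tensor reduced across 8 H200 GPUs
t = ring_allreduce_time(16 * 2**20, 8)
print(f"{t * 1e6:.1f} microseconds")  # ~32.6 us under these assumptions
```

Because this collective runs for every layer of every generated token, a slower interconnect would quickly dominate per-token latency.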
Many LLM deployments parallelize the model across GPUs rather than keeping the workload on a single GPU, which can become a compute bottleneck. LLM serving seeks to balance low latency and high throughput, and the optimal parallelization technique depends on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
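The core mechanic of tensor parallelism can be sketched on a toy matrix-vector product: the weight matrix is sharded across "GPUs", each computes its slice concurrently, and the slices are gathered. The sizes and pure-Python execution here are illustrative only; real implementations live in libraries like TensorRT-LLM. Pipeline parallelism, by contrast, would assign whole layers to stages rather than sharding within a layer.

```python
# Toy sketch of tensor parallelism on a matrix-vector product
# (pure Python, hypothetical sizes; hardware runs the shards concurrently).

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def tensor_parallel_matvec(W, x, num_gpus):
    """Shard W's rows across GPUs, compute partial outputs, then all-gather."""
    chunk = len(W) // num_gpus
    shards = [W[i * chunk:(i + 1) * chunk] for i in range(num_gpus)]
    partials = [matvec(shard, x) for shard in shards]  # concurrent on real HW
    return [y for part in partials for y in part]      # all-gather of slices

W = [[1, 2], [3, 4], [5, 6], [7, 8]]
x = [1, -1]
assert tensor_parallel_matvec(W, x, 2) == matvec(W, x)
print(matvec(W, x))  # [-1, -1, -1, -1]
```

Since each shard is a quarter (or half) of the original matrix, per-token compute time shrinks with the GPU count, which is exactly what minimum-latency serving exploits.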
The table below shows that tensor parallelism can deliver over 5x more throughput in minimum latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum throughput use cases.
For production deployments that seek to maximize throughput within a given latency budget, a platform must be able to combine both techniques effectively, as TensorRT-LLM does.
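Choosing the split can be framed as a small search: enumerate the tensor-parallel x pipeline-parallel configurations that fit the GPU count, discard those over the latency budget, and keep the highest-throughput survivor. The cost model and all its constants below are made-up illustrative numbers, not TensorRT-LLM's scheduler.

```python
# Hypothetical cost model (illustrative constants only) for picking a
# tensor-parallel (tp) x pipeline-parallel (pp) split on 8 GPUs.

def per_token_latency_ms(tp, pp, base_ms=40.0, comm_ms=0.5):
    # compute shrinks with tp; each extra tp rank adds all-reduce overhead;
    # each extra pipeline stage adds a small handoff cost
    return base_ms / tp + comm_ms * (tp - 1) + 0.2 * (pp - 1)

def server_throughput(tp, pp):
    # pipeline parallelism keeps more requests in flight concurrently
    return pp / per_token_latency_ms(tp, pp)

def best_config(num_gpus=8, latency_budget_ms=15.0):
    configs = [(tp, num_gpus // tp) for tp in (1, 2, 4, 8) if num_gpus % tp == 0]
    feasible = [c for c in configs if per_token_latency_ms(*c) <= latency_budget_ms]
    return max(feasible, key=lambda c: server_throughput(*c))

print(best_config())  # (4, 2) under this toy model
```

Under this toy model, a relaxed budget favors mixing in pipeline stages for throughput, while a tight budget pushes the search toward full tensor parallelism, mirroring the tradeoff described above.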
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and parallelism techniques bring optimal performance for each of these scenarios.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.