
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Both on premises and in the cloud, applications that are expected to run in real time place significant demands on data center infrastructure to simultaneously deliver high throughput and low latency with one platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library that delivers state-of-the-art performance on the latest LLMs. With improvements to the open-source Llama 70B model, which delivers very high accuracy, we've already improved minimum latency performance by 3.5x in less than a year.
We're constantly improving our platform performance and regularly publish performance updates. Each week, improvements to NVIDIA software libraries are published, allowing customers to get more from the very same GPUs. For example, in just a few months' time, we've improved our low-latency Llama 70B performance by 3.5x.
In the most recent round of MLPerf Inference 4.1, we made our first-ever submission with the Blackwell platform. It delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats like FP4 reduce memory footprint and memory traffic and also boost computational throughput. The process takes advantage of Blackwell's second-generation Transformer Engine, and with the advanced quantization techniques in TensorRT Model Optimizer, the Blackwell submission met the strict accuracy targets of the MLPerf benchmark.
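To make the idea behind FP4 concrete, the sketch below fake-quantizes a weight vector to the E2M1 grid (the set of magnitudes an FP4 value can represent) using a per-block scale factor, the same general recipe block-scaled quantization schemes use. This is an illustrative numpy simulation only, not the actual Transformer Engine or TensorRT Model Optimizer implementation; the block size and the rule mapping each block's largest magnitude to 6.0 are assumptions for the example.

```python
import numpy as np

# Magnitudes representable by the FP4 E2M1 format (sign is a separate bit).
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def fp4_fake_quantize(x, block_size=32):
    """Quantize-dequantize x to an FP4-like grid with a per-block scale."""
    flat = x.reshape(-1, block_size)
    out = np.empty_like(flat)
    for i, block in enumerate(flat):
        amax = np.max(np.abs(block))
        # Choose the scale so the block's largest magnitude maps to 6.0,
        # the largest representable FP4 value.
        scale = amax / 6.0 if amax > 0 else 1.0
        scaled = np.abs(block) / scale
        # Round each magnitude to the nearest representable FP4 value.
        idx = np.abs(scaled[:, None] - FP4_GRID[None, :]).argmin(axis=1)
        out[i] = np.sign(block) * FP4_GRID[idx] * scale
    return out.reshape(x.shape)

rng = np.random.default_rng(0)
w = rng.standard_normal(1024)
wq = fp4_fake_quantize(w)
rel_err = np.linalg.norm(w - wq) / np.linalg.norm(w)
```

Each weight now needs only 4 bits plus a shared per-block scale, a 4x reduction versus FP16 storage and traffic, while the per-block scaling keeps the quantization error modest.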
Blackwell B200 delivers up to 4x more performance versus the previous generation on MLPerf Inference v4.1's Llama 2 70B workload. The arrival of Blackwell hasn't stopped the continued acceleration of Hopper. In the last year, H100 performance in MLPerf has increased 3.4x thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year. Our ongoing work is incorporated into TensorRT-LLM, a purpose-built library to accelerate LLMs that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT Deep Learning Inference library and leverages much of TensorRT's deep learning optimizations with additional LLM-specific improvements.
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2 as well as the 70B model size and the biggest model, 405B. These optimizations include custom quantization recipes as well as efficient use of parallelization techniques to split the model across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
Many LLM deployments use parallelism rather than keeping the workload on a single GPU, which can become a compute bottleneck. LLM serving seeks to balance low latency and high throughput, with the optimal parallelization technique depending on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
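The core of tensor parallelism can be sketched in a few lines: a layer's weight matrix is split column-wise across GPUs, each GPU computes its slice of the output concurrently, and the slices are reassembled over the interconnect. The numpy simulation below is illustrative only, not NVIDIA's implementation; the shapes and the four-way split are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 512))     # activations for a batch of tokens
W = rng.standard_normal((512, 2048))  # one layer's weight matrix

# Tensor parallelism: split W column-wise across 4 simulated GPUs.
num_gpus = 4
shards = np.split(W, num_gpus, axis=1)

# Each "GPU" computes its slice of the output. On real hardware these
# matmuls run concurrently, cutting the time to produce each token.
partials = [x @ w_shard for w_shard in shards]

# An all-gather over NVLink (here, a concatenate) reassembles the result.
y_tp = np.concatenate(partials, axis=1)
```

Because every GPU must exchange activations at each layer, this is exactly the pattern that depends on a fast GPU-to-GPU fabric like NVLink to avoid communication bottlenecks.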
The table below shows that tensor parallelism can deliver over 5x more throughput in minimum latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum throughput use cases.
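Pipeline parallelism, by contrast, assigns contiguous layers to stages and streams micro-batches through them, so that once the pipeline is full, every stage is busy on a different micro-batch at the same time. The sketch below is an illustrative numpy simulation under assumed shapes and a stand-in tanh "layer", not an actual serving implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
# Eight hypothetical layers, each reduced to a single weight matrix.
layers = [rng.standard_normal((256, 256)) / 16.0 for _ in range(8)]

def run_stage(h, stage_layers):
    """Run the activations through one pipeline stage's layers."""
    for W in stage_layers:
        h = np.tanh(h @ W)  # stand-in for a real transformer layer
    return h

# Pipeline parallelism: contiguous layers assigned to 2 simulated GPUs.
stages = [layers[:4], layers[4:]]

# Micro-batches flow through the stages in order. On real hardware the
# stages overlap: while stage 1 handles micro-batch k, stage 0 already
# works on micro-batch k+1, raising aggregate throughput (at some cost
# in per-request latency).
micro_batches = [rng.standard_normal((2, 256)) for _ in range(4)]
outputs = []
for mb in micro_batches:
    h = mb
    for stage in stages:
        h = run_stage(h, stage)
    outputs.append(h)
```

Only activations at stage boundaries cross the interconnect, which is why pipeline parallelism communicates less per token than tensor parallelism and favors throughput over latency.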
For production deployments that seek to maximize throughput within a given latency budget, a platform must be able to combine both techniques effectively, as TensorRT-LLM does.
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and parallelism techniques bring optimal performance for each of these scenarios.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.