
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Both on premises and in the cloud, applications expected to run in real time place significant demands on data center infrastructure, which must simultaneously deliver high throughput and low latency from a single platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library that delivers state-of-the-art performance on the latest LLMs. With improvements to the open-source Llama 70B model, which delivers very high accuracy, we've already improved minimum latency performance by 3.5x in less than a year.
We're constantly improving our platform and regularly publish performance updates. Each week, improvements to NVIDIA software libraries are published, allowing customers to get more from the same GPUs. For example, in just a few months' time, we've improved our low-latency Llama 70B performance by 3.5x.
In the most recent round of MLPerf Inference v4.1, we made our first-ever submission with the Blackwell platform, which delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats like FP4 reduce memory footprint and memory traffic, and also boost computational throughput. The process takes advantage of Blackwell's second-generation Transformer Engine, and with the advanced quantization techniques in TensorRT Model Optimizer, the Blackwell submission met the strict accuracy targets of the MLPerf benchmark.
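To make the precision trade-off concrete, here is a minimal sketch of the arithmetic behind block-scaled FP4 (E2M1) quantization: each block of values shares one scale factor, and every value snaps to the nearest 4-bit representable level. This is an illustration of the underlying idea only, not the TensorRT Model Optimizer API.

```python
# Magnitudes representable by a 4-bit E2M1 float (plus a sign bit).
FP4_LEVELS = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(values):
    """Quantize one block of floats to FP4 levels with a shared scale."""
    amax = max(abs(v) for v in values) or 1.0
    scale = amax / 6.0  # map the block's largest magnitude onto FP4's max (6.0)
    quantized = []
    for v in values:
        mag = min(FP4_LEVELS, key=lambda q: abs(abs(v) / scale - q))
        quantized.append(mag if v >= 0 else -mag)
    return quantized, scale

def dequantize_block(quantized, scale):
    return [q * scale for q in quantized]

block = [0.02, -0.31, 0.74, 1.50, -0.05, 0.40, -1.10, 0.90]
q, s = quantize_block(block)
restored = dequantize_block(q, s)
```

Because the scale is shared per block rather than per tensor, outliers in one block don't destroy the resolution of every other block, which is part of why such recipes can hold accuracy while halving memory traffic versus FP8.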
Blackwell B200 delivers up to 4x more performance versus the previous generation on MLPerf Inference v4.1's Llama 2 70B workload. Blackwell's arrival hasn't stopped the continued acceleration of Hopper: in the last year, H100 performance has increased 3.4x in MLPerf thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year. Our ongoing work is incorporated into TensorRT-LLM, a purpose-built library to accelerate LLMs that contains state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT deep learning inference library and leverages much of TensorRT's deep learning optimizations with additional LLM-specific improvements.
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2, in both the 70B size and the largest model, 405B. These optimizations include custom quantization recipes as well as parallelization techniques that efficiently split the model across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
Many LLM deployments use parallelism rather than keeping the workload on a single GPU, where compute can become a bottleneck. LLM deployments must balance low latency and high throughput, with the optimal parallelization technique depending on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
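The core mechanism of tensor parallelism can be sketched in a few lines: each GPU holds a column slice of a layer's weight matrix, computes its slice of the output, and the slices are concatenated (an all-gather over NVLink in practice). Plain Python lists stand in for device tensors here; this is an illustration of the splitting scheme, not TensorRT-LLM's implementation.

```python
def matmul(x, w):
    """x: [m][k], w: [k][n] -> [m][n]"""
    return [[sum(x[i][t] * w[t][j] for t in range(len(w)))
             for j in range(len(w[0]))] for i in range(len(x))]

def split_columns(w, parts):
    """Give each of `parts` 'GPUs' a contiguous column slice of w."""
    step = len(w[0]) // parts
    return [[row[p * step:(p + 1) * step] for row in w] for p in range(parts)]

x = [[1.0, 2.0]]                      # one token, hidden size 2
w = [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0]]            # projects hidden size 2 -> 4

shards = split_columns(w, parts=2)    # one weight shard per "GPU"
partials = [matmul(x, shard) for shard in shards]   # computed in parallel
out = [sum((p[0] for p in partials), [])]           # concatenate output slices
assert out == matmul(x, w)            # identical result to the unsplit layer
```

Because each shard's matmul is independent, all GPUs work on the same token simultaneously, which is why tensor parallelism cuts per-token latency; the cost is the inter-GPU gather after every layer, which is where NVLink bandwidth matters.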
The table below shows that tensor parallelism can deliver over 5x more throughput in minimum latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum throughput use cases.
For production deployments that seek to maximize throughput within a given latency budget, a platform needs to provide the ability to effectively combine both techniques, as TensorRT-LLM does.
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and parallelism techniques bring optimal performance for each of them.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.