
Large language models and the applications they power create unprecedented opportunities for organizations to extract deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Both on premises and in the cloud, applications expected to run in real time place significant demands on data center infrastructure, which must deliver high throughput and low latency simultaneously from a single platform investment.
To drive continuous performance improvements and improve the return on infrastructure investments, NVIDIA regularly optimizes state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library that delivers state-of-the-art performance on the latest LLMs. With improvements to the open-source Llama 70B model, which delivers very high accuracy, we've already improved minimum latency performance by 3.5x in less than a year.
We're constantly improving our platform performance and regularly publish performance updates. Improvements to NVIDIA software libraries ship frequently, allowing customers to get more from the very same GPUs. For example, in just a few months' time, we've improved our low-latency Llama 70B performance by 3.5x.
NVIDIA has increased performance on the Llama 70B model by 3.5x.

In the most recent round of MLPerf Inference v4.1, we made our first-ever submission with the Blackwell platform, and it delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats, like FP4, reduce memory footprint and memory traffic, and also boost computational throughput. The process takes advantage of Blackwell's second-generation Transformer Engine, and with advanced quantization techniques that are part of TensorRT Model Optimizer, the Blackwell submission met the strict accuracy targets of the MLPerf benchmark.
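To illustrate why narrower formats help, here is a minimal block-scaled 4-bit quantization sketch in NumPy. It is a simplification, not TensorRT Model Optimizer code: it uses signed integer levels with a shared per-block scale rather than the actual FP4 (E2M1) encoding, but it shows the same trade-off of a 4x smaller footprint versus FP16 in exchange for bounded rounding error.

```python
import numpy as np

def quantize_4bit(x, block_size=16):
    """Blockwise 4-bit quantization sketch: each block of values shares
    one floating-point scale; values map to 16 integer levels (-8..7)."""
    x = x.reshape(-1, block_size)
    scales = np.abs(x).max(axis=1, keepdims=True) / 7.0  # per-block scale
    scales[scales == 0] = 1.0                            # avoid divide-by-zero
    q = np.clip(np.round(x / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    """Reconstruct approximate FP32 values from 4-bit codes and scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(1024).astype(np.float32)  # stand-in for a weight tensor
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)
# Max reconstruction error is bounded by half a quantization step per block.
err = np.abs(w - w_hat).max()
```

Storage drops to 4 bits per value plus one scale per block, which is why memory footprint and memory traffic shrink; the per-block scale keeps the rounding error proportional to each block's magnitude, which is what makes meeting accuracy targets feasible.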
Blackwell B200 delivers up to 4x more performance versus the previous generation on MLPerf Inference v4.1's Llama 2 70B workload.

Improvements in Blackwell haven't stopped the continued acceleration of Hopper. In the last year, H100 performance in MLPerf Inference has increased 3.4x thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper.
These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year.

Our ongoing work is incorporated into TensorRT-LLM, a purpose-built library that accelerates LLMs and contains state-of-the-art optimizations for efficient inference on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT deep learning inference library and leverages much of TensorRT's deep learning optimizations, with additional LLM-specific improvements.
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2, in both the 70B size and the largest model, 405B. These optimizations include custom quantization recipes, as well as efficient use of parallelization techniques to split the model across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
Many LLM deployments use parallelism rather than keeping the workload on a single GPU, which can become a compute bottleneck. LLM serving seeks to balance low latency and high throughput, with the optimal parallelization technique depending on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
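The difference between the two techniques can be sketched with a toy NumPy example. This is an illustration, not TensorRT-LLM code: the "GPUs" are simulated by array shards, and the NVLink all-gather is simulated by a concatenation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 64)).astype(np.float32)   # activations for one batch
w = rng.standard_normal((64, 128)).astype(np.float32) # one layer's weight matrix

# Tensor parallelism: each "GPU" holds a column shard of W and computes its
# slice of the output; on real hardware the shards run concurrently and the
# partial results are combined with an all-gather over NVLink.
n_gpus = 4
shards = np.split(w, n_gpus, axis=1)
y_tp = np.concatenate([x @ s for s in shards], axis=1)

# Pipeline parallelism: each "GPU" instead holds entire layers, and
# micro-batches stream through the stages so all GPUs stay busy.
layers = [rng.standard_normal((64, 64)).astype(np.float32) for _ in range(n_gpus)]
h = rng.standard_normal((4, 64)).astype(np.float32)
for layer in layers:                 # stage i would run on GPU i
    h = np.maximum(h @ layer, 0.0)   # layer matmul + ReLU

# The column-sharded result matches the unsharded matmul.
assert np.allclose(y_tp, x @ w, atol=1e-4)
```

Tensor parallelism throws all GPUs at every token, which is why it minimizes latency; pipeline parallelism keeps per-GPU communication light and overlaps micro-batches, which is why it maximizes aggregate throughput.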
The table below shows that tensor parallelism can deliver over 5x more throughput in minimum latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum throughput use cases.
For production deployments that seek to maximize throughput within a given latency budget, a platform needs to be able to combine both techniques effectively, as TensorRT-LLM does.
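Combining the two techniques can also be sketched in the same toy style: tensor parallelism within each pipeline stage, pipeline parallelism across stages. Again this is an illustration under simplified assumptions (two stages, tensor-parallel degree two, simulated GPUs), not TensorRT-LLM's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 32)).astype(np.float32)
# Two pipeline stages, each holding one layer's weights.
w1 = rng.standard_normal((32, 32)).astype(np.float32)
w2 = rng.standard_normal((32, 32)).astype(np.float32)

def stage(h, w, tp_degree=2):
    """One pipeline stage whose weight is column-sharded across tp_degree
    tensor-parallel ranks; partial outputs are gathered (concatenated)."""
    shards = np.split(w, tp_degree, axis=1)
    return np.concatenate([h @ s for s in shards], axis=1)

# Micro-batches flow through the stages; with real GPUs, stage 1 starts on
# micro-batch 2 while stage 2 is still finishing micro-batch 1.
micro_batches = np.split(x, 2, axis=0)
outputs = [stage(stage(mb, w1), w2) for mb in micro_batches]
y = np.concatenate(outputs, axis=0)

# 2 stages x 2 tensor-parallel ranks = 4 simulated GPUs, same result
# as the unpartitioned computation.
assert np.allclose(y, (x @ w1) @ w2, atol=1e-4)
```

The design point is that the two degrees multiply: the tensor-parallel degree sets per-token latency inside a stage, while the number of stages and micro-batches sets aggregate throughput, letting a deployment tune both within one latency budget.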
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and parallelism techniques bring optimal performance for each of them.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.