
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications.
But with opportunities often come challenges.
Both on premises and in the cloud, applications expected to run in real time place significant demands on data center infrastructure, which must deliver high throughput and low latency simultaneously with a single platform investment.
To drive continuous performance gains and improve the return on infrastructure investments, NVIDIA regularly optimizes state-of-the-art community models, including Meta's Llama, Google's Gemma, Microsoft's Phi and our own NVLM-D-72B, released just a few weeks ago.
Relentless Improvements

Performance improvements let our customers and partners serve more complex models and reduce the infrastructure needed to host them. NVIDIA optimizes performance at every layer of the technology stack, including TensorRT-LLM, a purpose-built library that delivers state-of-the-art performance on the latest LLMs. With improvements to the open-source Llama 70B model, which delivers very high accuracy, we've already improved minimum-latency performance by 3.5x in less than a year.

We're constantly improving our platform performance and regularly publish performance updates. Improvements to NVIDIA software libraries ship every week, allowing customers to get more from the very same GPUs.

In the most recent round of MLPerf Inference v4.1, we made our first-ever submission with the Blackwell platform. It delivered 4x more performance than the previous generation.
This submission was also the first-ever MLPerf submission to use FP4 precision. Narrower precision formats, like FP4, reduce memory footprint and memory traffic and also boost computational throughput. The process takes advantage of Blackwell's second-generation Transformer Engine, and, with advanced quantization techniques that are part of TensorRT Model Optimizer, the Blackwell submission met the strict accuracy targets of the MLPerf benchmark.
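To make the role of narrower precision concrete, here's a minimal numerical sketch of block-scaled FP4 (E2M1) fake quantization written in plain PyTorch. It only illustrates the arithmetic of mapping values onto the 4-bit grid; it is not the TensorRT Model Optimizer recipe or Blackwell's Transformer Engine path, and the block size of 16 and max-based scaling are assumptions chosen for the example.

```python
# A minimal sketch of block-scaled FP4 (E2M1) fake quantization, for illustration
# only. Block size and scaling strategy are assumptions, not NVIDIA's recipe.
import torch

# The non-negative magnitudes representable by the E2M1 (FP4) format.
FP4_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def fake_quant_fp4(x: torch.Tensor, block_size: int = 16) -> torch.Tensor:
    """Quantize-dequantize a tensor to FP4 with one scale per block of values."""
    flat = x.flatten()
    pad = (-flat.numel()) % block_size
    blocks = torch.nn.functional.pad(flat, (0, pad)).view(-1, block_size)

    # One scale per block maps the block's max magnitude onto the FP4 maximum (6.0).
    scales = blocks.abs().amax(dim=1, keepdim=True).clamp(min=1e-12) / FP4_GRID[-1]

    # Snap each scaled element to the nearest representable FP4 magnitude, keep the sign.
    scaled = blocks / scales
    idx = (scaled.abs().unsqueeze(-1) - FP4_GRID).abs().argmin(dim=-1)
    dequant = FP4_GRID[idx] * scaled.sign() * scales

    return dequant.flatten()[: flat.numel()].view_as(x)

w = torch.randn(4096)
w_q = fake_quant_fp4(w)
print("mean abs quantization error:", (w - w_q).abs().mean().item())
# 4-bit values plus one scale per 16 elements need roughly a quarter of FP16 storage,
# which is where the memory-footprint and memory-traffic savings come from.
```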
Blackwell B200 delivers up to 4x more performance versus the previous generation on the MLPerf Inference v4.1 Llama 2 70B workload.

Work on Blackwell hasn't stopped the continued acceleration of Hopper. In the last year, Hopper performance has increased 3.4x in MLPerf on H100 thanks to regular software advancements. This means that NVIDIA's peak performance today, on Blackwell, is 10x faster than it was just one year ago on Hopper. These results track progress on the MLPerf Inference Llama 2 70B Offline scenario over the past year.

Our ongoing work is incorporated into TensorRT-LLM, a purpose-built library to accelerate LLMs that contains state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM is built on top of the TensorRT deep learning inference library and leverages many of TensorRT's deep learning optimizations while adding LLM-specific improvements.
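As a concrete starting point, here is a minimal sketch of TensorRT-LLM's high-level Python LLM API. It assumes a recent tensorrt_llm release and a supported Hugging Face checkpoint; the model name and sampling settings are illustrative, not a recommendation.

```python
# Minimal TensorRT-LLM LLM API sketch (assumes a recent tensorrt_llm release);
# the checkpoint name and sampling values are illustrative.
from tensorrt_llm import LLM, SamplingParams

# Constructing the LLM object builds or loads an optimized engine for the target GPU.
llm = LLM(model="meta-llama/Llama-3.1-70B-Instruct")

prompts = ["Explain why low latency matters for interactive LLM applications."]
sampling = SamplingParams(max_tokens=128, temperature=0.2)

for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```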
Improving Llama in Leaps and Bounds

More recently, we've continued optimizing variants of Meta's Llama models, including versions 3.1 and 3.2 as well as model sizes 70B and the biggest model, 405B. These optimizations include custom quantization recipes, as well as efficient use of parallelization techniques to split the model across multiple GPUs, leveraging NVIDIA NVLink and NVSwitch interconnect technologies. Cutting-edge LLMs like Llama 3.1 405B are very demanding and require the combined performance of multiple state-of-the-art GPUs for fast responses.
Parallelism techniques require a hardware platform with a robust GPU-to-GPU interconnect fabric to get maximum performance and avoid communication bottlenecks. Each NVIDIA H200 Tensor Core GPU features fourth-generation NVLink, which provides a whopping 900GB/s of GPU-to-GPU bandwidth. Every eight-GPU HGX H200 platform also ships with four NVLink Switches, enabling every H200 GPU to communicate with any other H200 GPU at 900GB/s, simultaneously.
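To make the bandwidth requirement concrete, the back-of-envelope sketch below estimates per-token all-reduce traffic for a Llama-70B-class model under 8-way tensor parallelism. The layer count, hidden size, two-all-reduces-per-layer pattern, FP16 activations, and ring all-reduce cost model are simplifying assumptions for illustration, not measured figures.

```python
# Back-of-envelope estimate (assumed, simplified numbers; not a measured benchmark)
# of per-decode-step all-reduce traffic under tensor parallelism, illustrating why
# a fast GPU-to-GPU fabric such as NVLink matters.

HIDDEN = 8192             # Llama 70B hidden size
LAYERS = 80               # Llama 70B transformer layers
BYTES = 2                 # FP16 activations
TP = 8                    # tensor-parallel GPUs
ALLREDUCES_PER_LAYER = 2  # typically one after attention, one after the MLP block

def per_step_traffic(batch_size: int) -> float:
    """Approximate bytes each GPU sends per decode step with a ring all-reduce."""
    message = batch_size * HIDDEN * BYTES   # activation tensor reduced each time
    ring_factor = 2 * (TP - 1) / TP         # ring all-reduce send volume per GPU
    return LAYERS * ALLREDUCES_PER_LAYER * message * ring_factor

for bs in (1, 64, 256):
    gb = per_step_traffic(bs) / 1e9
    print(f"batch {bs:4d}: ~{gb:.3f} GB sent per GPU per decode step")
# At larger batch sizes this reaches hundreds of megabytes to over a gigabyte per
# step, so interconnect bandwidth directly bounds how quickly tokens are produced.
```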
Many LLM deployments use parallelism rather than keeping the workload on a single GPU, which can become a compute bottleneck. Deployments seek to balance low latency and high throughput, and the optimal parallelization technique depends on application requirements.
For instance, if lowest latency is the priority, tensor parallelism is critical, as the combined compute performance of multiple GPUs can be used to serve tokens to users more quickly. However, for use cases where peak throughput across all users is prioritized, pipeline parallelism can efficiently boost overall server throughput.
Our measurements show that tensor parallelism can deliver over 5x more throughput in minimum-latency scenarios, whereas pipeline parallelism brings 50% more performance for maximum-throughput use cases.
For production deployments that seek to maximize throughput within a given latency budget, a platform needs to be able to combine both techniques effectively, as TensorRT-LLM does.
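As a sketch of how these choices surface in practice, the snippet below selects between a latency-focused and a throughput-focused configuration through TensorRT-LLM's LLM API on an assumed 8-GPU HGX node; the checkpoint name and exact degrees of parallelism are illustrative.

```python
# Illustrative parallelism configuration via TensorRT-LLM's LLM API on an assumed
# 8-GPU node; checkpoint and parallelism degrees are examples, not recommendations.
from tensorrt_llm import LLM

LATENCY_FOCUSED = True  # flip to False to favor aggregate server throughput

if LATENCY_FOCUSED:
    # Tensor parallelism across all 8 GPUs: every GPU contributes to every token,
    # minimizing time per output token.
    llm = LLM(model="meta-llama/Llama-3.1-405B-Instruct", tensor_parallel_size=8)
else:
    # Combine tensor and pipeline parallelism (2 stages of 4-way tensor parallel)
    # to trade some per-request latency for higher overall server throughput.
    llm = LLM(
        model="meta-llama/Llama-3.1-405B-Instruct",
        tensor_parallel_size=4,
        pipeline_parallel_size=2,
    )
```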
Read the technical blog on boosting Llama 3.1 405B throughput to learn more about these techniques.
Different scenarios have different requirements, and parallelism techniques bring optimal performance for each of them.

The Virtuous Cycle

Over the lifecycle of our architectures, we deliver significant performance gains from ongoing software tuning and optimization. These improvements translate into additional value for customers who train and deploy on our platforms. They're able to create more capable models and applications and deploy their existing models using less infrastructure, enhancing their return on investment.