
Modern AI applications increasingly rely on models that combine huge parameter counts with multi-million-token context windows. Whether it is AI agents following months of conversation, legal assistants reasoning through gigabytes of case law as big as an entire encyclopedia set, or coding copilots navigating sprawling repositories, preserving long-range context is essential for relevance and coherence. On top of that, users expect fast, interactive responses.
The growing demand to decode such massive amounts of data, and to let multiple GPUs quickly scale and communicate with each other, underscores the importance of FP4 compute and the large, high-bandwidth NVLink domain provided by NVIDIA Blackwell systems. Helix Parallelism, introduced in this blog, is co-designed with Blackwell. It enables up to a 32x increase in the number of concurrent users at a given latency, compared to the best-known prior parallelism methods for real-time decoding with ultra-long context.
In other words, it lets AI agents and virtual assistants serve more people, faster than ever before.
(Note: Context in this blog refers to the sequence of previously generated tokens, whose intermediate key and value representations are stored as KV cache and accessed at every decoding step.)
Decoding bottlenecks: KV cache and FFN weight reads
To support real-time decoding at scale, a system must overcome two major bottlenecks during the decoding (aka generation) phase:
Key-Value (KV) cache streaming: When handling multi-million-token contexts, each GPU must read a massive history of past tokens (KV cache) from DRAM per sample. This constant streaming can, in turn, saturate DRAM bandwidth, increase token-to-token latency (TTL), and quickly become a major bottleneck as context length grows.
Feed-Forward Network (FFN) weight loading: During autoregressive decoding, generating every new token requires loading large Feed-Forward Network (FFN) weights from DRAM. In low latency scenarios with small batch sizes, this memory access cost is not well amortized, making FFN weight reads a dominant source of latency.
These two bottlenecks, KV cache streaming and FFN weight loading, are difficult to optimize simultaneously using traditional parallelism strategies.
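To make the KV cache streaming cost concrete, here is a back-of-envelope sketch in Python. The model shape used (80 layers, 8 KV heads, head dimension 128, FP16 cache) is an illustrative assumption, not a figure from this blog.

```python
# Back-of-envelope estimate of the KV cache that must be read from DRAM
# on every decode step. All model-shape values are illustrative assumptions.

def kv_cache_bytes(context_len, num_layers, num_kv_heads, head_dim,
                   bytes_per_elem=2):
    """Total KV cache size for one sample (factor of 2 covers keys + values)."""
    return 2 * context_len * num_layers * num_kv_heads * head_dim * bytes_per_elem

# Example: a GQA model with 80 layers, 8 KV heads, head_dim 128, FP16 cache,
# holding a 1M-token context.
size = kv_cache_bytes(context_len=1_000_000, num_layers=80,
                      num_kv_heads=8, head_dim=128)
print(f"KV cache per sample: {size / 1e9:.1f} GB")  # ~327.7 GB
```

Since this entire cache is streamed per sample on every decoding step, even multi-TB/s HBM bandwidth quickly becomes the limiting factor as context length grows.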
Let's take Tensor Parallelism (TP) as an example: Increasing TP can help reduce FFN stalls by distributing weight loading across multiple GPUs and improving TTL, but only up to a point. In attention schemes like Grouped Query Attention (GQA), used in Llama models, or Multi-Latent Attention (MLA), found in DeepSeek models, multiple query heads share a limited number of KV heads. As illustrated in Figure 2(c), when TP exceeds the number of KV heads, the system ends up duplicating the multi-million-token KV cache per sample across GPUs for self-attention. As a result, KV read volume stays high even with increased TP, once again saturating DRAM bandwidth and limiting scalability. In the case of MLA, the upper limit for TP is just one to avoid duplication of KV cache.
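The duplication effect described above can be sketched as follows; the 8-KV-head GQA configuration is an assumed example, and the helper name is hypothetical.

```python
# Illustrative sketch of why TP stops helping once it exceeds the number of
# KV heads: beyond that point, GPUs must hold duplicate copies of the cache.

def kv_copies_per_gpu_group(tp, num_kv_heads):
    """Number of full KV cache copies held across a TP group.

    With tp <= num_kv_heads, the KV heads split cleanly (one copy in total).
    With tp > num_kv_heads, every extra factor of TP duplicates the cache.
    """
    return max(1, tp // num_kv_heads)

# GQA example with 8 KV heads (an assumed value):
for tp in (4, 8, 16, 32):
    print(f"TP={tp}: {kv_copies_per_gpu_group(tp, 8)} KV cache copies")

# MLA has a single shared latent KV head, so any TP > 1 duplicates the cache:
print(kv_copies_per_gpu_group(8, 1))  # 8 copies
```

The total KV bytes read per decode step scales with the number of copies, which is why KV read volume stays flat (or worse) once TP passes the KV head count.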
So how can developers scale both model size and context length without sacrificing real-time interactivity? Helix Parallelism offers a path forward.
Helix execution flow
Helix is a hybrid sharding strategy that disaggregates the parallelism strategies of attention and FFNs in a temporal pipeline, effectively addressing both KV cache and FFN weight-read bottlenecks during multi-million-token decoding.
Figure 1 (below) shows how Helix orchestrates the execution of attention and FFN within a single transformer layer. Inspired by the structure of a DNA helix, Helix interweaves multiple dimensions of parallelism (KV, tensor, and expert) into a unified execution loop. By decoupling the parallelism strategies used for attention and FFN, Helix allows each stage to operate in a configuration tuned to its own bottleneck, all while reusing the same pool of GPUs. This reuse keeps GPUs efficiently utilized across stages, eliminating idle time as computation flows through the model.
Figure 1. Execution flow of Helix Parallelism. Helix reuses the same pool of N GPUs per layer by switching between N = KVP x TPA during attention and N = TPF x EP during FFN.
Attention phase
Helix applies KV Parallelism (KVP) by sharding the multi-million-token KV cache along the sequence dimension across KVP GPUs, while applying Tensor Parallelism across attention heads (TPA), where TPA is kept at or below the number of KV heads to avoid duplicating the KV cache.
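A minimal sketch of this attention-phase layout, under assumed shard counts; the function and its layout convention are hypothetical illustrations, not the Helix implementation.

```python
# Sketch of Helix's attention-phase sharding: the KV cache is split along the
# sequence dimension across KVP GPUs, while KV heads are split across TPA
# GPUs. Names, shapes, and the layout convention are illustrative assumptions.

def shard_kv_cache(context_len, kvp, tpa, num_kv_heads):
    """Map each of the N = kvp * tpa GPUs to the (token range, KV head range)
    of the cache it owns during attention."""
    assert num_kv_heads % tpa == 0, "TPA must divide the KV heads to avoid duplication"
    tokens_per_rank = (context_len + kvp - 1) // kvp   # ceil split along sequence
    heads_per_rank = num_kv_heads // tpa
    layout = {}
    for k in range(kvp):
        t0, t1 = k * tokens_per_rank, min((k + 1) * tokens_per_rank, context_len)
        for a in range(tpa):
            h0, h1 = a * heads_per_rank, (a + 1) * heads_per_rank
            layout[(k, a)] = ((t0, t1), (h0, h1))
    return layout

# 1M tokens over KVP=4 and TPA=2 with 8 KV heads: 8 GPUs, each streaming only
# a quarter of the sequence for half of the KV heads, with no duplicate slices.
layout = shard_kv_cache(1_000_000, kvp=4, tpa=2, num_kv_heads=8)
print(layout[(0, 0)])  # ((0, 250000), (0, 4))
```

Because each KVP rank streams only `context_len / KVP` tokens per step, the per-GPU DRAM read volume shrinks with KVP instead of staying flat as it does under plain TP.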