
Inference has emerged as the new frontier of complexity in AI. Modern models are evolving into agentic systems capable of multi-step reasoning, persistent memory, and long-horizon context, enabling them to tackle complex tasks across domains such as software development, video generation, and deep research. These workloads place unprecedented demands on infrastructure, introducing new challenges in compute, memory, and networking that require a fundamental rethinking of how inference is scaled and optimized.
Among these challenges, processing massive context for a specific class of workloads has become increasingly critical. In software development, for example, AI systems must reason over entire codebases, maintain cross-file dependencies, and understand repository-level structure, transforming coding assistants from autocomplete tools into intelligent collaborators. Similarly, long-form video and research applications demand sustained coherence and memory across millions of tokens. These requirements are pushing the boundaries of what current infrastructure can support.
To address this shift, the NVIDIA SMART framework provides a path forward by optimizing inference across scale, multidimensional performance, architecture, ROI, and the broader technology ecosystem. It emphasizes a full-stack disaggregated infrastructure that enables efficient allocation of compute and memory resources. Platforms like NVIDIA Blackwell and NVIDIA GB200 NVL72, combined with NVFP4 for low-precision inference and open-source software such as NVIDIA TensorRT-LLM and NVIDIA Dynamo, are redefining inference performance across the AI landscape.
This blog explores the next evolution in disaggregated inference infrastructure and introduces NVIDIA Rubin CPX, a purpose-built GPU designed to meet the demands of long-context AI workloads with greater efficiency and ROI.
Disaggregated inference: a scalable approach to AI complexity

Inference consists of two distinct phases: the context phase and the generation phase, each placing fundamentally different demands on infrastructure. The context phase is compute-bound, requiring high-throughput processing to ingest and analyze large volumes of input data and produce the first output token. In contrast, the generation phase is memory bandwidth-bound, relying on fast memory transfers and high-speed interconnects, such as NVLink, to sustain token-by-token output performance.
Disaggregated inference processes these phases independently, enabling targeted optimization of compute and memory resources. This architectural shift improves throughput, reduces latency, and enhances overall resource utilization (Figure 1).
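The two-phase split can be illustrated with a toy sketch. This is a hypothetical simulation, not any real serving stack: a compute-bound context (prefill) worker builds a KV cache from the whole prompt at once, then a bandwidth-bound generation (decode) worker consumes and extends that cache one token per step. All names and data structures here are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class KVCache:
    # One entry per processed token; a real cache holds per-layer
    # key/value tensors, modeled here as plain strings.
    entries: list = field(default_factory=list)

def context_phase(prompt_tokens):
    """Compute-bound prefill: process the entire prompt in one pass."""
    cache = KVCache()
    for tok in prompt_tokens:
        cache.entries.append(f"kv({tok})")   # stands in for attention K/V
    first_token = f"out0<{len(cache.entries)}>"
    return cache, first_token

def generation_phase(cache, first_token, max_new_tokens=3):
    """Bandwidth-bound decode: one token per step, re-reading the cache."""
    output = [first_token]
    for step in range(1, max_new_tokens):
        _ = len(cache.entries)               # models the full cache read
        tok = f"out{step}"
        cache.entries.append(f"kv({tok})")   # generated tokens extend the cache
        output.append(tok)
    return output

cache, first = context_phase(["The", "quick", "brown", "fox"])
out = generation_phase(cache, first)
print(out)
```

Disaggregation amounts to running `context_phase` and `generation_phase` on different hardware pools and shipping the KV cache between them, which is why low-latency cache transfer becomes the central coordination problem.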
Figure 1. Optimizing inference by aligning GPU capabilities with context and generation workloads
However, disaggregation introduces new layers of complexity, requiring precise coordination across low-latency KV cache transfers, LLM-aware routing, and efficient memory management. NVIDIA Dynamo serves as the orchestration layer for these components, and its capabilities played a pivotal role in the latest MLPerf Inference results. Learn how disaggregation with Dynamo on GB200 NVL72 set new performance records.
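One piece of that coordination, LLM-aware routing, can be sketched in a few lines. This is a hypothetical illustration (not the Dynamo API): requests whose prompt length crosses a threshold are sent to a context-optimized pool for prefill, while short prompts go straight to the generation pool. The pool names and the threshold are assumptions for the example.

```python
CONTEXT_POOL = "context-optimized"    # e.g. long-context accelerators
GENERATION_POOL = "generation"        # e.g. bandwidth-optimized GPUs
LONG_CONTEXT_THRESHOLD = 32_000       # illustrative token count

def route(prompt_len: int) -> str:
    """Pick a worker pool for the prefill of one request."""
    if prompt_len >= LONG_CONTEXT_THRESHOLD:
        return CONTEXT_POOL
    return GENERATION_POOL

print(route(1_000_000))  # million-token prompt -> context-optimized pool
print(route(2_048))      # short prompt -> generation pool
```

A production router would also weigh cache locality and current load, but even this sketch shows why the router must be LLM-aware: the right placement depends on properties of the request, not just cluster utilization.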
To capitalize on the benefits of disaggregated inference, particularly in the compute-intensive context phase, specialized acceleration is essential. To address this need, NVIDIA is introducing the Rubin CPX GPU, a purpose-built solution designed to deliver high-throughput performance for high-value, long-context inference workloads while integrating seamlessly into disaggregated infrastructure.
Rubin CPX: built to accelerate long-context processing

The Rubin CPX GPU is designed to enhance long-context performance, complementing existing infrastructure while delivering scalable efficiency and maximizing ROI in context-aware inference deployments. Built on the Rubin architecture, Rubin CPX delivers breakthrough performance for the compute-intensive context phase of inference. It features 30 petaFLOPs of NVFP4 compute, 128 GB of GDDR7 memory, hardware support for video decoding and encoding, and 3x attention acceleration compared to NVIDIA GB300 NVL72.
Optimized for efficiently processing long sequences, Rubin CPX is critical for high-value inference use cases like software application development and HD video generation. Designed to complement existing disaggregated inference architectures, it enhances throughput and responsiveness while maximizing ROI for large-scale generative AI workloads.
Rubin CPX works in tandem with NVIDIA Vera CPUs and Rubin GPUs for generation-phase processing, forming a complete, high-performance disaggregated serving solution for long-context use cases. The NVIDIA Vera Rubin NVL144 CPX rack integrates 144 Rubin CPX GPUs, 144 Rubin GPUs, and 36 Vera CPUs to deliver 8 exaFLOPs of NVFP4 compute, 7.5x more than the GB300 NVL72, alongside 100 TB of high-speed memory and 1.7 PB/s of memory bandwidth, all within a single rack.
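The rack-level figure can be sanity-checked with back-of-envelope arithmetic. The NVFP4 throughput of a generation-side Rubin GPU is not stated in this post, so the sketch below solves for the implied generation-side share from the quoted 8-exaFLOP total rather than assuming it.

```python
CPX_GPUS = 144
CPX_PFLOPS = 30        # NVFP4 per Rubin CPX GPU, from the text
RACK_EFLOPS = 8        # NVFP4 total for the rack, from the text

cpx_ef = CPX_GPUS * CPX_PFLOPS / 1000   # CPX contribution in exaFLOPs
rubin_ef = RACK_EFLOPS - cpx_ef         # implied generation-side share
print(f"CPX: {cpx_ef:.2f} EF, Rubin GPUs: {rubin_ef:.2f} EF")
```

In other words, the 144 CPX GPUs account for roughly 4.3 of the 8 exaFLOPs, with the balance coming from the generation-side Rubin GPUs.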
Using NVIDIA Quantum-X800 InfiniBand or Spectrum-X Ethernet, paired with NVIDIA ConnectX-9 SuperNICs and orchestrated by the Dynamo platform, the Vera Rubin NVL144 CPX is built to power the next wave of million-token context AI inference workloads, cutting inference costs and unlocking advanced capabilities for developers and creators worldwide.
At scale, the platform can deliver 30x to 50x return on investment.