
Editor's note: This post is part of Think SMART, a series focused on how leading AI service providers, developers and enterprises can boost their inference performance and return on investment with the latest advancements from NVIDIA's full-stack inference platform.
AI models are becoming increasingly complex and collaborative through multi-agent workflows. To keep up, AI inference must now scale across entire clusters to serve millions of concurrent users and deliver faster responses.
Much like it did for large-scale AI training, Kubernetes - the industry standard for containerized application management - is well-positioned to manage the multi-node inference needed to support advanced models.
The NVIDIA Dynamo platform works together with Kubernetes to streamline the management of both single- and multi-node AI inference. Read on to learn how the shift to multi-node inference is driving performance, as well as how cloud platforms are putting these technologies to work.
Tapping Disaggregated Inference for Optimized Performance

For AI models that fit on a single GPU or server, developers often run many identical replicas of the model in parallel across multiple nodes to deliver high throughput. In a recent paper, Russ Fellows, principal analyst at Signal65, showed that this approach achieved an industry-first record aggregate throughput of 1.1 million tokens per second with 72 NVIDIA Blackwell Ultra GPUs.
When scaling AI models to serve many concurrent users in real time, or when managing demanding workloads with long input sequences, using a technique called disaggregated serving unlocks further performance and efficiency gains.
Serving AI models involves two phases: processing the input prompt (prefill) and generating the output (decode). Traditionally, both phases run on the same GPUs, which can create inefficiencies and resource bottlenecks.
Disaggregated serving solves this by intelligently assigning these tasks to independently optimized GPUs. This approach ensures that each part of the workload runs with the optimization techniques best suited for it, maximizing overall performance. For today's large AI reasoning models, such as DeepSeek-R1, disaggregated serving is essential.
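The split described above can be illustrated with a minimal sketch. This is not the Dynamo API: the classes, method names, and round-robin routing below are all hypothetical stand-ins showing how prefill (compute-bound prompt processing) and decode (memory-bandwidth-bound token generation) can run on separate, independently scaled worker pools.

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str

class PrefillWorker:
    """Phase 1 (compute-bound): process the full input prompt once,
    producing a KV cache that the decode phase reuses."""
    def prefill(self, req: Request) -> dict:
        tokens = req.prompt.split()  # stand-in for tokenization + attention
        return {"kv_cache": tokens, "next_pos": len(tokens)}

class DecodeWorker:
    """Phase 2 (memory-bandwidth-bound): generate output tokens one at a
    time from the KV cache produced by prefill."""
    def decode(self, state: dict, max_tokens: int = 3) -> list[str]:
        # Placeholder generation: real decode would run the model per token.
        return [f"tok{state['next_pos'] + i}" for i in range(max_tokens)]

class Router:
    """Assigns each phase to its own pool so each pool can be sized and
    optimized independently (the core idea of disaggregated serving)."""
    def __init__(self, prefill_pool: list, decode_pool: list):
        self.prefill_pool, self.decode_pool = prefill_pool, decode_pool
        self._i = 0

    def serve(self, req: Request) -> list[str]:
        pw = self.prefill_pool[self._i % len(self.prefill_pool)]
        dw = self.decode_pool[self._i % len(self.decode_pool)]
        self._i += 1
        state = pw.prefill(req)   # runs on prefill-optimized GPUs
        return dw.decode(state)   # runs on decode-optimized GPUs

router = Router([PrefillWorker()], [DecodeWorker(), DecodeWorker()])
print(router.serve(Request("explain disaggregated serving")))
```

Because the two pools are independent, an operator can, for instance, add decode workers for long-output workloads without touching prefill capacity.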
NVIDIA Dynamo seamlessly brings multi-node inference optimization features such as disaggregated serving to production scale across GPU clusters.
It's already delivering value.
Baseten, for example, used NVIDIA Dynamo to speed up inference serving for long-context code generation by 2x and increase throughput by 1.6x, all without incremental hardware costs. Such software-driven performance boosts enable AI providers to significantly reduce the costs to manufacture intelligence.
In addition, recent SemiAnalysis InferenceMAX benchmarks demonstrated that disaggregated serving with Dynamo on NVIDIA GB200 NVL72 systems delivers the lowest cost per million tokens for mixture-of-experts reasoning models like DeepSeek-R1, among platforms tested.
Scaling Disaggregated Inference in the Cloud

As disaggregated serving scales across dozens or even hundreds of nodes for enterprise-scale AI deployments, Kubernetes provides the critical orchestration layer. With NVIDIA Dynamo now integrated into managed Kubernetes services from all major cloud providers, customers can scale multi-node inference across NVIDIA Blackwell systems, including GB200 and GB300 NVL72, with the performance, flexibility and reliability that enterprise AI deployments demand.
Amazon Web Services is accelerating generative AI inference for its customers with NVIDIA Dynamo integrated with Amazon EKS.
Google Cloud is providing an NVIDIA Dynamo recipe to optimize large language model (LLM) inference at enterprise scale on its AI Hypercomputer.
OCI is enabling multi-node LLM inferencing with OCI Superclusters and NVIDIA Dynamo.
The push towards enabling large-scale, multi-node inference extends beyond hyperscalers.
Nebius, for example, is designing its cloud to serve inference workloads at scale, built on NVIDIA accelerated computing infrastructure and working with NVIDIA Dynamo as an ecosystem partner.
Simplifying Inference on Kubernetes With NVIDIA Grove in NVIDIA Dynamo

Disaggregated AI inference requires coordinating a team of specialized components - prefill, decode, routing and more - each with different needs. The challenge for Kubernetes is no longer about running more parallel copies of a model, but rather about conducting these distinct components as one cohesive, high-performance system.
NVIDIA Grove, an application programming interface now available within NVIDIA Dynamo, allows users to provide a single, high-level specification that describes their entire inference system.
For example, in that single specification, a user could simply declare their requirements: I need three GPU nodes for prefill and six GPU nodes for decode, and I require all nodes for a single model replica to be placed on the same high-speed interconnect for the quickest possible response.
From that specification, Grove automatically handles all the intricate coordination: scaling related components together while maintaining correct ratios and dependencies, starting them in the right order and placing them strategically across the cluster for fast, efficient communication. Learn more about how to get started with NVIDIA Grove in this technical deep dive.
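The example requirements above can be sketched as a single declarative specification. Note this is an illustrative sketch only: the field names below do not reflect Grove's actual API schema (real Kubernetes specs are typically YAML; a Python dict is used here for a runnable example).

```python
# Hypothetical single high-level spec, in the spirit of the example above:
# 3 prefill nodes + 6 decode nodes per replica, co-located on one
# high-speed interconnect. Field names are illustrative, not Grove's API.
spec = {
    "replicas": 1,
    "components": [
        {"name": "prefill", "gpuNodes": 3},
        {"name": "decode", "gpuNodes": 6},
    ],
    # Keep all nodes of one model replica on the same high-speed
    # interconnect for the lowest-latency communication.
    "placement": {"colocate": "same-interconnect"},
}

def nodes_per_replica(spec: dict) -> int:
    """Total GPU nodes an orchestrator must co-schedule (gang-schedule)
    for each replica, preserving the prefill:decode ratio when scaling."""
    return sum(c["gpuNodes"] for c in spec["components"])

print(nodes_per_replica(spec))  # 9 nodes must land together per replica
```

Scaling to two replicas under such a spec would mean gang-scheduling two independent nine-node groups, each keeping its 3:6 prefill-to-decode ratio intact.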
As AI inference becomes increasingly distributed, the combination of Kubernetes and NVIDIA Dynamo with NVIDIA Grove simplifies how developers build and scale intelligent applications.
Explore how these technologies come together to make cluster-scale AI easy and production-ready by joining NVIDIA at KubeCon, running through Thursday, Nov. 13, in Atlanta.