NVIDIA DGX SuperPOD is paving the way for large-scale system deployments built on the NVIDIA Rubin platform - the next leap forward in AI computing. At the CES trade show in Las Vegas, NVIDIA today introduced the Rubin platform, comprising six new chips designed to deliver one incredible AI supercomputer and engineered to accelerate agentic AI, mixture-of-experts (MoE) models and long-context reasoning.
The Rubin platform unites six chips - the NVIDIA Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU and Spectrum-6 Ethernet Switch - through an advanced codesign approach that accelerates training and reduces the cost of inference token generation.
DGX SuperPOD remains the foundational design for deploying Rubin-based systems across enterprise and research environments.
The NVIDIA DGX platform addresses the entire technology stack - from NVIDIA computing to networking to software - as a single, cohesive system, removing the burden of infrastructure integration and allowing teams to focus on AI innovation and business results.
"Rubin arrives at exactly the right moment, as AI computing demand for both training and inference is going through the roof," said Jensen Huang, founder and CEO of NVIDIA.
New Platform for the AI Industrial Revolution

The Rubin platform used in the new DGX systems introduces five major technology advancements designed to drive a step-function increase in intelligence and efficiency:
Sixth-Generation NVIDIA NVLink - 3.6TB/s per GPU and 260TB/s per Vera Rubin NVL72 rack for massive MoE and long-context workloads.
NVIDIA Vera CPU - 88 NVIDIA custom Olympus cores, full Armv9.2 compatibility and ultrafast NVLink-C2C connectivity for industry-leading efficient AI factory compute.
NVIDIA Rubin GPU - 50 petaflops of NVFP4 compute for AI inference, featuring a third-generation Transformer Engine with hardware-accelerated compression.
Third-Generation NVIDIA Confidential Computing - Vera Rubin NVL72 is the first rack-scale platform delivering NVIDIA Confidential Computing, which maintains data security across CPU, GPU and NVLink domains.
Second-Generation RAS Engine - Spanning GPU, CPU and NVLink, the NVIDIA Rubin platform delivers real-time health monitoring, fault tolerance and proactive maintenance, with modular cable-free trays enabling 3x faster servicing.
Together, these innovations deliver up to a 10x reduction in inference token cost compared with the previous generation - a critical milestone as AI models grow in size, context and reasoning depth.
DGX SuperPOD: The Blueprint for NVIDIA Rubin Scale-Out

Rubin-based DGX SuperPOD deployments will integrate:
NVIDIA DGX Vera Rubin NVL72 or DGX Rubin NVL8 systems
NVIDIA BlueField-4 DPUs for secure, software-defined infrastructure
NVIDIA Inference Context Memory Storage Platform for next-generation inference
NVIDIA ConnectX-9 SuperNICs
NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum-X Ethernet
NVIDIA Mission Control for automated AI infrastructure orchestration and operations
NVIDIA DGX SuperPOD with DGX Vera Rubin NVL72 unifies eight DGX Vera Rubin NVL72 systems, featuring 576 Rubin GPUs, to deliver 28.8 exaflops of FP4 performance and 600TB of fast memory. Each DGX Vera Rubin NVL72 system - combining 36 Vera CPUs, 72 Rubin GPUs and 18 BlueField-4 DPUs - enables a unified memory and compute space across the rack. With 260TB/s of aggregate NVLink throughput, it eliminates the need for model partitioning and allows the entire rack to operate as a single, coherent AI engine.
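The headline figures above are internally consistent; a quick arithmetic check, using only the numbers quoted in the announcement (eight NVL72 racks, 72 GPUs per rack, 50 petaflops of NVFP4 per Rubin GPU, 3.6TB/s of NVLink per GPU):

```python
# Arithmetic check of the DGX SuperPOD (Vera Rubin NVL72) figures.
racks = 8                  # DGX Vera Rubin NVL72 systems per SuperPOD
gpus_per_rack = 72         # Rubin GPUs per NVL72 rack
nvfp4_pflops_per_gpu = 50  # petaflops of NVFP4 compute per Rubin GPU
nvlink_gbs_per_gpu = 3600  # sixth-generation NVLink, 3.6TB/s in GB/s

total_gpus = racks * gpus_per_rack
print(total_gpus)          # 576

total_exaflops = total_gpus * nvfp4_pflops_per_gpu / 1000
print(total_exaflops)      # 28.8

rack_nvlink_tbs = gpus_per_rack * nvlink_gbs_per_gpu / 1000
print(rack_nvlink_tbs)     # 259.2, quoted as 260TB/s per rack
```

The per-rack NVLink aggregate works out to 259.2TB/s, so the 260TB/s figure appears to be the 72-GPU total, rounded.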
NVIDIA DGX SuperPOD with DGX Rubin NVL8 comprises 64 DGX Rubin NVL8 systems featuring a total of 512 Rubin GPUs. NVIDIA DGX Rubin NVL8 systems bring Rubin performance into a liquid-cooled form factor with x86 CPUs, giving organizations an efficient on-ramp to the Rubin era for any AI project in the develop-to-deploy pipeline. Powered by eight NVIDIA Rubin GPUs and sixth-generation NVLink, each DGX Rubin NVL8 delivers 5.5x the NVFP4 FLOPS of NVIDIA Blackwell systems.
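The same kind of check applies to the NVL8 configuration. Note that the aggregate NVFP4 figure below is derived, not quoted in the announcement, and assumes the same 50 petaflops per Rubin GPU as in the NVL72 rack:

```python
# GPU count for the DGX Rubin NVL8 SuperPOD, from the figures above.
systems = 64          # DGX Rubin NVL8 systems per SuperPOD
gpus_per_system = 8   # Rubin GPUs per NVL8 system

total_gpus = systems * gpus_per_system
print(total_gpus)     # 512

# Derived (assumption): aggregate NVFP4 at 50 PF per GPU, as in NVL72.
print(total_gpus * 50 / 1000)  # 25.6 exaflops
```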
Next-Generation Networking for AI Factories

The Rubin platform redefines the data center as a high-performance AI factory with revolutionary networking - NVIDIA Spectrum-6 Ethernet switches, NVIDIA Quantum-X800 InfiniBand switches, BlueField-4 DPUs and ConnectX-9 SuperNICs - designed to sustain the world's most massive AI workloads. By integrating these innovations into the NVIDIA DGX SuperPOD, the Rubin platform eliminates the traditional bottlenecks of scale, congestion and reliability.
Optimized Connectivity for Massive-Scale Clusters
The next-generation 800Gb/s end-to-end networking suite provides two purpose-built paths for AI infrastructure, ensuring peak efficiency whether using InfiniBand or Ethernet:
NVIDIA Quantum-X800 InfiniBand: Delivers the industry's lowest latency and highest performance for dedicated AI clusters. It utilizes Scalable Hierarchical Aggregation and Reduction Protocol (SHARP v4) and adaptive routing to offload collective operations to the network.
NVIDIA Spectrum-X Ethernet: Built on the Spectrum-6 Ethernet switch and ConnectX-9 SuperNIC, this platform brings predictable, high-performance scale-out and scale-across connectivity to AI factories using standard Ethernet protocols, optimized specifically for the east-west traffic patterns of AI workloads.
Engineering the Gigawatt AI Factory
These innovations represent an extreme codesign with the Rubin platform. By mastering congestion control and performance isolation, NVIDIA is paving the way for the next wave of gigawatt AI factories. This holistic approach ensures that as AI models grow in complexity, the networking fabric of the AI factory remains a catalyst for speed rather than a constraint.
NVIDIA Software Advances AI Factory Operations and Deployments

NVIDIA Mission Control - AI data center operation and orchestration software for NVIDIA Blackwell-based systems










