Across the globe, AI factories are rising - massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations.

Welcome to the age of AI factories - where the rules are being rewritten and the wiring doesn't look anything like the old internet. These aren't typical hyperscale data centers. They're something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs - not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It's the whole game.
This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won't cut it. What's needed is a layered design with bleeding-edge technologies - like co-packaged optics that once seemed like science fiction.
The complexity isn't a bug; it's the defining feature. AI infrastructure is diverging fast from everything that came before it, and without rethinking how the pipes connect, scale breaks down. Get the network layers wrong, and the whole machine grinds to a halt. Get them right, and you gain extraordinary performance.
With that shift comes weight - literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi-hundred-pound copper spine of a server rack: liquid-cooled manifolds, custom busbars, dense copper backbones. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up and out.
The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables - tightly wound and precisely routed. It moves almost as much data per second as the entire internet: 130 TB/s of GPU-to-GPU bandwidth, fully meshed.
This isn't just fast. It's foundational. The AI super-highway now lives inside the rack.
The Data Center Is the Computer
Training the modern large language models (LLMs) behind AI isn't about burning cycles on a single machine. It's about orchestrating the work of tens or even hundreds of thousands of GPUs that are the heavy lifters of AI computation.
These systems rely on distributed computing, splitting massive calculations across nodes (individual servers), where each node handles a slice of the workload. In training, those slices - typically massive matrices of numbers - need to be regularly merged and updated. That merging occurs through collective operations, such as all-reduce (which combines data from all nodes and redistributes the result) and all-to-all (where each node exchanges data with every other node).
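These two collectives are easier to see in miniature. The sketch below simulates them in plain Python, with each inner list standing in for one node's buffer - real systems run these over the network via libraries such as MPI or NCCL, so this is illustrative only, and the function names are ours:

```python
# Toy, single-process simulation of the two collectives described above.
# Each inner list stands in for one node's buffer; real deployments move
# this data over the network fabric, not through local memory.

def all_reduce(node_buffers):
    """Sum each element across all nodes, then give every node the
    complete result (combine + redistribute)."""
    total = [sum(vals) for vals in zip(*node_buffers)]
    return [list(total) for _ in node_buffers]

def all_to_all(node_buffers):
    """Node i sends its j-th chunk to node j - a distributed transpose
    in which every node exchanges data with every other node."""
    return [list(chunk) for chunk in zip(*node_buffers)]

if __name__ == "__main__":
    grads = [[1, 2], [3, 4], [5, 6]]      # per-node gradient slices
    print(all_reduce(grads))               # every node holds [9, 12]
    print(all_to_all([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
```

Note that in all-reduce every node finishes with identical data, while in all-to-all every node finishes with a different slice - which is why the two patterns stress the network so differently.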
These processes are highly sensitive to the speed and responsiveness of the network - what engineers call latency (delay) and bandwidth (data capacity). Shortfalls in either can stall training.
For inference - the process of running trained models to generate answers or predictions - the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users.
Traditional Ethernet was designed for single-server workloads - not for the demands of distributed AI. Jitter and inconsistent delivery were once tolerable; now they're a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance - and that legacy still shapes their latest generations.
Distributed computing requires a scale-out infrastructure built for zero-jitter operation - one that can handle bursts of extreme throughput, deliver low latency, maintain predictable and consistent RDMA performance, and isolate network noise. This is why InfiniBand networking is the gold standard for high-performance computing supercomputers and AI factories.
With NVIDIA Quantum InfiniBand, collective operations run inside the network itself using Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology, doubling data bandwidth for reductions. It uses adaptive routing and telemetry-based congestion control to spread flows across paths, guarantee deterministic bandwidth and isolate noise. These optimizations let InfiniBand scale AI communication with precision. It's why NVIDIA Quantum infrastructure connects the majority of the systems on the TOP500 list of the world's most powerful supercomputers, demonstrating 35% growth in just two years.
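A back-of-envelope model shows where that doubling comes from. In a host-based ring all-reduce, each node must transmit roughly 2·(N−1)/N times the buffer size; with switch-side aggregation, each node sends its buffer once and receives the reduced result once. The sketch below is our own simplified traffic model, not NVIDIA's published methodology:

```python
# Simplified per-node send traffic for an all-reduce of `size` bytes
# across `n` nodes. Assumption (ours): ring algorithm for the host-based
# case; one send up, one reduced result down for in-network aggregation.

def ring_send_bytes(size, n):
    # Reduce-scatter + all-gather: each node transmits 2*(n-1)/n buffers.
    return 2 * (n - 1) / n * size

def in_network_send_bytes(size, n):
    # The switch aggregates, so each node sends its buffer exactly once.
    return size

size, n = 1_000_000, 64
ratio = ring_send_bytes(size, n) / in_network_send_bytes(size, n)
print(f"traffic reduction factor: {ratio:.3f}")  # approaches 2 as n grows
```

At 64 nodes the factor is already about 1.97, which is why moving the reduction into the switch effectively doubles the usable bandwidth of the fabric.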
For clusters spanning dozens of racks, NVIDIA Quantum-X800 InfiniBand switches push InfiniBand to new heights. Each switch provides 144 ports of 800 Gbps connectivity, featuring hardware-based SHARPv4, adaptive routing and telemetry-based congestion control. The platform integrates co-packaged silicon photonics to minimize the distance between electronics and optics, reducing power consumption and latency. Paired with NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s per GPU, this fabric links trillion-parameter models and drives in-network compute.
But hyperscalers and enterprises have invested billions in their Ethernet software infrastructure. They need a quick path forward that uses the existing ecosystem for AI workloads. Enter NVIDIA Spectrum-X: a new kind of Ethernet purpose-built for distributed AI.
Spectrum-X Ethernet: