At SC25, NVIDIA unveiled advances across NVIDIA BlueField DPUs, next-generation networking, quantum computing, national research, AI physics and more - as accelerated systems drive the next chapter in AI supercomputing.

Ian Buck, vice president and general manager of accelerated computing at NVIDIA, delivered a special address at SC25. NVIDIA also highlighted storage innovations powered by the NVIDIA BlueField-4 data processing unit, part of the full-stack BlueField platform that accelerates gigascale AI infrastructure.
More details also came on NVIDIA Quantum-X Photonics InfiniBand CPO networking switches - enabling AI factories to drastically reduce energy consumption and operational costs - including that TACC, Lambda and CoreWeave plan to integrate them.
And NVIDIA founder and CEO Jensen Huang made a surprise appearance at the St. Louis event, making a few remarks about NVIDIA's supercomputing news to the crowd at SC25.
"The big news this year is Grace Blackwell, and you might have seen the production of our second-generation Grace platform, called GB300, is going incredibly," Huang said. "We are, basically, manufacturing supercomputers like chiclets."
He also came bearing gifts of the most compact supercomputers on the planet: NVIDIA DGX Spark AI supercomputers.
"So this is the DGX Spark. And apparently several of you - 10 of you - are going to win one of these," he said. "Tell me this isn't gonna look great under a Christmas tree."
Last month, NVIDIA began shipping DGX Spark, the world's smallest AI supercomputer. DGX Spark packs a petaflop of AI performance and 128GB of unified memory into a desktop form factor, enabling developers to run inference on models up to 200 billion parameters and fine-tune models locally. Built on the Grace Blackwell architecture, it integrates NVIDIA GPUs, CPUs, networking, CUDA libraries and the full NVIDIA AI software stack.
DGX Spark's unified memory and NVIDIA NVLink-C2C deliver 5x the bandwidth of PCIe Gen5, enabling faster GPU-CPU data exchange. This boosts training efficiency for large models, reduces latency and supports seamless fine-tuning workflows - all within a desktop form factor.
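A quick back-of-envelope check shows why 128GB of unified memory is enough for inference on models of that size, assuming 4-bit quantized weights (a common local-inference setup, not a spec from the announcement):

```python
# Sanity check: do 200B parameters fit in 128 GB of unified memory?
# Assumes 4-bit (0.5 byte per parameter) quantized weights.
params = 200e9
bytes_per_param = 0.5                      # 4-bit quantization
weights_gb = params * bytes_per_param / 1e9
print(f"{weights_gb:.0f} GB of weights")   # 100 GB, leaving headroom
# within 128 GB for the KV cache and activations
```

With unified memory, that entire footprint is addressable by both CPU and GPU without explicit host-device copies.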
NVIDIA Apollo Unveiled as Latest Open Model Family for AI Physics

NVIDIA Apollo, a family of open models for AI physics, was also introduced at SC25. Applied Materials, Cadence, Lam Research, Luminary Cloud, KLA, PhysicsX, Rescale, Siemens and Synopsys are among the industry leaders adopting these open models to simulate and accelerate their design processes in a broad range of fields - electronic design automation and semiconductors, computational fluid dynamics, structural mechanics, electromagnetics, weather and more.
The family of open models harnesses the latest developments in AI physics, combining best-in-class machine learning architectures - such as neural operators, transformers and diffusion methods - with domain-specific knowledge. Apollo will provide pretrained checkpoints and reference workflows for training, inference and benchmarking, allowing developers to integrate and customize the models for their specific needs.
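To give a concrete sense of one of those architectures, the core building block of a Fourier neural operator is a spectral convolution that learns its weights in frequency space. The sketch below is a generic PyTorch illustration of that idea, not Apollo's actual API; the class name and shapes are ours:

```python
import torch

class SpectralConv1d(torch.nn.Module):
    """Minimal 1D spectral convolution - the core block of a Fourier
    neural operator. Generic illustration, not NVIDIA Apollo's API."""
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes  # number of low-frequency Fourier modes kept
        scale = 1.0 / channels
        self.weights = torch.nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, n) real-valued field samples
        x_ft = torch.fft.rfft(x)                       # to frequency space
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(      # mix channels per mode
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.size(-1))   # back to physical space

layer = SpectralConv1d(channels=4, modes=8)
y = layer(torch.randn(2, 4, 64))  # output has the same shape as the input
```

Because the learned transform acts on Fourier modes rather than grid points, the same trained layer can be evaluated at different spatial resolutions - one reason neural operators suit physics surrogates.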
NVIDIA Warp Supercharges Physics Simulations

NVIDIA Warp is a purpose-built open-source Python framework delivering GPU acceleration of up to 245x for computational physics and AI workloads.
NVIDIA Warp provides a structured approach for simulation, robotics and machine learning workloads, combining the accessibility of Python with performance comparable to native CUDA code.
Warp supports the creation of GPU-accelerated 3D simulation workflows that integrate with ML pipelines in PyTorch, JAX, NVIDIA PhysicsNeMo and NVIDIA Omniverse. This allows developers to run complex simulation tasks and generate data at scale without leaving the Python programming environment.
By offering CUDA-level performance with Python-level productivity, Warp simplifies the development of high-performance simulation workflows. It is designed to accelerate AI research and engineering by reducing barriers to GPU programming, making advanced simulation and data generation more efficient and widely accessible.
Siemens, Neural Concept and Luminary Cloud, among others, are adopting NVIDIA Warp.
NVIDIA BlueField-4 DPU: The Processor Powering the Operating System of AI Factories

Unveiled at GTC Washington, D.C., NVIDIA BlueField-4 DPUs power the operating system of AI factories. By offloading, accelerating and isolating critical data center functions - networking, storage and security - they free up CPUs and GPUs to focus entirely on compute-intensive workloads.
BlueField-4, combining a 64-core NVIDIA Grace CPU and NVIDIA ConnectX-9 networking, unlocks unprecedented performance, efficiency and zero-trust security at scale. It supports multi-tenant environments, rapid data access and real-time protection, with native integration of NVIDIA DOCA microservices for scalable, containerized AI operations. Together, these capabilities are transforming data centers into intelligent, software-defined engines for trillion-token AI and beyond.
As AI factories and supercomputing centers continue to scale in size and capability, they require faster, more intelligent storage infrastructure to manage structured, unstructured and AI-native data for large-scale training and inference.
Leading storage innovators - DDN, VAST Data and WEKA - are adopting BlueField-4 to redefine performance and efficiency for AI and scientific workloads.
DDN is building next-generation AI factories, accelerating data pipelines to maximize GPU utilization for AI and HPC workloads.
VAST Data is advancing the AI pipeline with intelligent data movement and real-time efficiency across large-scale AI clusters.
WEKA is launching its NeuralMesh architecture on BlueField-4, running storage services directly on the DPU to simplify