Open-source AI is accelerating innovation across industries, and NVIDIA DGX Spark and DGX Station are built to help developers turn innovation into impact.

At the CES trade show, NVIDIA today showed how the DGX Spark and DGX Station deskside AI supercomputers let developers harness the latest open and frontier AI models on a local system, from 100-billion-parameter models on DGX Spark to 1-trillion-parameter models on DGX Station.
Powered by the NVIDIA Grace Blackwell architecture, with large unified memory and petaflop-level AI performance, these systems give developers new capabilities to develop locally and easily scale to the cloud.
Advancing Performance Across Open-Source AI Models

A breadth of highly optimized open models that would previously have required a data center to run can now be accelerated at the desktop on DGX Spark and DGX Station, thanks to continual advancements in model optimization and collaborations with the open-source community.
Preconfigured with NVIDIA AI software and NVIDIA CUDA-X libraries, DGX Spark provides powerful, plug-and-play optimization for developers, researchers and data scientists to build, fine-tune and run AI.
DGX Spark provides a foundation for all developers to run the latest AI models at their desks, while DGX Station enables enterprises and research labs to run more advanced, large-scale frontier models. Both systems support the latest frameworks and open-source models - including the recently announced NVIDIA Nemotron 3 models - right from the desktop.
The NVIDIA Blackwell architecture powering DGX Spark includes the NVFP4 data format, which compresses AI models by up to 70% and boosts performance without sacrificing model intelligence.
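NVFP4 stores 4-bit values with a shared FP8 (E4M3) scale for each 16-element micro-block, so its effective cost is roughly 4.5 bits per parameter. A back-of-the-envelope sketch of what that means for a 100-billion-parameter model (illustrative arithmetic, not an official NVIDIA figure):

```python
# Illustrative footprint math for NVFP4: 4-bit values plus one FP8 (E4M3)
# scale shared by each 16-element micro-block -> ~4.5 effective bits/param.
params = 100e9                        # 100-billion-parameter model

bits_bf16 = 16
bits_nvfp4 = 4 + 8 / 16               # 4.5 effective bits per parameter

def to_gb(bits):
    """Convert bits-per-parameter into gigabytes of model weights."""
    return params * bits / 8 / 1e9

print(f"BF16 weights:  {to_gb(bits_bf16):.0f} GB")         # ~200 GB
print(f"NVFP4 weights: {to_gb(bits_nvfp4):.2f} GB")        # ~56 GB
print(f"Reduction:     {1 - bits_nvfp4 / bits_bf16:.0%}")  # ~72%, in line with "up to 70%"
```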
NVIDIA's collaborations with the open-source software ecosystem, such as its work with llama.cpp, are pushing performance further, delivering a 35% average performance uplift when running state-of-the-art AI models on DGX Spark. llama.cpp also includes a quality-of-life upgrade that speeds up model loading times.
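As a rough illustration of what running such a model locally looks like, here is a minimal sketch using the llama-cpp-python bindings for llama.cpp; the model path is a placeholder for any locally downloaded GGUF checkpoint:

```python
# Minimal local-inference sketch with the llama-cpp-python bindings for llama.cpp.
# The GGUF path below is a placeholder, not a specific NVIDIA-provided model.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model.gguf",  # placeholder: path to a local GGUF file
    n_gpu_layers=-1,                   # offload every layer to the GPU
    n_ctx=8192,                        # context window size
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain NVFP4 in two sentences."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Setting `n_gpu_layers=-1` offloads the whole model to the GPU, which is the natural configuration on a unified-memory system like DGX Spark.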
DGX Station, with the GB300 Grace Blackwell Ultra Superchip and 775GB of coherent memory, can run models of up to 1 trillion parameters at FP4 precision - giving frontier AI labs cutting-edge compute capability for large-scale models from the desktop. These include advanced models such as Kimi-K2 Thinking, DeepSeek-V3.2, Mistral Large 3, Meta Llama 4 Maverick, Qwen3 and OpenAI gpt-oss-120b.
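Extending the same back-of-the-envelope arithmetic (again illustrative, not an official figure) shows why a trillion-parameter model fits in that memory budget:

```python
# The same illustrative arithmetic at trillion-parameter scale.
params = 1e12
weights_gb = params * 4.5 / 8 / 1e9                       # ~4.5 effective bits/param
print(f"FP4 weights:         ~{weights_gb:.0f} GB")       # ~562 GB
print(f"Remaining of 775 GB: ~{775 - weights_gb:.0f} GB") # headroom for KV cache, activations
```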
"NVIDIA GB300 is typically deployed as a rack-scale system," said Kaichao You, core maintainer of vLLM. "This makes it difficult for projects like vLLM to test and develop directly on the powerful GB300 superchip. DGX Station changes this dynamic. By delivering GB300 in a compact, single-system deskside form factor, DGX Station enables vLLM to test and develop GB300-specific features at a significantly lower cost. This accelerates development cycles and makes it easy for vLLM to continuously validate and optimize against GB300."
"DGX Station brings data-center-class GPU capability directly into my room," said Jerry Zhou, community contributor to SGLang. "It is powerful enough to serve very large models like Qwen3-235B, test training frameworks with large model configurations and develop CUDA kernels with extremely large matrix sizes, all locally, without relying on cloud racks. This dramatically shortens the iteration loop for systems and framework development."
NVIDIA will be showcasing the capabilities of DGX Station live at CES, demonstrating:
- LLM pretraining running at a blistering 250,000 tokens per second.
- A large-scale visualization of millions of data points grouped into category clusters, using a topic modeling workflow with machine learning algorithms accelerated by the NVIDIA cuML library (see the sketch after this list).
- Visualizing massive knowledge databases with high accuracy using Text to Knowledge Graph and Llama 3.3 Nemotron Super 49B.
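One plausible shape for that cuML-accelerated clustering step, sketched here with random vectors standing in for real document embeddings (the pipeline details are assumptions, not the demo's actual code):

```python
# Hypothetical sketch of a GPU topic-clustering step with NVIDIA cuML;
# random vectors stand in for real document embeddings.
import cupy as cp
from cuml.manifold import UMAP
from cuml.cluster import HDBSCAN

docs = cp.random.random((1_000_000, 384), dtype=cp.float32)

# Project the embeddings to 2D for plotting, entirely on the GPU.
coords = UMAP(n_components=2, n_neighbors=15, min_dist=0.1).fit_transform(docs)

# Group the projected points into topic clusters.
clusterer = HDBSCAN(min_cluster_size=100)
clusterer.fit(coords)
labels = clusterer.labels_   # -1 marks noise; other integers are cluster IDs
```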
Expanding AI and Creator Workflows

DGX Spark and DGX Station are purpose-built to support the full AI development lifecycle - from prototyping and fine-tuning to inference and data science - for a wide range of industry-specific AI applications in healthcare, robotics, retail, creative workflows and more.
For creators, the latest diffusion and video generation models - including Black Forest Labs' FLUX.2 and FLUX.1 and Alibaba's Qwen-Image - now support NVFP4, reducing memory footprint and accelerating performance. And Lightricks' new LTX-2 video model is now available for download, including NVFP8 quantized checkpoints for NVIDIA GPUs, delivering quality on par with the top cloud models.
Live CES demonstrations highlight how DGX Spark can offload demanding video generation workloads from creator laptops - delivering 8x acceleration compared with a top-of-the-line MacBook Pro with M4 Max - and free local systems for uninterrupted creative work.
The open-source RTX Remix modding platform is expected to soon empower 3D artists and modders to use DGX Spark to create faster with generative AI. Additional CES demonstrations showcase how a mod team can offload all of their asset creation to DGX Spark, freeing up their PCs to mod without pauses and enabling them to view in-game changes in real time.
AI coding assistants are also transforming developer productivity. At CES, NVIDIA is demonstrating a local CUDA coding assistant powered by NVIDIA Nsight on DGX Spark, which allows developers to keep source code local and secure while benefiting from AI-assisted enterprise development.
Industry Leaders Validate the Shift to Local AI

As demand grows for secure, high-performance AI at the edge, DGX Spark is gaining momentum across the industry.
Software leaders, open-source innovators and global workstation partners are adopting DGX Spark to power local inference, agentic workflows and retrieval-augmented generation without the complexity of centralized infrastructure.