
As the newest member of the NVIDIA Blackwell architecture family, the NVIDIA Blackwell Ultra GPU builds on the architecture's core innovations to accelerate training and AI reasoning. It combines silicon advances with a new level of system-level integration, delivering greater performance, scalability, and efficiency for AI factories and the large-scale, real-time AI services they power.
With its energy-efficient dual-reticle design, high-bandwidth, large-capacity HBM3E memory subsystem, fifth-generation Tensor Cores, and breakthrough NVFP4 precision format, Blackwell Ultra raises the bar for accelerated computing. This in-depth look explains the architectural advances, why they matter, and how they translate into measurable gains for AI workloads.
Dual-reticle design: one GPU
Blackwell Ultra is composed of two reticle-sized dies connected by NVIDIA High-Bandwidth Interface (NV-HBI), a custom, power-efficient die-to-die interconnect that provides 10 TB/s of bandwidth. Blackwell Ultra is manufactured on the TSMC 4NP process and packs 208 billion transistors, 2.6x more than the NVIDIA Hopper GPU, all while functioning as a single, NVIDIA CUDA-programmed accelerator. This delivers a large increase in performance while preserving the familiar CUDA programming model that developers have relied on for nearly two decades.
Benefits
Unified compute domain: 160 Streaming Multiprocessors (SMs) across two dies, providing 640 fifth-generation Tensor Cores with 15 petaFLOPS of dense NVFP4 compute.
Full coherence: Shared L2 cache with fully coherent memory accesses.
Maximum silicon utilization: Peak performance per square millimeter.
Figure 1. The NVIDIA Blackwell Ultra GPU: two reticle-sized dies linked by the 10 TB/s NV-HBI interface. Each die contains a GigaThread Engine with MIG control, L2 cache, and eight GPCs, for a total of 640 fifth-generation Tensor Cores (15 PFLOPS dense NVFP4), alongside PCIe Gen 6 (256 GB/s), NVLink 5 (1,800 GB/s to NVSwitch), NVLink-C2C (900 GB/s CPU-GPU), and 288 GB of HBM3E across eight stacks (up to 8 TB/s)
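To see how this unified compute domain appears to software, here is a minimal CUDA runtime sketch that queries the properties called out above; the device index and the expected readings (160 SMs, one shared L2 cache, the full HBM3E capacity) are illustrative assumptions rather than guaranteed values.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);   // one logical CUDA device, both dies included
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("SMs:           %d\n", prop.multiProcessorCount);            // expected: 160 on Blackwell Ultra
    std::printf("L2 cache:      %d MB\n", prop.l2CacheSize / (1024 * 1024)); // single coherent L2 shared by both dies
    std::printf("Global memory: %.0f GB\n", prop.totalGlobalMem / 1e9);      // HBM3E exposed as one address space
    return 0;
}

Because NV-HBI makes the two dies one coherent accelerator, existing CUDA code sees a single device and needs no multi-GPU orchestration to use all of its SMs.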
Streaming multiprocessors: compute engines for the AI factory
As shown in Figure 1, the heart of Blackwell Ultra is its 160 Streaming Multiprocessors (SMs), organized into eight Graphics Processing Clusters (GPCs) in the full GPU implementation. Every SM, shown in Figure 2, is a self-contained compute engine housing:
128 CUDA Cores for FP32 and INT32 operations, with support for FP16/BF16 and other precisions.
4 fifth-generation Tensor Cores with NVIDIA second-generation Transformer Engine, optimized for FP8, FP6, and NVFP4.
256 KB of Tensor Memory (TMEM) for warp-synchronous storage of intermediate results, enabling higher reuse and reduced off-chip memory traffic.
Special Function Units (SFUs) for transcendental math and special operations used in AI kernels.
Figure 2. Blackwell Ultra Streaming Multiprocessor (SM) architecture, showing CUDA Cores, Tensor Cores, TMEM, shared memory, SFUs, and texture units
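In practice, the fifth-generation Tensor Cores and NVFP4 are usually reached through libraries such as cuBLAS, cuDNN, and Transformer Engine rather than hand-written kernels. Purely as an illustration of the warp-level Tensor Core programming pattern, the sketch below uses the long-standing CUDA WMMA API with FP16 inputs and FP32 accumulation on a single 16x16x16 tile; it is not NVFP4- or TMEM-specific, and the tile shape and launch configuration are assumptions chosen for brevity.

#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes a single 16x16x16 tile: C = A * B, FP16 inputs, FP32 accumulation.
__global__ void wmma_tile_gemm(const half *A, const half *B, float *C) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);              // start the accumulator at zero
    wmma::load_matrix_sync(a_frag, A, 16);          // leading dimension = 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag); // issued to the SM's Tensor Cores
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}

int main() {
    half *A, *B; float *C;
    cudaMalloc(&A, 16 * 16 * sizeof(half));
    cudaMalloc(&B, 16 * 16 * sizeof(half));
    cudaMalloc(&C, 16 * 16 * sizeof(float));
    cudaMemset(A, 0, 16 * 16 * sizeof(half));       // input initialization omitted in this sketch
    cudaMemset(B, 0, 16 * 16 * sizeof(half));
    wmma_tile_gemm<<<1, 32>>>(A, B, C);             // exactly one warp
    cudaDeviceSynchronize();
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}

Lower-precision paths such as FP8 and NVFP4, along with TMEM-resident intermediate results, follow the same tile-based idea but are typically generated by the compiler and library stack rather than written by hand.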