To get the most out of AI, optimizations are critical. When developers think about optimizing AI models for inference, model compression techniques such as quantization, distillation, and pruning typically come to mind. The most common of the three, without a doubt, is quantization, largely because it tends to preserve task-specific accuracy after optimization and enjoys a broad choice of supported frameworks and techniques. Yet the main challenge with model quantization is the potential loss of model intelligence or task-specific accuracy, particularly when transitioning from higher-precision data types like FP32 down to the latest FP4 format. NVIDIA Blackwell provides maximum flexibility with support for FP64, FP32/TF32, FP16/BF16, INT8/FP8, FP6, and FP4 data formats. Figure 1 compares the smallest supported floating-point data type and corresponding dense/sparse performance across NVIDIA Ampere, Hopper, and Blackwell GPUs, showcasing the evolution of performance and data type support across GPU generations.
[Figure: bar chart comparing dense/sparse petaflops in the smallest supported floating-point data type across NVIDIA GPU generations: A100 (0.3/0.6), H100 (1.9/3.9), B200 (9/18), B300 (13/18), GB200 (10/20), and GB300 (15/20)]

Figure 1. Peak low-precision performance across NVIDIA GPU architectures

The latest fifth-generation NVIDIA Blackwell Tensor Cores pave the way for various ultra-low precision formats, enabling both research and real-world scenarios. Table 1 compares the three primary 4-bit floating-point formats supported on NVIDIA Blackwell (FP4, MXFP4, and NVFP4), highlighting key differences in structure, memory usage, and accuracy. It illustrates how NVFP4 builds on the simplicity of earlier formats while maintaining model accuracy.
| Feature | FP4 (E2M1) | MXFP4 | NVFP4 |
|---|---|---|---|
| Format structure | 4 bits (1 sign, 2 exponent, 1 mantissa) plus software scaling factor | 4 bits (1 sign, 2 exponent, 1 mantissa) plus 1 shared power-of-two scale per 32-value block | 4 bits (1 sign, 2 exponent, 1 mantissa) plus 1 shared FP8 scale per 16-value block |
| Accelerated hardware scaling | No | Yes | Yes |
| Memory | 25% of FP16 | 25% of FP16 | 25% of FP16 |
| Accuracy | Risk of noticeable accuracy drop compared to FP8 | Risk of noticeable accuracy drop compared to FP8 | Lower risk of noticeable accuracy drop, particularly for larger models |

Table 1. Comparison of Blackwell-supported 4-bit floating-point formats

This post introduces NVFP4, a state-of-the-art data type, and explains how it was purpose-built to help developers scale more efficiently on Blackwell, with the best accuracy at ultra-low precision.
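The memory row in Table 1 can be made concrete by counting the scale overhead each block format carries. A short sketch (our arithmetic, not figures from any spec sheet; the negligible per-tensor scale is ignored) of effective bits per stored value:

```python
# Effective bits per value for the block-scaled 4-bit formats in Table 1.
# Each value costs its own 4 bits plus an amortized share of the block scale.
def bits_per_value(elem_bits: int, scale_bits: int, block_size: int) -> float:
    return elem_bits + scale_bits / block_size

mxfp4 = bits_per_value(4, 8, 32)  # 8-bit power-of-two scale per 32 values
nvfp4 = bits_per_value(4, 8, 16)  # FP8 (E4M3) scale per 16 values

print(mxfp4, nvfp4)  # 4.25 4.5
```

So NVFP4's finer-grained scaling costs an extra 0.25 bits per value over MXFP4, still roughly a quarter of FP16's 16 bits, in exchange for tracking the local dynamic range twice as closely.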
What is NVFP4?

NVFP4 is an innovative 4-bit floating point format introduced with the NVIDIA Blackwell GPU architecture. NVFP4 builds on the concept of low-bit micro floating-point formats and grants greater flexibility to developers by providing an additional format to choose from.
The structure of NVFP4 is similar to most 4-bit floating-point formats (E2M1), meaning that it has 1 sign bit, 2 exponent bits, and 1 mantissa bit. Representable values range from approximately -6 to 6; the positive values are 0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, and 6.0 (mirrored for the negative range).
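These values follow directly from the E2M1 encoding rules. A minimal Python sketch (ours, not an NVIDIA API) that decodes all 16 possible bit patterns, using an exponent bias of 1 and the usual subnormal rule when the exponent field is zero:

```python
# Enumerate every representable E2M1 (FP4) value from first principles.
# Layout: 1 sign bit, 2 exponent bits (bias 1), 1 mantissa bit.
def decode_e2m1(bits: int) -> float:
    sign = -1.0 if (bits >> 3) & 1 else 1.0
    exp = (bits >> 1) & 0b11
    man = bits & 0b1
    if exp == 0:                              # subnormal: no implicit leading 1
        return sign * (man / 2) * 2 ** (1 - 1)
    return sign * (1 + man / 2) * 2 ** (exp - 1)

values = sorted({decode_e2m1(b) for b in range(16)})
print(values)
# [-6.0, -4.0, -3.0, -2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
```

Note the coarse spacing near the top of the range: the gap between 4 and 6 is why the per-block scale described next matters so much for accuracy.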
One of the key challenges in ultra-low precision formats is maintaining numerical accuracy across a wide dynamic range of tensor values. NVFP4 addresses this concern with two architectural innovations that make it highly effective for AI inference:
High-precision scale encoding
A two-level micro-block scaling strategy
This strategy applies a fine-grained E4M3 scaling factor to each 16-value micro-block, a compact subset of the larger tensor, while also leveraging a second-level FP32 scalar applied per tensor. Together, these two levels of scaling enable more accurate value representation and significantly reduce quantization error (Figure 2).
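The two-level scheme can be sketched in NumPy as a quantize-dequantize round trip. This is an illustrative model under stated assumptions, not NVIDIA's implementation: the function name is ours, the block scale is kept in float32 rather than rounded to E4M3, and values are snapped to the E2M1 grid by nearest-neighbor search:

```python
import numpy as np

# Positive magnitudes representable in E2M1 (negatives are mirrored).
E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0], dtype=np.float32)

def fake_quant_nvfp4(x, block=16):
    """Simulate NVFP4-style two-level scaling (hypothetical sketch).

    In hardware the per-block scale would be stored as FP8 (E4M3);
    here it stays in float32 and that rounding step is skipped.
    """
    x = np.asarray(x, dtype=np.float32).ravel()
    n = len(x)
    xp = np.pad(x, (0, (-n) % block))           # pad to a whole number of blocks
    # Level 2: one FP32 scale for the whole tensor.
    t_scale = float(np.abs(xp).max()) or 1.0
    blocks = (xp / t_scale).reshape(-1, block)
    # Level 1: one scale per 16-value micro-block, mapping the block's
    # largest magnitude onto the largest E2M1 value (6).
    b_scale = np.abs(blocks).max(axis=1, keepdims=True) / 6.0
    b_scale[b_scale == 0] = 1.0
    scaled = blocks / b_scale
    # Snap each scaled value to the nearest representable E2M1 magnitude.
    idx = np.abs(np.abs(scaled)[..., None] - E2M1_GRID).argmin(axis=-1)
    q = np.sign(scaled) * E2M1_GRID[idx]
    # Dequantize: undo both scaling levels.
    return (q * b_scale * t_scale).ravel()[:n]
```

Because each 16-value block gets its own scale, an outlier in one block cannot crush the resolution available to its neighbors, which is exactly the quantization-error reduction the two-level design targets.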
[Figure: diagram of NVFP4's internal 4-bit structure (E2M1: sign, exponent, mantissa), showing groups of 16 values each sharing an FP8 (E4M3) scale factor, with blocks then globally normalized by a higher-precision FP32 (E8M23) scale factor]

Figure 2. NVFP4 two-level micro-block scaling