
NVIDIA and OpenAI began pushing the boundaries of AI with the launch of NVIDIA DGX back in 2016. The collaborative AI innovation continues with the OpenAI gpt-oss-20b and gpt-oss-120b launch. NVIDIA has optimized both new open-weight models for accelerated inference performance on NVIDIA Blackwell architecture, delivering up to 1.5 million tokens per second (TPS) on an NVIDIA GB200 NVL72 system.
The gpt-oss models are text-reasoning LLMs with chain-of-thought and tool-calling capabilities, built on the popular mixture of experts (MoE) architecture with SwiGLU activations. The attention layers use RoPE with a 128K context, alternating between full-context attention and a sliding 128-token window. The models are released in FP4 precision, which fits on a single 80 GB data center GPU and is natively supported by Blackwell.
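As a rough illustration of the alternating attention pattern described above, the sketch below builds causal masks for full-context layers and 128-token sliding-window layers. Which layers use which pattern, and the mask-based formulation itself, are assumptions for illustration only, not the model's actual implementation.
import torch
def attention_mask(seq_len: int, layer_idx: int, window: int = 128) -> torch.Tensor:
    # Causal mask: position i may attend to positions j <= i.
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    if layer_idx % 2 == 0:
        # In this sketch, even layers use full-context causal attention.
        return causal
    # Odd layers attend only to the most recent `window` tokens.
    idx = torch.arange(seq_len)
    offsets = idx.unsqueeze(1) - idx.unsqueeze(0)  # offsets[i, j] = i - j
    return causal & (offsets < window)
For example, attention_mask(8, layer_idx=1, window=4) keeps only the four most recent tokens visible at each position, while layer_idx=0 returns the full causal mask.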
The models were trained on NVIDIA H100 Tensor Core GPUs, with gpt-oss-120b requiring over 2.1 million hours and gpt-oss-20b about 10x less. NVIDIA worked with several top open-source frameworks such as Hugging Face Transformers, Ollama, and vLLM, in addition to NVIDIA TensorRT-LLM for optimized kernels and model enhancements. This blog post showcases how NVIDIA has integrated gpt-oss across the software platform to meet developers' needs.
Model name | Transformer blocks | Total parameters | Active params per token | # of experts | Active experts per token | Input context length
gpt-oss-20b | 24 | 20B | 3.6B | 32 | 4 | 128K
gpt-oss-120b | 36 | 117B | 5.1B | 128 | 4 | 128K
Table 1. OpenAI gpt-oss-20b and gpt-oss-120b model specifications, including total parameters, active parameters, number of experts, and input context length
NVIDIA also worked with OpenAI and the community to maximize performance, adding features such as:
TensorRT-LLM Gen for attention prefill, attention decode, and MoE low-latency on Blackwell.
CUTLASS MoE kernels on Blackwell.
XQA kernel for specialized attention on Hopper.
Optimized attention and MoE routing kernels, available through the FlashInfer LLM serving kernel library.
OpenAI Triton kernel MoE support, which is used in both TensorRT-LLM and vLLM.
Deploy using vLLM
NVIDIA collaborated with vLLM to verify accuracy and to analyze and optimize performance for the Hopper and Blackwell architectures. Data center developers can use NVIDIA-optimized kernels through the FlashInfer LLM serving kernel library.
vLLM recommends using uv for Python dependency management. You can use vLLM to spin up an OpenAI-compatible web server. The following command automatically downloads the model and starts the server. Refer to the documentation and the vLLM Cookbook guide for more details.
uv run --with vllm vllm serve openai/gpt-oss-20b
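Once the server is running, you can exercise its OpenAI-compatible API from any OpenAI client. The minimal sketch below assumes the server's default address of http://localhost:8000/v1 and the openai Python package; adjust the base URL if you changed the serving options.
from openai import OpenAI
# The local vLLM server does not validate the API key, so any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[{"role": "user", "content": "Summarize mixture-of-experts routing in two sentences."}],
    max_tokens=256,
)
print(response.choices[0].message.content)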
Deploy using TensorRT-LLM
The optimizations are available on the NVIDIA/TensorRT-LLM GitHub repository, where developers can use the deployment guide to launch a high-performance server. The guide downloads the model checkpoints from Hugging Face; NVIDIA also collaborated on the developer experience for the new models through the Transformers library. The guide then provides a Docker container and guidance on configuring performance for both low-latency and max-throughput cases.
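For quick experiments outside the Docker-based server flow, TensorRT-LLM also exposes a Python LLM API. The snippet below is a minimal sketch of that path; treat the exact arguments as indicative and defer to the deployment guide in the repository for the supported options.
from tensorrt_llm import LLM, SamplingParams
# Downloads the checkpoint from Hugging Face and loads an optimized engine.
llm = LLM(model="openai/gpt-oss-20b")
params = SamplingParams(max_tokens=128, temperature=0.7)
outputs = llm.generate(["Explain why MoE models activate only a few experts per token."], params)
print(outputs[0].outputs[0].text)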
More than a million tokens per second with GB200 NVL72
NVIDIA engineers partnered closely with OpenAI to ensure that the new gpt-oss-120b and gpt-oss-20b models deliver accelerated performance on Day 0 across both the NVIDIA Blackwell and NVIDIA Hopper platforms.
At launch, based on early performance measurements, a single GB200 NVL72 rack-scale system is expected to serve the larger, more computationally demanding gpt-oss-120b model at 1.5 million tokens per second, or about 50,000 concurrent users (roughly 30 TPS per user). Blackwell features many architectural capabilities that accelerate inference performance, including a second-generation Transformer Engine with FP4 Tensor Cores and fifth-generation NVIDIA NVLink and NVIDIA NVLink Switch, whose bandwidth enables 72 Blackwell GPUs to act as a single, massive GPU.
The performance, versatility, and pace of innovation of the NVIDIA platform enable the ecosystem to serve the latest models on Day 0 with high throughput and low cost per token.
Try the optimized model with NVIDIA Launchable
Deployment with TensorRT-LLM is also available through the Python API in a JupyterLab notebook from the OpenAI Cookbook, offered as an NVIDIA Launchable directly in the build platform, where developers can test GPUs from multiple cloud providers. You can deploy the optimized model with a single click in a pre-configured environment.
Figure. The 'Select your Compute' page in the NVIDIA Launchable console, where users can choose among H200, H100, A100, L40S, and A10 GPU options.