
The top 10 most intelligent open-source models all use a mixture-of-experts architecture.
Kimi K2 Thinking, DeepSeek-R1, Mistral Large 3 and other MoE models run 10x faster on NVIDIA GB200 NVL72 than on NVIDIA HGX H200, enabling one-tenth the cost per token.
A look under the hood of virtually any frontier model today will reveal a mixture-of-experts (MoE) model architecture that mimics the efficiency of the human brain.
Just as the brain activates specific regions based on the task, MoE models divide work among specialized experts, activating only the relevant ones for each token. This results in faster, more efficient token generation without a proportional increase in compute.
The industry has already recognized this advantage. On the independent Artificial Analysis (AA) leaderboard, the top 10 most intelligent open-source models use an MoE architecture, including DeepSeek AI's DeepSeek-R1, Moonshot AI's Kimi K2 Thinking, OpenAI's gpt-oss-120B and Mistral AI's Mistral Large 3.
However, scaling MoE models in production while delivering high performance and low cost per token is notoriously difficult. The extreme codesign of NVIDIA GB200 NVL72 systems combines hardware and software optimizations for maximum performance and efficiency, making it practical and straightforward to scale MoE models.
The Kimi K2 Thinking MoE model - ranked as the most intelligent open-source model on the AA leaderboard - sees a 10x performance leap on the NVIDIA GB200 NVL72 rack-scale system compared with NVIDIA HGX H200. This 10x performance leap, also seen on other MoE models such as DeepSeek-R1 and Mistral Large 3, enables one-tenth the cost per token and underscores why NVIDIA's full-stack inference platform is the key to unlocking their full potential.
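The arithmetic behind the cost claim is a simple reciprocal relationship: assuming the cost of operating the serving system per hour stays roughly comparable, cost per token is that hourly rate divided by token throughput, so 10x the throughput means about one-tenth the cost per token. The throughput and dollar figures in the sketch below are hypothetical placeholders used only to show the relationship.

```python
def cost_per_token(system_cost_per_hour: float, tokens_per_second: float) -> float:
    """Cost per generated token for a system with a fixed hourly operating cost."""
    tokens_per_hour = tokens_per_second * 3600
    return system_cost_per_hour / tokens_per_hour

# Hypothetical figures: same hourly operating cost, 10x the token throughput.
baseline = cost_per_token(system_cost_per_hour=100.0, tokens_per_second=1_000)
faster = cost_per_token(system_cost_per_hour=100.0, tokens_per_second=10_000)
print(f"relative cost per token: {faster / baseline:.1f}x")  # 0.1x, i.e. one-tenth
```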
What Is MoE, and Why Has It Become the Standard for Frontier Models?

Until recently, the industry standard for building smarter AI was simply building bigger, dense models that use all of their model parameters - often hundreds of billions for today's most capable models - to generate every token. While powerful, this approach requires immense computing power and energy, making it challenging to scale.
Much like the human brain relies on specific regions to handle different cognitive tasks - whether processing language, recognizing objects or solving a math problem - MoE models comprise several specialized experts. For any given token, only the most relevant ones are activated by a router. This design means that even though the overall model may contain hundreds of billions of parameters, generating a token involves using only a small subset - often just tens of billions.
Like the human brain uses specific regions for different tasks, mixture-of-experts models use a router to select only the most relevant experts to generate every token.

By selectively engaging only the experts that matter most, MoE models achieve higher intelligence and adaptability without a matching rise in computational cost. This makes them the foundation for efficient AI systems optimized for performance per dollar and per watt - generating significantly more intelligence for every unit of energy and capital invested.
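To make the routing step concrete, here is a minimal sketch of top-k expert selection in PyTorch. The hidden size, expert count and k value are illustrative placeholders rather than the configuration of any particular model, and the per-expert loop stands in for the batched grouped computations a real serving stack would use.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Toy mixture-of-experts layer: a router picks k experts per token."""

    def __init__(self, hidden_size=1024, num_experts=64, k=4):
        super().__init__()
        self.k = k
        # Router: a linear layer that scores every expert for each token.
        self.router = nn.Linear(hidden_size, num_experts)
        # Experts: small, independent feed-forward networks.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_size, 4 * hidden_size),
                nn.GELU(),
                nn.Linear(4 * hidden_size, hidden_size),
            )
            for _ in range(num_experts)
        )

    def forward(self, x):                      # x: [num_tokens, hidden_size]
        scores = self.router(x)                # [num_tokens, num_experts]
        weights, indices = torch.topk(scores, self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize the k selected scores
        out = torch.zeros_like(x)
        # Only the k selected experts run for each token; the rest stay idle.
        for slot in range(self.k):
            for expert_id in indices[:, slot].unique().tolist():
                mask = indices[:, slot] == expert_id
                out[mask] += weights[mask, slot, None] * self.experts[expert_id](x[mask])
        return out
```

In production inference stacks, the selected tokens are additionally dispatched to whichever GPUs host the chosen experts, which is where the communication patterns discussed below come in.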
Given these advantages, it is no surprise that MoE has rapidly become the architecture of choice for frontier models, adopted by over 60% of open-source AI model releases this year. Since early 2023, the architecture has enabled a nearly 70x increase in model intelligence - pushing the limits of AI capability.
"Since early 2025, nearly all leading frontier models use MoE designs. Our pioneering work with OSS mixture-of-experts architecture, starting with Mixtral 8x7B two years ago, ensures advanced intelligence is both accessible and sustainable for a broad range of applications," said Guillaume Lample, cofounder and chief scientist at Mistral AI. "Mistral Large 3's MoE architecture enables us to scale AI systems to greater performance and efficiency while dramatically lowering energy and compute demands."
Overcoming MoE Scaling Bottlenecks With Extreme Codesign

Frontier MoE models are simply too large and complex to be deployed on a single GPU. To run them, experts must be distributed across multiple GPUs, a technique called expert parallelism. Even on powerful platforms such as the NVIDIA H200, deploying MoE models involves bottlenecks such as:
Memory limitations: For each token, GPUs must dynamically load the selected experts' parameters from high-bandwidth memory, placing frequent, heavy pressure on memory bandwidth.
Latency: Experts must execute a near-instantaneous all-to-all communication pattern to exchange information and form a final, complete answer. However, on H200, spreading experts across more than eight GPUs requires them to communicate over higher-latency scale-out networking, limiting the benefits of expert parallelism.
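To see why this all-to-all exchange is so communication-heavy, the toy simulation below counts how many token copies cross GPU boundaries per MoE layer. The GPU count, expert count and routing are hypothetical and uniformly random, ignoring real-world details such as load balancing, shared experts and communication-compute overlap.

```python
import random
from collections import defaultdict

NUM_GPUS = 8          # hypothetical expert-parallel group size
EXPERTS_PER_GPU = 32  # hypothetical: 256 routed experts spread over 8 GPUs
TOP_K = 8             # experts activated per token
NUM_TOKENS = 1000

def owner(expert_id):
    """Map an expert to the GPU that holds its weights (static sharding)."""
    return expert_id // EXPERTS_PER_GPU

def simulate_all_to_all():
    """Count how many token copies each GPU must send to every other GPU."""
    traffic = defaultdict(int)  # (src_gpu, dst_gpu) -> token copies sent
    for token in range(NUM_TOKENS):
        src_gpu = token % NUM_GPUS  # GPU where this token's activations live
        chosen = random.sample(range(NUM_GPUS * EXPERTS_PER_GPU), TOP_K)
        for expert_id in chosen:
            dst_gpu = owner(expert_id)
            if dst_gpu != src_gpu:
                traffic[(src_gpu, dst_gpu)] += 1
    return traffic

if __name__ == "__main__":
    traffic = simulate_all_to_all()
    print(f"Cross-GPU token copies per MoE layer: {sum(traffic.values())}")
    print(f"Distinct GPU pairs exchanging data: {len(traffic)}")
```

Even in this small setting, nearly every GPU ends up exchanging tokens with every other GPU on every MoE layer, which is why the latency and bandwidth of the interconnect carrying that exchange dominate end-to-end performance.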
The solution: extreme codesign.
NVIDIA GB200 NVL72 is a rack-scale system with 72 NVIDIA Blackwell GPUs working together as if they were one, delivering 1.4 exaflops of AI performance and 30 TB of fast shared memory. The 72 GPUs are connected through NVLink Switch into a single, massive NVLink interconnect fabric, letting every GPU communicate with every other GPU across 130 TB/s of NVLink connectivity.
MoE models can tap into this design to scale expert parallelism far beyond previous limits - distributing the experts across a much larger set of up to 72 GPUs.
This architectural approach directly resolves MoE scaling bottlenecks by:
Reducing the number of experts per GPU: Distributing experts across up to 72 GPUs means each GPU holds far fewer experts, minimizing parameter-loading pressure on its high-bandwidth memory. Fewer experts per GPU also frees up memory, allowing each GPU to serve more concurrent users and support longer input lengths (a rough illustration follows this list).
Accelerating expert communication: Experts spread across GPUs can exchange tokens over the NVLink fabric's high-bandwidth, low-latency links instead of slower scale-out networking, speeding up the all-to-all exchange needed to assemble each token's final output.
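As a rough, back-of-the-envelope illustration of the first point above, the snippet below compares how many experts - and how many gigabytes of expert weights - each GPU must hold when a hypothetical 256-expert model is sharded across 8 versus 72 GPUs. The expert count, parameters per expert and FP8 precision are assumptions chosen for illustration, not measurements of any specific model.

```python
# Back-of-the-envelope: experts and expert-weight memory per GPU
# under two expert-parallel group sizes. All numbers are illustrative.

TOTAL_EXPERTS = 256            # hypothetical routed-expert count
PARAMS_PER_EXPERT = 2.5e9      # hypothetical parameters per expert
BYTES_PER_PARAM = 1            # assumes FP8 weights, 1 byte per parameter

def per_gpu_footprint(num_gpus):
    experts_per_gpu = TOTAL_EXPERTS / num_gpus
    gb_per_gpu = experts_per_gpu * PARAMS_PER_EXPERT * BYTES_PER_PARAM / 1e9
    return experts_per_gpu, gb_per_gpu

for num_gpus in (8, 72):
    experts, gb = per_gpu_footprint(num_gpus)
    print(f"{num_gpus:>2} GPUs: {experts:5.1f} experts/GPU, ~{gb:6.1f} GB of expert weights/GPU")
```

The memory freed by holding fewer experts is what allows each GPU to keep larger KV caches, and therefore serve more concurrent users and longer contexts, as noted above.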