
The Hao AI Lab research team at the University of California San Diego - at the forefront of AI model innovation - recently received an NVIDIA DGX B200 system to advance its work on large language model inference.
Many LLM inference platforms in production today, such as NVIDIA Dynamo, use research concepts that originated in the Hao AI Lab, including DistServe.
How Is Hao AI Lab Using the DGX B200?
[Image: Members of the Hao AI Lab standing with the NVIDIA DGX B200 system.]
With the DGX B200 now fully accessible to the Hao AI Lab and broader UC San Diego community at the School of Computing, Information and Data Sciences' San Diego Supercomputer Center, the research opportunities are boundless.
"DGX B200 is one of the most powerful AI systems from NVIDIA to date, which means that its performance is among the best in the world," said Hao Zhang, assistant professor in the Halıcıoğlu Data Science Institute and the Department of Computer Science and Engineering at UC San Diego. "It enables us to prototype and experiment much faster than using previous-generation hardware."
Two Hao AI Lab projects the DGX B200 is accelerating are FastVideo and Lmgame-Bench.
FastVideo focuses on training a family of video generation models to produce a five-second video based on a given text prompt - in just five seconds.
The research phase of FastVideo taps into NVIDIA H200 GPUs in addition to the DGX B200 system.
Lmgame-Bench is a benchmarking suite that puts LLMs to the test using popular online games, including Tetris and Super Mario Bros. Users can test one model at a time or pit two models against each other to measure their performance.
[Image: The illustrated workflow of Hao AI Lab's Lmgame-Bench project.]
Other ongoing projects at the Hao AI Lab explore new ways to achieve low-latency LLM serving, pushing large language models toward real-time responsiveness.
"Our current research uses the DGX B200 to explore the next frontier of low-latency LLM serving on the awesome hardware specs the system gives us," said Junda Chen, a doctoral candidate in computer science at UC San Diego.
How DistServe Influenced Disaggregated Serving
Disaggregated inference is a way to ensure large-scale LLM-serving engines can achieve optimal aggregate system throughput while maintaining acceptably low latency for user requests.
The benefit of disaggregated inference lies in optimizing what DistServe calls goodput instead of throughput in the LLM-serving engine.
Here's the difference:
Throughput is measured by the number of tokens per second that the entire system can generate. Higher throughput means lower cost to generate each token to serve the user. For a long time, throughput was the only metric used by LLM-serving engines to measure their performance against one another.
While throughput measures the aggregate performance of the system, it doesn't directly correlate to the latency that a user perceives. If a user demands lower latency to generate the tokens, the system has to sacrifice throughput.
This natural trade-off between throughput and latency is what led the DistServe team to propose a new metric, goodput: the measure of throughput achieved while satisfying user-specified latency objectives, usually called service-level objectives (SLOs). In other words, goodput represents the overall health of a system while satisfying user experience.
DistServe shows that goodput is a much better metric for LLM-serving systems, as it factors in both cost and service quality. Optimizing for goodput yields a system that is both efficient and responsive.
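The distinction can be made concrete with a small sketch. The snippet below counts only tokens from requests that met two common latency targets - time to first token (TTFT) and time per output token (TPOT) - toward goodput; the SLO values and request numbers are illustrative, not drawn from DistServe.

```python
from dataclasses import dataclass

@dataclass
class Request:
    ttft_ms: float   # time to first token (prefill latency)
    tpot_ms: float   # average time per output token (decode latency)
    tokens: int      # output tokens generated

def throughput(requests, window_s):
    """Raw tokens per second, ignoring latency targets."""
    return sum(r.tokens for r in requests) / window_s

def goodput(requests, window_s, ttft_slo_ms=200.0, tpot_slo_ms=50.0):
    """Tokens per second, counting only requests that met both SLOs."""
    good = sum(r.tokens for r in requests
               if r.ttft_ms <= ttft_slo_ms and r.tpot_ms <= tpot_slo_ms)
    return good / window_s

reqs = [
    Request(ttft_ms=150, tpot_ms=40, tokens=100),  # meets both SLOs
    Request(ttft_ms=500, tpot_ms=30, tokens=100),  # violates TTFT SLO
    Request(ttft_ms=180, tpot_ms=90, tokens=100),  # violates TPOT SLO
]
print(throughput(reqs, window_s=10))  # 30.0 tokens/s served
print(goodput(reqs, window_s=10))     # 10.0 tokens/s actually "good"
```

All three requests contribute to throughput, but two of them delivered a poor user experience, so only a third of the system's output counts as goodput.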
How Can Developers Achieve Optimal Goodput?
When a user makes a request in an LLM system, the system takes the user input and generates the first token, a step known as prefill. Then, the system generates numerous output tokens, one after another, each predicted from the tokens generated so far. This process is known as decode.
https://blogs.nvidia.com/wp-content/uploads/2025/12/distserve.mp4
Prefill and decode have historically run on the same GPU, but the researchers behind DistServe found that splitting them onto different GPUs maximizes goodput.
"Previously, if you put these two jobs on a GPU, they would compete with each other for resources, which could make it slow from a user perspective," Chen said. "Now, if I split the jobs onto two different sets of GPUs - one doing prefill, which is compute intensive, and the other doing decode, which is more memory intensive - we can fundamentally eliminate the interference between the two jobs, making both jobs run faster."
This process is called prefill/decode disaggregation: separating prefill from decode to achieve greater goodput.
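As a toy illustration of that split (not DistServe's or Dynamo's actual code), the sketch below routes requests through two separate worker pools connected by queues: one stands in for the compute-bound prefill GPUs, the other for the memory-bound decode GPUs, so neither phase ever contends with the other for the same worker.

```python
from queue import Queue
from threading import Thread

# Toy prefill/decode disaggregation: two queues connect a prefill
# worker pool to a separate decode worker pool, mimicking the two
# GPU groups described above. All names here are illustrative.
prefill_q: Queue = Queue()   # incoming prompts
decode_q: Queue = Queue()    # prompts whose first token is ready
results: Queue = Queue()     # finished responses

def prefill_worker() -> None:
    while (prompt := prefill_q.get()) is not None:
        first_token = prompt.split()[0]        # stand-in for the prefill pass
        decode_q.put((prompt, first_token))

def decode_worker() -> None:
    while (item := decode_q.get()) is not None:
        _prompt, first = item
        # Stand-in for the autoregressive decode loop.
        results.put(f"{first} ... <eos>")

pf = Thread(target=prefill_worker)
dc = Thread(target=decode_worker)
pf.start()
dc.start()

for p in ["translate this sentence", "summarize that article"]:
    prefill_q.put(p)
prefill_q.put(None)   # drain and stop the prefill pool...
pf.join()
decode_q.put(None)    # ...then the decode pool
dc.join()

print(results.qsize())  # 2 completed requests
```

In a real serving engine the hand-off between the two pools also transfers the KV cache produced by prefill, which is the main engineering cost of disaggregation.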
Increasing goodput with the disaggregated inference method enables continuous scaling of workloads without compromising low latency or the quality of model responses.
NVIDIA Dynamo - an open-source framework designed to accelerate and scale generative AI models at the highest efficiency levels with the lowest cost - enables scaling disaggregated inference.
In addition to these projects, cross-departmental collaborations in fields such as healthcare and biology are underway at UC San Diego, applying the NVIDIA DGX B200 to an array of research projects as researchers continue exploring how AI platforms can accelerate innovation.
Learn more about the NVIDIA DGX B200 system.