
UC San Diego Lab Advances Generative AI Research With NVIDIA DGX B200 System

17/12/2025

The Hao AI Lab research team at the University of California San Diego - at the forefront of pioneering AI model innovation - recently received an NVIDIA DGX B200 system to elevate their critical work in large language model inference.

Many LLM inference platforms in production today, such as NVIDIA Dynamo, use research concepts that originated in the Hao AI Lab, including DistServe.

How Is Hao AI Lab Using the DGX B200?

[Image: Members of the Hao AI Lab standing with the NVIDIA DGX B200 system.]

With the DGX B200 now fully accessible to the Hao AI Lab and the broader UC San Diego community at the School of Computing, Information and Data Sciences' San Diego Supercomputer Center, the research opportunities are boundless.

"DGX B200 is one of the most powerful AI systems from NVIDIA to date, which means that its performance is among the best in the world," said Hao Zhang, assistant professor in the Halıcıoğlu Data Science Institute and the Department of Computer Science and Engineering at UC San Diego. "It enables us to prototype and experiment much faster than using previous-generation hardware."

Two Hao AI Lab projects the DGX B200 is accelerating are FastVideo and Lmgame-Bench.

FastVideo focuses on training a family of video generation models to produce a five-second video based on a given text prompt - in just five seconds.

The research phase of FastVideo taps into NVIDIA H200 GPUs in addition to the DGX B200 system.

Lmgame-Bench is a benchmarking suite that puts LLMs to the test using popular video games, including Tetris and Super Mario Bros. Users can test one model at a time or pit two models against each other to measure their performance.
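
As a purely illustrative sketch of how such a harness could work, the Python below uses hypothetical TetrisEnv and llm_player placeholders (they are not Lmgame-Bench's actual API) to score a single model on a game or to pit two models against each other on the same seed.

import random

class TetrisEnv:
    """Toy stand-in for a game environment that scores an agent's moves."""
    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)

    def play(self, choose_move) -> int:
        score = 0
        for step in range(100):
            board_state = f"step {step}"            # a real harness would render the board as text
            move = choose_move(board_state)         # the model picks a move from the text state
            score += self.rng.randint(0, 3) if move else 0
        return score

def llm_player(model_name: str):
    """Placeholder for a function that queries an LLM for its next move."""
    def choose_move(board_state: str) -> str:
        return "rotate"                             # a real harness would call model_name here
    return choose_move

# Evaluate one model, or compare two models on the same game seed.
for name in ("model-a", "model-b"):
    print(name, TetrisEnv(seed=42).play(llm_player(name)))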

[Image: The illustrated workflow of Hao AI Lab's Lmgame-Bench project.]

Other ongoing projects at Hao AI Lab explore new ways to achieve low-latency LLM serving, pushing large language models toward real-time responsiveness.

"Our current research uses the DGX B200 to explore the next frontier of low-latency LLM serving on the awesome hardware specs the system gives us," said Junda Chen, a doctoral candidate in computer science at UC San Diego.

How DistServe Influenced Disaggregated Serving

Disaggregated inference is a way to ensure large-scale LLM-serving engines can achieve the optimal aggregate system throughput while maintaining acceptably low latency for user requests.

The benefit of disaggregated inference lies in optimizing what DistServe calls goodput instead of throughput in the LLM-serving engine.

Here's the difference:

Throughput is measured by the number of tokens per second that the entire system can generate. Higher throughput means a lower cost per token served to the user. For a long time, throughput was the only metric LLM-serving engines used to measure their performance against one another.

While throughput measures the aggregate performance of the system, it doesn't directly correlate to the latency that a user perceives. If a user demands lower latency to generate the tokens, the system has to sacrifice throughput.

This natural trade-off between throughput and latency is what led the DistServe team to propose a new metric, goodput: the throughput achieved while satisfying user-specified latency objectives, usually called service-level objectives (SLOs). In other words, goodput represents the overall health of a system while preserving the user experience.

DistServe shows that goodput is a much better metric for LLM-serving systems, as it factors in both cost and service quality. Goodput leads to optimal efficiency and ideal output from a model.
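
As a concrete illustration, the Python sketch below computes both metrics from per-request measurements; the SLO thresholds, field names and the simplified token-based definition of goodput are illustrative assumptions, not DistServe's exact formulation.

from dataclasses import dataclass

@dataclass
class RequestStats:
    ttft_s: float        # time to first token (prefill latency), in seconds
    tpot_s: float        # average time per output token (decode latency), in seconds
    output_tokens: int   # number of tokens generated for this request

def throughput(reqs: list[RequestStats], wall_clock_s: float) -> float:
    """Aggregate tokens generated per second, regardless of latency."""
    return sum(r.output_tokens for r in reqs) / wall_clock_s

def goodput(reqs: list[RequestStats], wall_clock_s: float,
            ttft_slo_s: float = 0.5, tpot_slo_s: float = 0.05) -> float:
    """Tokens per second, counting only requests that meet both latency SLOs."""
    ok = [r for r in reqs if r.ttft_s <= ttft_slo_s and r.tpot_s <= tpot_slo_s]
    return sum(r.output_tokens for r in ok) / wall_clock_s

# One request misses its TTFT SLO, so it counts toward throughput but not goodput.
reqs = [RequestStats(0.3, 0.04, 200), RequestStats(1.2, 0.04, 200)]
print(throughput(reqs, wall_clock_s=10.0))  # 40.0 tokens/s
print(goodput(reqs, wall_clock_s=10.0))     # 20.0 tokens/s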

How Can Developers Achieve Optimal Goodput?

When a user makes a request in an LLM system, the system processes the user input and generates the first token, a step known as prefill. The system then produces the remaining output tokens one after another, each new token conditioned on the tokens generated so far. This process is known as decode.
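
To make the two phases concrete, here is a minimal, self-contained sketch of autoregressive generation; toy_forward is a stand-in for a real transformer forward pass, not any particular library's API. The single pass over the whole prompt is the prefill, and each subsequent one-token step is the decode.

import random

def toy_forward(tokens: list[int], kv_cache: list[int]) -> tuple[list[float], list[int]]:
    """Stand-in for a transformer forward pass: returns fake next-token logits
    and an updated KV cache. Purely illustrative, not a real model."""
    kv_cache = kv_cache + tokens              # pretend the cache stores all processed tokens
    logits = [random.random() for _ in range(100)]
    return logits, kv_cache

def generate(prompt_tokens: list[int], max_new_tokens: int = 16) -> list[int]:
    # Prefill: one compute-heavy pass over the entire prompt builds the KV cache
    # and yields the first output token.
    logits, kv_cache = toy_forward(prompt_tokens, kv_cache=[])
    next_tok = logits.index(max(logits))
    output = [next_tok]

    # Decode: one memory-bound step per new token, each conditioned on the
    # growing KV cache (everything generated so far).
    for _ in range(max_new_tokens - 1):
        logits, kv_cache = toy_forward([next_tok], kv_cache)
        next_tok = logits.index(max(logits))
        output.append(next_tok)
    return output

print(generate(list(range(32))))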

https://blogs.nvidia.com/wp-content/uploads/2025/12/distserve.mp4

Prefill and decode have historically run on the same GPU, but the researchers behind DistServe found that splitting them onto different GPUs maximizes goodput.

"Previously, if you put these two jobs on a GPU, they would compete with each other for resources, which could make it slow from a user perspective," Chen said. "Now, if I split the jobs onto two different sets of GPUs - one doing prefill, which is compute intensive, and the other doing decode, which is more memory intensive - we can fundamentally eliminate the interference between the two jobs, making both jobs run faster."

This process is called prefill/decode disaggregation: separating prefill from decode to achieve greater goodput.
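
A minimal sketch of the pattern follows, assuming a prefill worker pool and a decode worker pool connected by a handoff queue; the worker functions and the dictionary standing in for a KV cache are illustrative placeholders, not the DistServe or NVIDIA Dynamo implementation.

import queue
import threading

def prefill_worker(in_q: queue.Queue, handoff_q: queue.Queue) -> None:
    """Runs on the prefill GPU pool: compute-intensive prompt processing."""
    while True:
        req_id, prompt = in_q.get()
        kv_cache = {"tokens": list(prompt)}      # stand-in for the real KV cache
        first_token = len(prompt) % 100          # stand-in for the first sampled token
        handoff_q.put((req_id, kv_cache, first_token))

def decode_worker(handoff_q: queue.Queue, out_q: queue.Queue, max_new_tokens: int = 8) -> None:
    """Runs on the decode GPU pool: memory-intensive token-by-token generation."""
    while True:
        req_id, kv_cache, tok = handoff_q.get()
        output = [tok]
        for _ in range(max_new_tokens - 1):
            tok = (tok + 1) % 100                # stand-in for one decode step
            kv_cache["tokens"].append(tok)
            output.append(tok)
        out_q.put((req_id, output))

requests_q, handoff_q, results_q = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=prefill_worker, args=(requests_q, handoff_q), daemon=True).start()
threading.Thread(target=decode_worker, args=(handoff_q, results_q), daemon=True).start()

requests_q.put((0, [1, 2, 3]))
print(results_q.get())

In a real deployment, the handoff is a KV-cache transfer over a fast interconnect or the network, and the two pools can be sized and scaled independently to hit prefill and decode latency targets.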

Increasing goodput with disaggregated inference enables the continuous scaling of workloads without compromising on low latency or high-quality model responses.

NVIDIA Dynamo - an open-source framework designed to accelerate and scale generative AI models at the highest efficiency levels with the lowest cost - enables scaling disaggregated inference.

In addition to these projects, cross-departmental collaborations, such as in healthcare and biology, are underway at UC San Diego to further optimize an array of research projects using the NVIDIA DGX B200, as researchers continue exploring how AI platforms can accelerate innovation.

Learn more about the NVIDIA DGX B200 system.
LINK: https://blogs.nvidia.com/blog/ucsd-generative-ai-research-dgx-b200/...