
The Hao AI Lab research team at the University of California San Diego - at the forefront of pioneering AI model innovation - recently received an NVIDIA DGX B200 system to elevate their critical work in large language model inference.
Many LLM inference platforms in production today, such as NVIDIA Dynamo, use research concepts that originated in the Hao AI Lab, including DistServe.
How Is Hao AI Lab Using the DGX B200?

Members of the Hao AI Lab standing with the NVIDIA DGX B200 system.

With the DGX B200 now fully accessible to the Hao AI Lab and the broader UC San Diego community at the School of Computing, Information and Data Sciences' San Diego Supercomputer Center, the research opportunities are boundless.
"DGX B200 is one of the most powerful AI systems from NVIDIA to date, which means that its performance is among the best in the world," said Hao Zhang, assistant professor in the Halıcıoğlu Data Science Institute and the department of computer science and engineering at UC San Diego. "It enables us to prototype and experiment much faster than using previous-generation hardware."
Two Hao AI Lab projects the DGX B200 is accelerating are FastVideo and Lmgame-Bench.
FastVideo focuses on training a family of video generation models to produce a five-second video based on a given text prompt - in just five seconds.
The research phase of FastVideo taps into NVIDIA H200 GPUs in addition to the DGX B200 system.
Lmgame-Bench is a benchmarking suite that puts LLMs to the test using popular games, including Tetris and Super Mario Bros. Users can test one model at a time or pit two models against each other to measure their performance.
The illustrated workflow of Hao AI Lab's Lmgame-Bench project.

Other ongoing projects at the Hao AI Lab explore new ways to achieve low-latency LLM serving, pushing large language models toward real-time responsiveness.
"Our current research uses the DGX B200 to explore the next frontier of low-latency LLM serving on the awesome hardware specs the system gives us," said Junda Chen, a doctoral candidate in computer science at UC San Diego.
How DistServe Influenced Disaggregated Serving

Disaggregated inference is a way to ensure large-scale LLM-serving engines can achieve optimal aggregate system throughput while maintaining acceptably low latency for user requests.
The benefit of disaggregated inference lies in optimizing what DistServe calls goodput instead of throughput in the LLM-serving engine.
Here's the difference:
Throughput is measured by the number of tokens per second that the entire system can generate. Higher throughput means lower cost to generate each token to serve the user. For a long time, throughput was the only metric used by LLM-serving engines to measure their performance against one another.
While throughput measures the aggregate performance of the system, it doesn't directly correlate to the latency that a user perceives. If a user demands lower latency to generate the tokens, the system has to sacrifice throughput.
This natural trade-off between throughput and latency is what led the DistServe team to propose a new metric, goodput: the measure of throughput while satisfying user-specified latency objectives, usually called service-level objectives. In other words, goodput represents the overall health of a system while satisfying user experience.
DistServe shows that goodput is a much better metric for LLM-serving systems, as it factors in both cost and service quality. Goodput leads to optimal efficiency and ideal output from a model.
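As a rough illustration of the difference, goodput can be computed by counting only the tokens from requests that met their latency targets. The request fields and SLO numbers below are made up for the sketch, not taken from DistServe:

```python
from dataclasses import dataclass

@dataclass
class Request:
    tokens_generated: int
    ttft: float   # time to first token, seconds (prefill latency)
    tpot: float   # average time per output token, seconds (decode latency)

# Illustrative service-level objectives
TTFT_SLO = 0.5    # first token within 500 ms
TPOT_SLO = 0.05   # each subsequent token within 50 ms

def throughput(requests, window_seconds):
    """Total tokens per second, regardless of per-request latency."""
    return sum(r.tokens_generated for r in requests) / window_seconds

def goodput(requests, window_seconds):
    """Tokens per second, counting only requests that met both SLOs."""
    ok = [r for r in requests if r.ttft <= TTFT_SLO and r.tpot <= TPOT_SLO]
    return sum(r.tokens_generated for r in ok) / window_seconds

reqs = [
    Request(tokens_generated=100, ttft=0.3, tpot=0.04),  # meets both SLOs
    Request(tokens_generated=200, ttft=0.9, tpot=0.04),  # misses the TTFT SLO
]
print(throughput(reqs, 10.0))  # 30.0 tokens/s
print(goodput(reqs, 10.0))     # 10.0 tokens/s: only the compliant request counts
```

The second request generates twice as many tokens, so it dominates throughput, yet it contributes nothing to goodput because the user waited too long for the first token.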
How Can Developers Achieve Optimal Goodput?

When a user makes a request to an LLM system, the system processes the user input and generates the first token, a phase known as prefill. Then, the system generates output tokens one after another, predicting each new token from the tokens produced so far. This phase is known as decode.
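The two phases can be sketched with a toy stand-in for the model. The token arithmetic below is purely illustrative; a real engine runs a transformer forward pass and the KV cache holds attention state, not raw tokens:

```python
def prefill(prompt_tokens):
    # Compute-heavy phase: process the full prompt in one pass,
    # producing the KV cache and the first output token.
    kv_cache = list(prompt_tokens)        # stand-in for attention KV state
    first_token = max(prompt_tokens) + 1  # stand-in for the model's prediction
    return kv_cache, first_token

def decode(kv_cache, token, max_new_tokens):
    # Memory-heavy phase: generate tokens one at a time, each step
    # reading and extending the growing KV cache.
    out = [token]
    for _ in range(max_new_tokens - 1):
        kv_cache.append(token)
        token = token + 1                 # stand-in for next-token prediction
        out.append(token)
    return out

kv, first = prefill([3, 1, 4])
out = decode(kv, first, 4)
print(out)  # [5, 6, 7, 8]
```

Even in this toy form, the asymmetry is visible: prefill touches the whole prompt at once, while decode is a sequential loop over the cache, one token per step.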
https://blogs.nvidia.com/wp-content/uploads/2025/12/distserve.mp4
Prefill and decode have historically run on the same GPU, but the researchers behind DistServe found that splitting them onto different GPUs maximizes goodput.
"Previously, if you put these two jobs on a GPU, they would compete with each other for resources, which could make it slow from a user perspective," Chen said. "Now, if I split the jobs onto two different sets of GPUs - one doing prefill, which is compute intensive, and the other doing decode, which is more memory intensive - we can fundamentally eliminate the interference between the two jobs, making both jobs run faster."
This process is called prefill/decode disaggregation: separating prefill from decode to achieve greater goodput.
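A minimal sketch of the idea, not the DistServe implementation, uses two worker threads as stand-ins for separate GPU pools: prefill workers hand finished requests (with their KV state) to a separate decode pool, so the two phases never contend for the same device. All names and the token arithmetic are invented for illustration:

```python
import queue
import threading

prefill_q, decode_q, results = queue.Queue(), queue.Queue(), queue.Queue()

def prefill_worker():
    while True:
        req = prefill_q.get()
        if req is None:          # shutdown signal
            break
        # Compute-bound phase: process the full prompt, build KV state,
        # then transfer the request to the decode pool.
        kv_state = sum(req["prompt"])  # stand-in for a real KV cache
        decode_q.put({"id": req["id"], "kv": kv_state})

def decode_worker():
    while True:
        req = decode_q.get()
        if req is None:          # shutdown signal
            break
        # Memory-bound phase: generate tokens from the transferred KV state.
        results.put((req["id"], [req["kv"] + i for i in range(3)]))

threads = [threading.Thread(target=prefill_worker),
           threading.Thread(target=decode_worker)]
for t in threads:
    t.start()

prefill_q.put({"id": 0, "prompt": [1, 2, 3]})
res = results.get()
print(res)  # (0, [6, 7, 8])

prefill_q.put(None)  # stop the prefill worker
decode_q.put(None)   # stop the decode worker
for t in threads:
    t.join()
```

Because each pool runs only one kind of work, prefill batches can be sized for compute utilization and decode batches for memory bandwidth, which is the interference elimination Chen describes.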
Increasing goodput and using the disaggregated inference method enables the continuous scaling of workloads without compromising on low-latency or high-quality model responses.
NVIDIA Dynamo - an open-source framework designed to accelerate and scale generative AI models at the highest efficiency levels with the lowest cost - enables scaling disaggregated inference.
In addition to these projects, cross-departmental collaborations, such as in healthcare and biology, are underway at UC San Diego to further optimize an array of research projects using the NVIDIA DGX B200, as researchers continue exploring how AI platforms can accelerate innovation.
Learn more about the NVIDIA DGX B200 system.
North America Stories
17/12/2025