
In the rolling hills of Berkeley, California, an AI agent is supporting high-stakes physics experiments at the Advanced Light Source (ALS) particle accelerator.
Researchers at Lawrence Berkeley National Laboratory's ALS facility recently deployed the Accelerator Assistant, a large language model (LLM)-driven system designed to keep X-ray research on track.
The Accelerator Assistant - powered by an NVIDIA H100 GPU harnessing CUDA for accelerated inference - taps into institutional knowledge from the ALS support team and routes requests through Gemini, Claude or ChatGPT. It writes Python and solves problems, either autonomously or with a human in the loop.
This is no small task. The ALS particle accelerator sends electrons traveling near the speed of light in a 200-yard circular path, emitting ultraviolet and X-ray light, which is directed through 40 beamlines for 1,700 scientific experiments per year. Scientists worldwide use this process to study materials science, biology, chemistry, physics and environmental science.
At the ALS, beam interruptions can last minutes, hours or days, depending on their complexity, halting concurrent scientific experiments in progress. And much can go wrong: the ALS control system has more than 230,000 process variables.
"It's really important for such a machine to be up, and when we go down, there are 40 beamlines that do X-ray experiments, and they are waiting," said Thorsten Hellert, staff scientist from the Accelerator Technology and Applied Physics Division at Berkeley Lab and lead author of a research paper on the groundbreaking work.
Until now, facility staff troubleshooting issues have had to quickly identify the affected areas, retrieve data and gather the right personnel for analysis, all under intense time pressure to get the system back up and running.
The novel approach offers a blueprint for securely and transparently applying large language model-driven systems to particle accelerators, nuclear and fusion reactor facilities, and other complex scientific infrastructures, said Hellert.
The research team demonstrated that the Accelerator Assistant can autonomously prepare and run a multistage physics experiment, cutting setup time and effort by roughly 100x.
Applying Context Engineering Prompts to Accelerator Assistant

The ALS operators interact with the system through either a command line interface or Open WebUI, which enables interaction with various LLMs and is accessible from control room stations, as well as remotely. Under the hood, the system uses Osprey, a framework developed at Berkeley Lab to apply agent-based AI safely in complex control systems.
Each user is authenticated, and the framework maintains personalized context and memory across sessions; multiple sessions can be managed simultaneously, allowing users to organize distinct tasks or experiments into separate threads. These inputs are routed through the Accelerator Assistant, which connects to the database of more than 230,000 process variables, a historical archive service and Jupyter Notebook-based execution environments.
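A rough sketch of that threading model might look like the following; the class and field names here are illustrative only and are not part of Osprey's actual API.

```python
# Illustrative sketch: per-user, per-session context kept in separate threads.
# Names are hypothetical, not the Osprey framework's real interfaces.
from dataclasses import dataclass, field


@dataclass
class SessionContext:
    user: str                                     # authenticated user the thread belongs to
    task: str                                     # natural-language description of the task
    history: list = field(default_factory=list)   # prior turns in this thread
    memory: dict = field(default_factory=dict)    # facts remembered for this user across sessions


# Each distinct task or experiment lives in its own thread,
# so context from one does not leak into another.
sessions = {
    ("operator_a", "thread-1"): SessionContext("operator_a", "Check storage ring vacuum trends"),
    ("operator_a", "thread-2"): SessionContext("operator_a", "Prepare a beam-size measurement"),
}
```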
"We try to engineer the context of every language model call with whatever prior knowledge we have from this execution up to this point," said Hellert.
Inference is done either locally - using Ollama, an open-source tool for running LLMs on a personal computer, on an H100 GPU node located within the control room network - or externally with the CBorg gateway, a lab-managed interface that routes requests to external tools such as ChatGPT, Claude or Gemini.
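As a hedged illustration of that hybrid routing, the snippet below sends a request either to a local Ollama server or to an OpenAI-compatible gateway. The gateway URL, model names and routing rule are placeholders, not the actual CBorg or ALS configuration.

```python
# Illustrative sketch of the hybrid inference pattern described above.
import ollama                      # pip install ollama; talks to a local Ollama server
from openai import OpenAI          # pip install openai; many gateways expose this API


def ask(prompt: str, sensitive: bool) -> str:
    if sensitive:
        # Keep machine-specific data on premises: a local model served by Ollama.
        reply = ollama.chat(
            model="llama3.1:70b",  # placeholder local model
            messages=[{"role": "user", "content": prompt}],
        )
        return reply["message"]["content"]
    # Otherwise route through a lab-managed, OpenAI-compatible gateway
    # that forwards requests to external frontier models.
    client = OpenAI(base_url="https://gateway.example.lab/v1", api_key="...")
    response = client.chat.completions.create(
        model="gpt-4o",            # placeholder external model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```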
The hybrid architecture balances secure, low-latency, on-premises inference with access to the latest foundation models. Integration with EPICS (Experimental Physics and Industrial Control System) enables direct interaction with accelerator hardware under operator-standard safety constraints. EPICS is a distributed control system used in large-scale scientific facilities such as particle accelerators, and engineers can write Python code in Jupyter notebooks that communicates with it.
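The snippet below is a minimal sketch of the kind of Python such a notebook might run against EPICS, using the widely used pyepics client; the process variable names are placeholders rather than real ALS channel addresses.

```python
# Minimal sketch of notebook code talking to EPICS via channel access.
from epics import caget, caput  # pip install pyepics

# Read a process variable (PV name is a placeholder).
current = caget("SR:BeamCurrent")
print(f"Stored beam current: {current} mA")

# Writes go through the same channel-access layer; in the Accelerator
# Assistant they are subject to the operator-standard safety constraints
# mentioned above.
caput("SR:Corrector1:Setpoint", 0.05)
```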
Conversational input is turned into a clear, non-redundant natural language description of the task's objectives. External knowledge - such as personalized memory tied to users, documentation and accelerator databases - is integrated to assist with terminology and context.
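One plausible way to assemble such an engineered prompt from those pieces is sketched below; the function, inputs and PV name are hypothetical and do not reflect the Accelerator Assistant's actual implementation.

```python
# Illustrative only: stitching task description, user memory, documentation
# snippets and database hits into one engineered prompt.
def build_prompt(task: str, memory: list[str], docs: list[str], pv_hits: list[str]) -> str:
    sections = [
        "You are an assistant for an accelerator control room.",
        f"Task: {task}",
        "Facts remembered for this user:\n- " + "\n- ".join(memory),
        "Documentation excerpts:\n- " + "\n- ".join(docs),
        "Candidate process variables:\n- " + "\n- ".join(pv_hits),
    ]
    return "\n\n".join(sections)


prompt = build_prompt(
    task="Plot the last 24 hours of storage-ring vacuum pressure.",
    memory=["User prefers matplotlib figures with timestamps in local time."],
    docs=["The archiver service exposes historical PV data over HTTP."],
    pv_hits=["SR:VacuumGauge:Pressure"],  # placeholder PV name
)
```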
"It's a large facility with a lot of specialized expertise," said Hellert. "Much of that knowledge is scattered across teams, so even finding something simple - like the address of a temperature sensor in one part of the machine - can take time."
Tapping Accelerator Assistant to Aid Engineers, Fusion Energy Development

Using the Accelerator Assistant, engineers can start with a simple prompt describing their goal. Behind the scenes, the system draws on carefully prepared examples and keywords from accelerator operations to guide the LLM's reasoning.
"Each prompt is engineered with relevant context from our facility, so the model already knows what kind of task it's dealing with," said Hellert.
"Each agent is an expert in that field," he said.
Once the task is defined, the agent brings together its specialized capabilities - such as finding process variables or navigating the control system - and can automatically generate and run Python scripts to analyze data, visualize results or interact safely with the accelerator itself.
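A minimal sketch of the human-in-the-loop pattern might look like the following, where generated code is shown to the operator and executed only after explicit approval. This is illustrative and is not the Accelerator Assistant's actual code.

```python
# Illustrative approval gate for agent-generated analysis scripts.
def run_with_approval(generated_code: str) -> None:
    print("Proposed script:\n")
    print(generated_code)
    if input("\nRun this script? [y/N] ").strip().lower() == "y":
        # In practice this would run inside a sandboxed notebook kernel.
        exec(generated_code)
    else:
        print("Skipped: operator did not approve.")


run_with_approval("print('hello from a generated analysis script')")
```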
"This is something that can save you serious time - in the paper, we say two orders of magnitude for such a prompt," said Hellert.
Looking ahead, Hellert aims to have ALS engineers put together a wiki documenting the many processes that support the experiments. These documents could help the agents run the facilities autonomously - with a human in the loop to approve the course of action.
On these high-stakes scientific experiments, even if it's just