
A research paper released today describes ways generative AI can assist one of the most complex engineering efforts: designing semiconductors.
The work demonstrates how companies in highly specialized fields can train large language models (LLMs) on their internal data to build assistants that increase productivity.
Few pursuits are as challenging as semiconductor design. Under a microscope, a state-of-the-art chip like an NVIDIA H100 Tensor Core GPU (above) looks like a well-planned metropolis, built with tens of billions of transistors, connected on streets 10,000x thinner than a human hair.
Multiple engineering teams coordinate for as long as two years to construct one of these digital megacities.
Some groups define the chip's overall architecture, some craft and place a variety of ultra-small circuits, and others test their work. Each job requires specialized methods, software programs and computer languages.
A Broad Vision for LLMs
"I believe over time large language models will help all the processes, across the board," said Mark Ren, an NVIDIA Research director and lead author on the paper.
Bill Dally, NVIDIA's chief scientist, announced the paper today in a keynote at the International Conference on Computer-Aided Design, an annual gathering of hundreds of engineers working in the field called electronic design automation, or EDA.
"This effort marks an important first step in applying LLMs to the complex work of designing semiconductors," said Dally at the event in San Francisco. "It shows how even highly specialized fields can use their internal data to train useful generative AI models."
ChipNeMo Surfaces
The paper details how NVIDIA engineers created ChipNeMo, a custom LLM for internal use, trained on the company's data to generate and optimize software and assist human designers.
Long term, engineers hope to apply generative AI to each stage of chip design, potentially reaping significant gains in overall productivity, said Ren, whose career spans more than 20 years in EDA.
After surveying NVIDIA engineers for possible use cases, the research team chose three to start: a chatbot, a code generator and an analysis tool.
Initial Use Cases
The latter - a tool that automates the time-consuming task of maintaining updated descriptions of known bugs - has been the best received so far.
A prototype chatbot that responds to questions about GPU architecture and design helped many engineers quickly find technical documents in early tests.
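The article doesn't describe how the chatbot locates those documents, but retrieving relevant internal text before answering is the usual pattern for this kind of assistant. Below is a minimal sketch in Python, assuming a simple TF-IDF index as a stand-in for whatever retrieval system the team actually uses; the documents and query are invented for illustration.

```python
# Minimal illustration of retrieving internal documents for a chatbot query.
# Assumption: a TF-IDF index stands in for the actual retrieval system,
# which the article does not describe; documents and query are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Memory subsystem overview: L2 cache partitioning and crossbar arbitration.",
    "Timing closure guide: fixing setup violations on high-fanout nets.",
    "Streaming multiprocessor notes: warp schedulers and register file banking.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)  # index the internal corpus

query = "How are warps scheduled on a streaming multiprocessor?"
query_vector = vectorizer.transform([query])

# Rank documents by cosine similarity and surface the best match as context.
scores = cosine_similarity(query_vector, doc_vectors)[0]
best = scores.argmax()
print(f"Top document (score {scores[best]:.2f}): {documents[best]}")
```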
A code generator will help designers write software for a chip design. A version in development (demonstrated above) already creates snippets of about 10-20 lines in two specialized languages chip designers use. It will be integrated with existing tools, so engineers have a handy assistant for designs in progress.
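How the generator is wired into those tools isn't spelled out here. As a purely illustrative sketch, the Python below assembles the kind of prompt such an assistant might receive; the generate placeholder is hypothetical and is not the interface described in the paper.

```python
# Hypothetical sketch of prompting an in-tool code assistant.
# `generate` is a made-up placeholder, not the ChipNeMo interface.
def build_prompt(language: str, task: str, context: str) -> str:
    """Assemble a short prompt asking for a 10-20 line snippet."""
    return (
        f"You are an assistant for {language} used in chip design.\n"
        f"Existing code context:\n{context}\n"
        f"Task: {task}\n"
        "Write a snippet of roughly 10-20 lines that completes the task."
    )

def generate(prompt: str) -> str:
    # Placeholder: plug in whichever model endpoint the design tools expose.
    raise NotImplementedError

if __name__ == "__main__":
    print(build_prompt(
        language="a hardware description language",
        task="add a registered output stage to the existing module",
        context="module adder(input [7:0] a, b, output [8:0] sum); ...",
    ))
```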
Customizing AI Models With NVIDIA NeMo
The paper mainly focuses on the team's work gathering its design data and using it to create a specialized generative AI model, a process portable to any industry.
As its starting point, the team chose a foundation model and customized it with NVIDIA NeMo, a framework for building, customizing and deploying generative AI models that's included in the NVIDIA AI Enterprise software platform. The selected NeMo model sports 43 billion parameters, a measure of its capability to understand patterns. It was trained using more than a trillion tokens, the words and symbols in text and software.
ChipNeMo provides an example of how one deeply technical team refined a pretrained model with its own data. The refinement took two training rounds: the first used about 24 billion tokens of internal design data, and the second used a mix of about 130,000 conversation and design examples.
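The paper's data format isn't reproduced here, but examples like those in the second round are commonly packaged as one JSON record per line. A minimal sketch, assuming a generic prompt/response schema; the field names are illustrative, not the schema the team used.

```python
# Minimal sketch of packaging conversation and design examples for fine-tuning.
# Assumption: a generic JSONL schema with "prompt" and "response" fields;
# the actual format used in ChipNeMo's second training round isn't given here.
import json

examples = [
    {
        "prompt": "What does this timing report warning mean?",
        "response": "It flags a setup violation on the listed path; data arrives after the clock edge.",
    },
    {
        "prompt": "Summarize the open issues filed against the memory controller block.",
        "response": "Three issues remain open: two arbitration bugs and one documentation gap.",
    },
]

with open("finetune_examples.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")  # one training example per line

print(f"Wrote {len(examples)} examples")
```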
The work is among several examples of research and proofs of concept of generative AI in the semiconductor industry, just beginning to emerge from the lab.
Sharing Lessons Learned
One of the most important lessons Ren's team learned is the value of customizing an LLM.
On chip-design tasks, custom ChipNeMo models with as few as 13 billion parameters matched or exceeded the performance of much larger general-purpose LLMs like LLaMA2 with 70 billion parameters. In some use cases, ChipNeMo models were dramatically better.
Along the way, users need to exercise care in what data they collect and how they clean it for use in training, he added.
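The article doesn't enumerate those cleaning steps. As one small example of what that care can look like, the sketch below deduplicates documents and drops very short fragments; this is an assumption about typical practice, not the team's actual pipeline, which does considerably more.

```python
# Illustrative cleaning pass over raw training documents: drop short fragments
# and exact duplicates. Assumption: this stands in for the team's real pipeline,
# which the article does not describe.
import hashlib

def clean(documents, min_chars=200):
    seen = set()
    kept = []
    for doc in documents:
        text = doc.strip()
        if len(text) < min_chars:  # too short to be useful training data
            continue
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:  # exact duplicate of a document already kept
            continue
        seen.add(digest)
        kept.append(text)
    return kept

if __name__ == "__main__":
    raw = ["short note", "A longer design document. " * 20, "A longer design document. " * 20]
    print(f"Kept {len(clean(raw))} of {len(raw)} documents")
```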
Finally, Ren advises users to stay abreast of the latest tools that can speed and simplify the work.
NVIDIA Research has hundreds of scientists and engineers worldwide focused on topics such as AI, computer graphics, computer vision, self-driving cars and robotics. Other recent projects in semiconductors include using AI to design smaller, faster circuits and to optimize placement of large blocks.
Enterprises looking to build their own custom LLMs can get started today using the NeMo framework, available on GitHub and in the NVIDIA NGC catalog.