
NVIDIA researchers are at the forefront of the rapidly advancing field of visual generative AI, developing new techniques to create and interpret images, videos and 3D environments.
More than 50 of these projects will be showcased at the Computer Vision and Pattern Recognition (CVPR) conference, taking place June 17-21 in Seattle. Two of the papers - one on the training dynamics of diffusion models and another on high-definition maps for autonomous vehicles - are finalists for CVPR's Best Paper Awards.
NVIDIA is also the winner of the CVPR Autonomous Grand Challenge's End-to-End Driving at Scale track - a significant milestone that demonstrates the company's use of generative AI for comprehensive self-driving models. The winning submission, which outperformed more than 450 entries worldwide, also received CVPR's Innovation Award.
NVIDIA's research at CVPR includes a text-to-image model that can be easily customized to depict a specific object or character, a new model for object pose estimation, a technique to edit neural radiance fields (NeRFs) and a visual language model that can understand memes. Additional papers introduce domain-specific innovations for industries including automotive, healthcare and robotics.
Collectively, the work introduces powerful AI models that could enable creators to more quickly bring their artistic visions to life, accelerate the training of autonomous robots for manufacturing, and support healthcare professionals by helping process radiology reports.
"Artificial intelligence, and generative AI in particular, represents a pivotal technological advancement," said Jan Kautz, vice president of learning and perception research at NVIDIA. "At CVPR, NVIDIA Research is sharing how we're pushing the boundaries of what's possible - from powerful image generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars."
At CVPR, NVIDIA also announced NVIDIA Omniverse Cloud Sensor RTX, a set of microservices that enable physically accurate sensor simulation to accelerate the development of fully autonomous machines of every kind.
Forget Fine-Tuning: JeDi Simplifies Custom Image Generation

Creators harnessing diffusion models, the most popular method for generating images based on text prompts, often have a specific character or object in mind - they may, for example, be developing a storyboard around an animated mouse or brainstorming an ad campaign for a specific toy.
Prior research has enabled these creators to personalize the output of diffusion models to focus on a specific subject using fine-tuning - where a user trains the model on a custom dataset - but the process can be time-consuming and inaccessible for general users.
JeDi, a paper by researchers from Johns Hopkins University, Toyota Technological Institute at Chicago and NVIDIA, proposes a new technique that allows users to easily personalize the output of a diffusion model within a couple of seconds using reference images. The team found that the model achieves state-of-the-art quality, significantly outperforming existing fine-tuning-based and fine-tuning-free methods.
JeDi can also be combined with retrieval-augmented generation, or RAG, to generate visuals specific to a database, such as a brand's product catalog.
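The retrieval step in such a RAG-style pipeline is generic: given an embedding of the user's query, find the closest reference images in a catalog, then pass those references to the personalized generator. The sketch below illustrates only that retrieval stage with toy embeddings; the catalog entries and vectors are invented for illustration and are not part of the JeDi paper.

```python
import numpy as np

# Hypothetical product catalog: each entry pairs a product name with a
# precomputed image-embedding vector (toy 3-dimensional values here).
catalog = {
    "toy_car": np.array([0.9, 0.1, 0.0]),
    "plush_bear": np.array([0.1, 0.8, 0.2]),
    "board_game": np.array([0.0, 0.2, 0.9]),
}

def retrieve_reference(query_embedding, catalog, k=1):
    """Return the k catalog entries closest to the query by cosine similarity."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(catalog,
                    key=lambda name: cosine(query_embedding, catalog[name]),
                    reverse=True)
    return ranked[:k]

# A query embedding close to "plush_bear" retrieves that product's
# reference images, which would then condition a JeDi-style generator.
references = retrieve_reference(np.array([0.2, 0.9, 0.1]), catalog)
```

In a real deployment, the embeddings would come from an image encoder and the retrieved images would be fed to the generation model as its personalization references.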
https://blogs.nvidia.com/wp-content/uploads/2024/06/JeDi-cow-sculpture.mp4
New Foundation Model Perfects the Pose

NVIDIA researchers at CVPR are also presenting FoundationPose, a foundation model for object pose estimation and tracking that can be instantly applied to new objects during inference, without the need for fine-tuning.
The model, which set a new record on a popular benchmark for object pose estimation, uses either a small set of reference images or a 3D representation of an object to understand its shape. It can then identify and track how that object moves and rotates in 3D across a video, even in poor lighting conditions or complex scenes with visual obstructions.
FoundationPose could be used in industrial applications to help autonomous robots identify and track the objects they interact with. It could also be used in augmented reality applications where an AI model is used to overlay visuals on a live scene.
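An object pose of the kind such systems estimate is conventionally represented as a rigid transform (a rotation plus a translation) in a 4x4 matrix, and tracking across video frames amounts to composing per-frame motion updates onto the previous estimate. The following minimal sketch shows that representation with standard pinhole-free geometry only; it is an illustration of the data structure, not FoundationPose's actual API.

```python
import numpy as np

def pose_matrix(yaw_deg, translation):
    """Build a 4x4 rigid transform from a yaw rotation (degrees, about z)
    and a 3D translation vector."""
    t = np.radians(yaw_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(t), -np.sin(t), 0.0],
                 [np.sin(t),  np.cos(t), 0.0],
                 [0.0,        0.0,       1.0]]
    T[:3, 3] = translation
    return T

# Initial estimate: the object sits 1 m in front of the camera.
pose = pose_matrix(0, [0.0, 0.0, 1.0])

# Per-frame update: the object rotates 90 degrees and shifts 10 cm.
# Tracking composes this update onto the running pose estimate.
update = pose_matrix(90, [0.1, 0.0, 0.0])
pose = pose @ update
```

A tracker repeats that composition every frame, correcting the accumulated pose against what it observes in the image.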
NeRFDeformer Transforms 3D Scenes With a Single Snapshot

A NeRF is an AI model that can render a 3D scene based on a series of 2D images taken from different positions in the environment. In fields like robotics, NeRFs can be used to generate immersive 3D renders of complex real-world scenes, such as a cluttered room or a construction site. However, to make any changes, developers would need to manually define how the scene has transformed - or remake the NeRF entirely.
Researchers from the University of Illinois Urbana-Champaign and NVIDIA have simplified the process with NeRFDeformer. The method, being presented at CVPR, can successfully transform an existing NeRF using a single RGB-D image, which is a combination of a normal photo and a depth map that captures how far each object in a scene is from the camera.
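What makes a depth map useful here is that each pixel's depth, combined with the camera's intrinsics, pins down a full 3D point via the standard pinhole camera model. The sketch below shows that back-projection step; the intrinsic values (fx, fy, cx, cy) are illustrative placeholders, not parameters from the paper.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a pixel (u, v) with measured depth to a 3D point in camera
    coordinates using the pinhole model: x = (u - cx) * z / fx, etc."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# A pixel at the principal point (image center) maps straight onto the
# camera's optical axis, at the measured depth.
point = backproject(320, 240, 2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

Back-projecting every pixel of an RGB-D image this way yields a colored point cloud, which is the geometric signal a method like NeRFDeformer can exploit from a single snapshot.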
VILA Visual Language Model Gets the Picture

A CVPR research collaboration between NVIDIA and the Massachusetts Institute of Technology is advancing the state of the art for vision language models, which are generative AI models that can process videos, images and text.
The group developed VILA, a family of open-source visual language models that outperforms prior neural networks on key benchmarks that test how well AI models answer questions about images. VILA's unique pretraining process unlocked new model capabilities, including enhanced world knowledge, stronger in-context learning and the ability to reason across multiple images.
VILA can understand memes and reason based on multiple images or video frames.