
NVIDIA researchers are at the forefront of the rapidly advancing field of visual generative AI, developing new techniques to create and interpret images, videos and 3D environments.
More than 50 of these projects will be showcased at the Computer Vision and Pattern Recognition (CVPR) conference, taking place June 17-21 in Seattle. Two of the papers - one on the training dynamics of diffusion models and another on high-definition maps for autonomous vehicles - are finalists for CVPR's Best Paper Awards.
NVIDIA is also the winner of the CVPR Autonomous Grand Challenge's End-to-End Driving at Scale track - a significant milestone that demonstrates the company's use of generative AI for comprehensive self-driving models. The winning submission, which outperformed more than 450 entries worldwide, also received CVPR's Innovation Award.
NVIDIA's research at CVPR includes a text-to-image model that can be easily customized to depict a specific object or character, a new model for object pose estimation, a technique to edit neural radiance fields (NeRFs) and a visual language model that can understand memes. Additional papers introduce domain-specific innovations for industries including automotive, healthcare and robotics.
Collectively, the work introduces powerful AI models that could enable creators to more quickly bring their artistic visions to life, accelerate the training of autonomous robots for manufacturing, and support healthcare professionals by helping process radiology reports.
"Artificial intelligence, and generative AI in particular, represents a pivotal technological advancement," said Jan Kautz, vice president of learning and perception research at NVIDIA. "At CVPR, NVIDIA Research is sharing how we're pushing the boundaries of what's possible - from powerful image generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars."
At CVPR, NVIDIA also announced NVIDIA Omniverse Cloud Sensor RTX, a set of microservices that enable physically accurate sensor simulation to accelerate the development of fully autonomous machines of every kind.
Forget Fine-Tuning: JeDi Simplifies Custom Image Generation

Creators harnessing diffusion models, the most popular method for generating images based on text prompts, often have a specific character or object in mind - they may, for example, be developing a storyboard around an animated mouse or brainstorming an ad campaign for a specific toy.
Prior research has enabled these creators to personalize the output of diffusion models to focus on a specific subject using fine-tuning - where a user trains the model on a custom dataset - but the process can be time-consuming and inaccessible for general users.
JeDi, a paper by researchers from Johns Hopkins University, Toyota Technological Institute at Chicago and NVIDIA, proposes a new technique that allows users to easily personalize the output of a diffusion model within a couple of seconds using reference images. The team found that the model achieves state-of-the-art quality, significantly outperforming existing fine-tuning-based and fine-tuning-free methods.
JeDi can also be combined with retrieval-augmented generation, or RAG, to generate visuals specific to a database, such as a brand's product catalog.
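The paper does not spell out the retrieval pipeline, but the RAG step it describes can be sketched as a nearest-neighbor search over image embeddings: given a query embedding (from a prompt or example image), pull the most similar items from a catalog to serve as the model's reference images. The embeddings and catalog below are toy placeholders, not JeDi's actual representation.

```python
import numpy as np

def retrieve_references(query_emb, catalog_embs, k=2):
    """Return indices of the k catalog images most similar to the query.

    Embeddings are assumed to come from some image/text encoder
    (e.g. a CLIP-style model); here they are plain unit vectors.
    """
    query = query_emb / np.linalg.norm(query_emb)
    catalog = catalog_embs / np.linalg.norm(catalog_embs, axis=1, keepdims=True)
    scores = catalog @ query  # cosine similarity per catalog item
    return np.argsort(scores)[::-1][:k]  # best matches first

# Toy catalog: three product-image embeddings; items 0 and 2 resemble the query.
catalog = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.9, 0.1, 0.0]])
query = np.array([1.0, 0.05, 0.0])
top = retrieve_references(query, catalog, k=2)
print(list(top))  # [0, 2]
```

The retrieved images would then be passed to the personalization model as its reference set, in place of any fine-tuning step.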
https://blogs.nvidia.com/wp-content/uploads/2024/06/JeDi-cow-sculpture.mp4
New Foundation Model Perfects the Pose

NVIDIA researchers at CVPR are also presenting FoundationPose, a foundation model for object pose estimation and tracking that can be instantly applied to new objects during inference, without the need for fine-tuning.
The model, which set a new record on a popular benchmark for object pose estimation, uses either a small set of reference images or a 3D representation of an object to understand its shape. It can then identify and track how that object moves and rotates in 3D across a video, even in poor lighting conditions or complex scenes with visual obstructions.
FoundationPose could be used in industrial applications to help autonomous robots identify and track the objects they interact with. It could also be used in augmented reality applications where an AI model is used to overlay visuals on a live scene.
NeRFDeformer Transforms 3D Scenes With a Single Snapshot

A NeRF is an AI model that can render a 3D scene based on a series of 2D images taken from different positions in the environment. In fields like robotics, NeRFs can be used to generate immersive 3D renders of complex real-world scenes, such as a cluttered room or a construction site. However, to make any changes, developers would need to manually define how the scene has transformed - or remake the NeRF entirely.
Researchers from the University of Illinois Urbana-Champaign and NVIDIA have simplified the process with NeRFDeformer. The method, being presented at CVPR, can successfully transform an existing NeRF using a single RGB-D image, which is a combination of a normal photo and a depth map that captures how far each object in a scene is from the camera.
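To see what an RGB-D image contributes beyond a normal photo, note that a depth map plus the camera's intrinsics is enough to lift every pixel into a 3D point, which is the geometric signal a method like NeRFDeformer can exploit. The sketch below shows the standard pinhole back-projection; the intrinsics values are illustrative, and this is not code from the paper.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a depth map (distance in meters per pixel) to 3D
    camera-space points via the standard pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # shape (h, w, 3)

# 2x2 toy depth map with made-up intrinsics
depth = np.array([[1.0, 2.0],
                  [1.5, 2.5]])
pts = backproject(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts[0, 1])  # pixel (u=1, v=0) at depth 2.0 -> [ 1. -1.  2.]
```

Each pixel's color (the RGB half of the image) can then be attached to its 3D point, giving the colored point cloud from which a scene transformation can be estimated.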
VILA Visual Language Model Gets the Picture

A CVPR research collaboration between NVIDIA and the Massachusetts Institute of Technology is advancing the state of the art for vision language models, which are generative AI models that can process videos, images and text.
The group developed VILA, a family of open-source visual language models that outperforms prior neural networks on key benchmarks that test how well AI models answer questions about images. VILA's unique pretraining process unlocked new model capabilities, including enhanced world knowledge, stronger in-context learning and the ability to reason across multiple images.
VILA can understand memes and reason based on multiple images or video frames.