
NVIDIA researchers are at the forefront of the rapidly advancing field of visual generative AI, developing new techniques to create and interpret images, videos and 3D environments.
More than 50 of these projects will be showcased at the Computer Vision and Pattern Recognition (CVPR) conference, taking place June 17-21 in Seattle. Two of the papers - one on the training dynamics of diffusion models and another on high-definition maps for autonomous vehicles - are finalists for CVPR's Best Paper Awards.
NVIDIA is also the winner of the CVPR Autonomous Grand Challenge's End-to-End Driving at Scale track - a significant milestone that demonstrates the company's use of generative AI for comprehensive self-driving models. The winning submission, which outperformed more than 450 entries worldwide, also received CVPR's Innovation Award.
NVIDIA's research at CVPR includes a text-to-image model that can be easily customized to depict a specific object or character, a new model for object pose estimation, a technique to edit neural radiance fields (NeRFs) and a visual language model that can understand memes. Additional papers introduce domain-specific innovations for industries including automotive, healthcare and robotics.
Collectively, the work introduces powerful AI models that could enable creators to more quickly bring their artistic visions to life, accelerate the training of autonomous robots for manufacturing, and support healthcare professionals by helping process radiology reports.
"Artificial intelligence, and generative AI in particular, represents a pivotal technological advancement," said Jan Kautz, vice president of learning and perception research at NVIDIA. "At CVPR, NVIDIA Research is sharing how we're pushing the boundaries of what's possible - from powerful image generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars."
At CVPR, NVIDIA also announced NVIDIA Omniverse Cloud Sensor RTX, a set of microservices that enable physically accurate sensor simulation to accelerate the development of fully autonomous machines of every kind.
Forget Fine-Tuning: JeDi Simplifies Custom Image Generation

Creators harnessing diffusion models, the most popular method for generating images based on text prompts, often have a specific character or object in mind - they may, for example, be developing a storyboard around an animated mouse or brainstorming an ad campaign for a specific toy.
Prior research has enabled these creators to personalize the output of diffusion models to focus on a specific subject using fine-tuning - where a user trains the model on a custom dataset - but the process can be time-consuming and inaccessible for general users.
JeDi, a paper by researchers from Johns Hopkins University, Toyota Technological Institute at Chicago and NVIDIA, proposes a new technique that allows users to easily personalize the output of a diffusion model within a couple of seconds using reference images. The team found that the model achieves state-of-the-art quality, significantly outperforming existing fine-tuning-based and fine-tuning-free methods.
JeDi can also be combined with retrieval-augmented generation, or RAG, to generate visuals specific to a database, such as a brand's product catalog.
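The retrieval step in a RAG-style setup like this can be illustrated with a minimal sketch: embed the catalog items, rank them by cosine similarity against a query embedding, and pass the top matches on as reference images for generation. Everything below is hypothetical (the catalog, the 2-D embeddings, the `retrieve` helper) - it stands in for whatever embedding model and generator a real pipeline would use, and is not JeDi's actual interface.

```python
import numpy as np

# Toy catalog: product name -> embedding vector. In a real system these
# would be image embeddings from a trained encoder, not hand-picked 2-D vectors.
catalog = {
    "red sneaker": np.array([1.0, 0.0]),
    "blue backpack": np.array([0.0, 1.0]),
}

def retrieve(query_vec, catalog, k=1):
    """Return the names of the k catalog items most similar to the query,
    ranked by cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(catalog.items(), key=lambda kv: cos(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# A query embedding close to "red sneaker" retrieves that product; its
# reference images would then condition the image generator.
refs = retrieve(np.array([0.9, 0.1]), catalog)
```

The point of the sketch is the division of labor: retrieval narrows the catalog to on-brand references, and the personalization model handles rendering them into new scenes.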
https://blogs.nvidia.com/wp-content/uploads/2024/06/JeDi-cow-sculpture.mp4
New Foundation Model Perfects the Pose

NVIDIA researchers at CVPR are also presenting FoundationPose, a foundation model for object pose estimation and tracking that can be instantly applied to new objects during inference, without the need for fine-tuning.
The model, which set a new record on a popular benchmark for object pose estimation, uses either a small set of reference images or a 3D representation of an object to understand its shape. It can then identify and track how that object moves and rotates in 3D across a video, even in poor lighting conditions or complex scenes with visual obstructions.
FoundationPose could be used in industrial applications to help autonomous robots identify and track the objects they interact with. It could also be used in augmented reality applications where an AI model is used to overlay visuals on a live scene.
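A 6D object pose of the kind FoundationPose estimates is conventionally a rigid transform: a 3x3 rotation matrix plus a 3D translation vector. The sketch below only shows how such a pose is applied to an object's points - the rotation, translation, and test point are made-up values, and this is not FoundationPose's API.

```python
import numpy as np

def apply_pose(points, R, t):
    """Apply a rigid 6D pose (3x3 rotation R, translation vector t)
    to an array of Nx3 points."""
    return points @ R.T + t

# Example pose: rotate 90 degrees about the z-axis, then shift 1 unit along x.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])

# A point on the object's x-axis rotates onto the y-axis, then translates.
p = apply_pose(np.array([[1.0, 0.0, 0.0]]), R, t)
```

Tracking, in this view, amounts to re-estimating (R, t) each frame so the transformed model stays aligned with the observed object.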
NeRFDeformer Transforms 3D Scenes With a Single Snapshot

A NeRF is an AI model that can render a 3D scene based on a series of 2D images taken from different positions in the environment. In fields like robotics, NeRFs can be used to generate immersive 3D renders of complex real-world scenes, such as a cluttered room or a construction site. However, to make any changes, developers would need to manually define how the scene has transformed - or remake the NeRF entirely.
Researchers from the University of Illinois Urbana-Champaign and NVIDIA have simplified the process with NeRFDeformer. The method, being presented at CVPR, can successfully transform an existing NeRF using a single RGB-D image, which is a combination of a normal photo and a depth map that captures how far each object in a scene is from the camera.
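The depth map in an RGB-D image gives a per-pixel distance that, combined with the camera's intrinsics, can be back-projected into a 3D point cloud - the geometric signal that lets a single snapshot describe how a scene has changed. A minimal sketch of that back-projection under a pinhole camera model (the intrinsics and the flat-wall depth map below are illustrative values, not from the paper):

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Convert a depth map (meters) to a 3D point cloud using
    pinhole camera intrinsics (focal lengths fx, fy; principal point cx, cy)."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

# A flat wall 2 m from the camera: every pixel has depth 2.0.
depth = np.full((4, 4), 2.0)
pts = backproject(depth, fx=50.0, fy=50.0, cx=2.0, cy=2.0)
```

The pixel at the principal point back-projects straight ahead (x = y = 0), and every pixel keeps its measured depth as the z coordinate.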
VILA Visual Language Model Gets the Picture

A CVPR research collaboration between NVIDIA and the Massachusetts Institute of Technology is advancing the state of the art for vision language models, which are generative AI models that can process videos, images and text.
The group developed VILA, a family of open-source visual language models that outperforms prior neural networks on key benchmarks that test how well AI models answer questions about images. VILA's unique pretraining process unlocked new model capabilities, including enhanced world knowledge, stronger in-context learning and the ability to reason across multiple images.
VILA can understand memes and reason based on multiple images or video frames.