Mile-High AI: NVIDIA Research to Present Advancements in Simulation and Gen AI at SIGGRAPH
By Aaron Lefohn | July 12, 2024
In Denver, NVIDIA researchers will share how simulation research is improving AI models - and how AI models are improving simulation technology.
NVIDIA is taking an array of advancements in rendering, simulation and generative AI to SIGGRAPH 2024, the premier computer graphics conference, which will take place July 28-Aug. 1 in Denver.
More than 20 papers from NVIDIA Research introduce innovations advancing synthetic data generators and inverse rendering tools that can help train next-generation models. NVIDIA's AI research is making simulation better by boosting image quality and unlocking new ways to create 3D representations of real or imagined worlds.
The papers focus on diffusion models for visual generative AI, physics-based simulation and increasingly realistic AI-powered rendering. They include two technical Best Paper Award winners and collaborations with universities across the U.S., Canada, China, Israel and Japan as well as researchers at companies including Adobe and Roblox.
These initiatives will help create tools that developers and businesses can use to generate complex virtual objects, characters and environments. Synthetic data generation can then be harnessed to tell powerful visual stories, aid scientists' understanding of natural phenomena or assist in simulation-based training of robots and autonomous vehicles.
Diffusion Models Improve Texture Painting, Text-to-Image Generation

Diffusion models, a popular tool for transforming text prompts into images, can help artists, designers and other creators rapidly generate visuals for storyboards or production, reducing the time it takes to bring ideas to life.
Two NVIDIA-authored papers are advancing the capabilities of these generative AI models.
ConsiStory, a collaboration between researchers at NVIDIA and Tel Aviv University, makes it easier to generate multiple images with a consistent main character - an essential capability for storytelling use cases such as illustrating a comic strip or developing a storyboard. The researchers' approach introduces a technique called subject-driven shared attention, which reduces the time it takes to generate consistent imagery from 13 minutes to around 30 seconds.
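To give a rough sense of the idea, the sketch below shows one way subject-driven shared attention can be pictured: during batched denoising, each image's self-attention layer also attends to subject-masked keys and values borrowed from the other images in the batch, nudging the model toward a consistent main character. This is an illustrative simplification, not the paper's implementation; the tensor shapes and the subject_masks input are assumptions made for this example.

```python
# Illustrative sketch only -- not the ConsiStory implementation.
# Each image in the batch attends to its own tokens plus subject-masked
# tokens from the other images, encouraging a consistent subject.
import torch
import torch.nn.functional as F

def subject_shared_attention(q, k, v, subject_masks):
    """
    q, k, v:       (batch, tokens, dim) self-attention projections for one layer.
    subject_masks: (batch, tokens) booleans marking tokens that belong to the
                   shared subject (e.g., derived from cross-attention maps).
    Returns attention output of shape (batch, tokens, dim).
    """
    batch = q.shape[0]
    outputs = []
    for i in range(batch):
        # Keys/values of the current image...
        k_ext, v_ext = [k[i]], [v[i]]
        # ...extended with subject tokens borrowed from every other image.
        for j in range(batch):
            if j == i:
                continue
            m = subject_masks[j]
            k_ext.append(k[j][m])
            v_ext.append(v[j][m])
        k_i = torch.cat(k_ext, dim=0)
        v_i = torch.cat(v_ext, dim=0)
        out = F.scaled_dot_product_attention(
            q[i].unsqueeze(0), k_i.unsqueeze(0), v_i.unsqueeze(0)
        )
        outputs.append(out.squeeze(0))
    return torch.stack(outputs, dim=0)
```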
NVIDIA researchers last year won the Best in Show award at SIGGRAPH's Real-Time Live event for AI models that turn text or image prompts into custom textured materials. This year, they're presenting a paper that applies 2D generative diffusion models to interactive texture painting on 3D meshes, enabling artists to paint in real time with complex textures based on any reference image.
Kick-Starting Developments in Physics-Based Simulation

Graphics researchers are narrowing the gap between physical objects and their virtual representations with physics-based simulation - a range of techniques to make digital objects and characters move the same way they would in the real world.
Several NVIDIA Research papers feature breakthroughs in the field, including SuperPADL, a project that tackles the challenge of simulating complex human motions based on text prompts.
Using a combination of reinforcement learning and supervised learning, the researchers demonstrated how the SuperPADL framework can be trained to reproduce the motion of more than 5,000 skills - and can run in real time on a consumer-grade NVIDIA GPU.
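To make the "reinforcement learning plus supervised learning" recipe concrete, here is a heavily simplified, hypothetical sketch (not SuperPADL's actual training code): skill-specific controllers trained with reinforcement learning act as teachers, and their state-action pairs are distilled into a single text-conditioned policy with ordinary supervised imitation. All class names, shapes and the rollout_fn helper are assumptions for illustration.

```python
# Hypothetical sketch of distilling RL-trained per-skill teachers into one
# text-conditioned policy via supervised imitation -- not SuperPADL's code.
import torch
import torch.nn as nn

class TextConditionedPolicy(nn.Module):
    """Student policy mapping (character state, text embedding) -> action."""
    def __init__(self, state_dim, text_dim, action_dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + text_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, text_emb):
        return self.net(torch.cat([state, text_emb], dim=-1))

def distill_from_rl_teachers(student, teachers, rollout_fn, text_embeddings,
                             steps=10_000, lr=3e-4):
    """Supervised distillation: imitate actions of RL-trained per-skill teachers."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(steps):
        # Sample a skill and roll out its RL teacher to get (state, action) pairs.
        skill = torch.randint(len(teachers), (1,)).item()
        states, teacher_actions = rollout_fn(teachers[skill])
        text = text_embeddings[skill].expand(states.shape[0], -1)
        # Behavior-cloning loss: the student matches the teacher's actions.
        loss = nn.functional.mse_loss(student(states, text), teacher_actions)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student
```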
Another NVIDIA paper features a neural physics method that applies AI to learn how objects - whether represented as a 3D mesh, a NeRF or a solid object generated by a text-to-3D model - would behave as they are moved in an environment.
A paper written in collaboration with Carnegie Mellon University researchers develops a new kind of renderer - one that, instead of modeling physical light, can perform thermal analysis, electrostatics and fluid mechanics. Named one of five best papers at SIGGRAPH, the method is easy to parallelize and doesn't require cumbersome model cleanup, offering new opportunities for speeding up engineering design cycles.
Additional simulation papers introduce a more efficient technique for modeling hair strands and a pipeline that accelerates fluid simulation by 10x.
Raising the Bar for Rendering Realism, Diffraction Simulation

Another set of NVIDIA-authored papers presents new techniques to model visible light up to 25x faster and simulate diffraction effects - such as those used in radar simulation for training self-driving cars - up to 1,000x faster.
A paper by NVIDIA and University of Waterloo researchers tackles free-space diffraction, an optical phenomenon where light spreads out or bends around the edges of objects. The team's method can integrate with path-tracing workflows to increase the efficiency of simulating diffraction in complex scenes, offering up to 1,000x acceleration. Beyond rendering visible light, the model could also be used to simulate the longer wavelengths of radar, sound or radio waves.
Path tracing samples numerous paths - multi-bounce light rays traveling through a scene - to create a photorealistic picture. Two SIGGRAPH papers imp










