NVIDIA’s Latest Graphics Research Papers to Push the Boundaries of Generative AI at SIGGRAPH



NVIDIA has revealed a series of cutting-edge research advancements in generative AI and neural graphics that will help developers and artists bring their ideas to life, whether 2D or 3D, hyperrealistic or fantastical. NVIDIA will present around 20 research papers at SIGGRAPH 2023, the premier computer graphics conference, in collaboration with more than a dozen universities across the United States, Europe, and Israel. The papers span a wide range of research: generative AI models that turn text into personalized images, inverse rendering tools that transform still images into 3D objects, neural physics models that use AI to simulate complex 3D elements with stunning realism, and neural rendering models that unlock new capabilities for generating real-time, AI-powered visual details.

https://blogs.nvidia.com/wp-content/uploads/2023/05/SIGGRAPH.mov

NVIDIA's advancements are regularly shared with developers on GitHub and incorporated into products such as the NVIDIA Omniverse platform, used for building and operating metaverse applications, and NVIDIA Picasso, a foundry for custom generative AI models for visual design. NVIDIA graphics research has also helped bring film-style rendering to video games, as in Cyberpunk 2077 Ray Tracing: Overdrive Mode, the world's first path-traced AAA title.

The research presented at SIGGRAPH 2023 will help developers and enterprises generate synthetic data rapidly to populate virtual worlds for robotics and autonomous vehicle training. It will also enable creators in art, architecture, graphic design, game development, and film to produce high-quality visuals quickly for storyboarding, previsualization, and even production.

Among the breakthroughs are generative AI models that transform text into images, powerful tools for creating concept art and storyboards for films, video games, and 3D virtual worlds. Two papers at SIGGRAPH from NVIDIA and Tel Aviv University let users customize a model's output by providing example images that the model quickly learns from. Another breakthrough is the creation of photorealistic 3D head-and-shoulders models from a single 2D portrait. The University of California, San Diego, collaborated with NVIDIA on this technology, which makes AI-powered 3D avatar creation and 3D video conferencing widely accessible.

https://blogs.nvidia.com/wp-content/uploads/2023/05/Tennis.mp4?_=1

To bring 3D characters to life with realistic movement, NVIDIA Research collaborated with Stanford University to create an AI system that can learn a range of tennis skills from 2D video recordings of real tennis matches and apply this motion to 3D characters. The simulated tennis players can accurately hit the ball to target positions on a virtual court and play extended rallies with other characters. The research tackles the difficult challenge of producing 3D characters that can perform diverse skills with realistic movement without using expensive motion-capture data.

Hair simulation for animated 3D characters is a computationally expensive challenge: a human head typically has around 100,000 hairs, each moving dynamically in response to the character's motion and the surrounding environment. A paper presented at SIGGRAPH introduces a new approach using neural physics, an AI technique that predicts how an object would move in the real world, to simulate tens of thousands of hairs in high resolution and in real time on modern GPUs. The technique offers significant performance improvements over state-of-the-art CPU-based solvers and enables hair grooming that is both accurate and interactive.
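To see why hair is so costly to simulate, consider the classical baseline that neural physics methods approximate: each strand is a chain of point masses connected by stiff springs, integrated every frame. The sketch below is a minimal, hypothetical illustration of one such strand in NumPy (it is not the paper's method; the strand count, stiffness, and damping values are illustrative assumptions). Scaling this to 100,000 strands per frame is what makes GPU-friendly learned approximations attractive.

```python
import numpy as np

def simulate_strand(n_points=20, n_steps=200, dt=1e-3,
                    rest_len=0.01, stiffness=5e3, damping=0.5):
    """Explicit-Euler simulation of a single hair strand modeled as a
    mass-spring chain (unit masses). The root point is pinned; the rest
    of the strand droops under gravity while springs resist stretching.
    All parameters are illustrative, not taken from the SIGGRAPH paper."""
    gravity = np.array([0.0, -9.81, 0.0])
    # Start the strand hanging horizontally from the pinned root.
    pos = np.zeros((n_points, 3))
    pos[:, 0] = np.arange(n_points) * rest_len
    vel = np.zeros_like(pos)

    for _ in range(n_steps):
        force = np.tile(gravity, (n_points, 1))      # gravity on every point
        seg = pos[1:] - pos[:-1]                     # vectors between neighbors
        length = np.linalg.norm(seg, axis=1, keepdims=True)
        # Hooke's law along each segment, guarding against zero length.
        spring = stiffness * (length - rest_len) * seg / np.maximum(length, 1e-9)
        force[:-1] += spring                         # pull toward next point
        force[1:] -= spring                          # equal and opposite
        force -= damping * vel                       # simple velocity damping
        vel += dt * force
        vel[0] = 0.0                                 # root stays pinned
        pos += dt * vel
    return pos

final = simulate_strand()
print(final[0], final[-1])  # pinned root; tip has drooped below y = 0
```

Even this toy version does O(n_points) work per strand per step; a full head multiplies that by tens of thousands of strands and sub-millisecond time steps, which is the workload a learned, GPU-resident predictor can collapse into a few network evaluations.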

Finally, real-time rendering simulates the physics of light to add detail to the 3D objects and characters in a scene. Neural rendering models developed by NVIDIA Research produce film-quality detail in real-time graphics, and the team has made significant progress in reducing the time required to render a single frame, bringing a level of visual fidelity to real-time graphics that was previously out of reach.

