1. DiffusionRenderer: Neural Inverse and Forward Rendering with Video Diffusion Models
- Authors
Liang, Ruofan; Gojcic, Zan; Ling, Huan; Munkberg, Jacob; Hasselgren, Jon; Lin, Zhi-Hao; Gao, Jun; Keller, Alexander; Vijaykumar, Nandita; Fidler, Sanja; Wang, Zian
- Subjects
Computer Science - Computer Vision and Pattern Recognition; Computer Science - Graphics
- Abstract
Understanding and modeling lighting effects are fundamental tasks in computer vision and graphics. Classic physically-based rendering (PBR) accurately simulates light transport, but relies on precise scene representations--explicit 3D geometry, high-quality material properties, and lighting conditions--that are often impractical to obtain in real-world scenarios. We therefore introduce DiffusionRenderer, a neural approach that addresses the dual problems of inverse and forward rendering within a holistic framework. Leveraging powerful video diffusion model priors, the inverse rendering model accurately estimates G-buffers from real-world videos, providing an interface for image editing tasks and training data for the rendering model. Conversely, our rendering model generates photorealistic images from G-buffers without explicit light transport simulation. Experiments demonstrate that DiffusionRenderer effectively approximates both inverse and forward rendering, consistently outperforming the state of the art. Our model enables practical applications from a single video input, including relighting, material editing, and realistic object insertion.
- Comment
Project page: research.nvidia.com/labs/toronto-ai/DiffusionRenderer/
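The abstract describes a two-stage pipeline: an inverse renderer maps a real video to per-frame G-buffers (intrinsic attributes such as normals, albedo, and roughness), which can then be edited and passed to a neural forward renderer together with new lighting. The sketch below illustrates how such a relight-from-one-video workflow might be composed. It is a minimal sketch only: the `InverseRenderer` and `ForwardRenderer` interfaces, the G-buffer layout, and the environment-map conditioning are all hypothetical placeholders, not the paper's actual API.

```python
# Hypothetical sketch of the inverse -> edit -> forward relighting workflow
# described in the abstract. Model classes and G-buffer layout are
# placeholders, not DiffusionRenderer's actual API.
from dataclasses import dataclass
import numpy as np

@dataclass
class GBuffers:
    """Per-frame intrinsic scene attributes estimated by inverse rendering."""
    normals: np.ndarray    # (T, H, W, 3) surface normals
    albedo: np.ndarray     # (T, H, W, 3) base color
    roughness: np.ndarray  # (T, H, W, 1) material roughness

class InverseRenderer:
    """Placeholder for a video diffusion model that estimates G-buffers."""
    def __call__(self, video: np.ndarray) -> GBuffers:
        t, h, w, _ = video.shape
        # Dummy estimates; a real model would predict these from the video.
        return GBuffers(
            normals=np.zeros((t, h, w, 3)),
            albedo=np.zeros((t, h, w, 3)),
            roughness=np.zeros((t, h, w, 1)),
        )

class ForwardRenderer:
    """Placeholder for a video diffusion model that renders from G-buffers."""
    def __call__(self, gbuffers: GBuffers, env_map: np.ndarray) -> np.ndarray:
        t, h, w, _ = gbuffers.albedo.shape
        # Dummy output; a real model would synthesize photorealistic frames
        # conditioned on the G-buffers and the target lighting.
        return np.zeros((t, h, w, 3))

def relight(video: np.ndarray, env_map: np.ndarray) -> np.ndarray:
    """Relight a video: estimate G-buffers, then re-render under new lighting."""
    inverse, forward = InverseRenderer(), ForwardRenderer()
    gbuffers = inverse(video)          # video -> intrinsic scene attributes
    gbuffers.roughness *= 0.5          # optional material edit in G-buffer space
    return forward(gbuffers, env_map)  # attributes + new lighting -> frames

frames = relight(np.zeros((8, 256, 256, 3)), np.zeros((64, 128, 3)))
print(frames.shape)  # (8, 256, 256, 3)
```

Composing the two models this way is what lets a single pipeline cover relighting (swap `env_map`), material editing (modify G-buffer channels), and object insertion (composite new attributes into the G-buffers) without explicit light transport simulation.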
- Published
2025