
AutoDecoding Latent 3D Diffusion Models

Authors :
Ntavelis, Evangelos
Siarohin, Aliaksandr
Olszewski, Kyle
Wang, Chaoyang
Van Gool, Luc
Tulyakov, Sergey
Publication Year :
2023

Abstract

We present a novel approach to the generation of static and articulated 3D assets that has a 3D autodecoder at its core. The 3D autodecoder framework embeds properties learned from the target dataset in the latent space, which can then be decoded into a volumetric representation for rendering view-consistent appearance and geometry. We then identify the appropriate intermediate volumetric latent space, and introduce robust normalization and de-normalization operations to learn a 3D diffusion from 2D images or monocular videos of rigid or articulated objects. Our approach is flexible enough to use either existing camera supervision or no camera information at all -- instead efficiently learning it during training. Our evaluations demonstrate that our generation results outperform state-of-the-art alternatives on various benchmark datasets and metrics, including multi-view image datasets of synthetic objects, real in-the-wild videos of moving people, and a large-scale, real video dataset of static objects.

Comment: Project page: https://snap-research.github.io/3DVADER/
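To make the "autodecoder" idea concrete: unlike an autoencoder, an autodecoder has no encoder network; each training sample owns a learnable latent code that is optimized jointly with a shared decoder against a reconstruction loss. The sketch below is a hypothetical, minimal NumPy illustration of that general training scheme (a linear decoder on toy data), not the paper's actual volumetric model; all names and dimensions are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, latent_dim, data_dim = 8, 4, 16

X = rng.normal(size=(n_samples, data_dim))            # toy "dataset" to reconstruct
Z = rng.normal(size=(n_samples, latent_dim)) * 0.01   # one learnable latent per sample
W = rng.normal(size=(latent_dim, data_dim)) * 0.1     # shared decoder weights

lr = 0.05
initial_loss = float(np.mean((Z @ W - X) ** 2))
for step in range(2000):
    pred = Z @ W                       # decode every latent
    err = pred - X                     # reconstruction residual
    grad_W = Z.T @ err / n_samples     # gradient w.r.t. shared decoder
    grad_Z = err @ W.T / n_samples     # gradient w.r.t. per-sample latents
    W -= lr * grad_W
    Z -= lr * grad_Z                   # latents are parameters, updated like weights

loss = float(np.mean((Z @ W - X) ** 2))
print(loss < initial_loss)
```

In the paper's setting the decoder produces a volumetric representation rendered into images, and the diffusion model is then trained in the resulting latent space; this sketch only shows the joint latent-plus-decoder optimization that defines an autodecoder.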

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2307.05445
Document Type :
Working Paper