
VividDream: Generating 3D Scene with Ambient Dynamics

Authors:
Lee, Yao-Chih
Chen, Yi-Ting
Wang, Andrew
Liao, Ting-Hsuan
Feng, Brandon Y.
Huang, Jia-Bin
Publication Year:
2024

Abstract

We introduce VividDream, a method for generating explorable 4D scenes with ambient dynamics from a single input image or text prompt. VividDream first expands an input image into a static 3D point cloud through iterative inpainting and geometry merging. An ensemble of animated videos is then generated using video diffusion models with quality refinement techniques, conditioned on renderings of the static 3D scene from sampled camera trajectories. We then optimize a canonical 4D scene representation using the animated video ensemble, with per-video motion embeddings and visibility masks to mitigate inconsistencies. The resulting 4D scene enables free-view exploration of the 3D scene with plausible ambient dynamics. Experiments demonstrate that VividDream can provide human viewers with compelling 4D experiences generated from diverse real images and text prompts.

Comment: Project page: https://vivid-dream-4d.github.io
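The abstract describes a three-stage pipeline (static point cloud expansion, video ensemble generation, canonical 4D optimization). The Python sketch below only illustrates that data flow under stated assumptions; every function name, shape, and return value is a hypothetical placeholder, and the actual inpainting, depth estimation, video diffusion, and 4D optimization components are stubbed out. It is not the authors' implementation.

# Hypothetical sketch of the pipeline described in the abstract.
# All names and shapes are illustrative placeholders, not the authors' code.
import numpy as np

def expand_to_point_cloud(image: np.ndarray, num_iters: int = 5) -> np.ndarray:
    """Stage 1: iteratively inpaint novel views and merge the estimated
    geometry into a growing static 3D point cloud (placeholder points)."""
    points = np.random.rand(1000, 3)  # stand-in for lifted RGB-D geometry
    for _ in range(num_iters):
        new_points = np.random.rand(200, 3)  # inpaint + unproject (omitted)
        points = np.concatenate([points, new_points], axis=0)
    return points

def render_trajectory(points: np.ndarray, num_frames: int = 16) -> np.ndarray:
    """Render the static scene along one sampled camera trajectory
    (placeholder frames of shape num_frames x H x W x 3)."""
    return np.zeros((num_frames, 64, 64, 3))

def animate_with_video_diffusion(condition_frames: np.ndarray) -> np.ndarray:
    """Stage 2: a video diffusion model animates the conditioning renderings,
    adding ambient dynamics (placeholder: pass-through)."""
    return condition_frames.copy()

def optimize_canonical_4d(videos: list[np.ndarray]) -> dict:
    """Stage 3: fit a canonical 4D representation to the video ensemble,
    with per-video motion embeddings and visibility masks absorbing
    cross-video inconsistencies (placeholder parameters)."""
    return {
        "canonical_scene": np.zeros(10),
        "motion_embeddings": [np.zeros(8) for _ in videos],
        "visibility_masks": [np.ones(v.shape[:1]) for v in videos],
    }

if __name__ == "__main__":
    image = np.zeros((512, 512, 3))
    cloud = expand_to_point_cloud(image)
    ensemble = [animate_with_video_diffusion(render_trajectory(cloud)) for _ in range(4)]
    scene_4d = optimize_canonical_4d(ensemble)
    print(f"points: {cloud.shape}, videos in ensemble: {len(ensemble)}")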

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2405.20334
Document Type:
Working Paper