
3D-free meets 3D priors: Novel View Synthesis from a Single Image with Pretrained Diffusion Guidance

Authors:
Kang, Taewon
Kothandaraman, Divya
Manocha, Dinesh
Lin, Ming C.
Publication Year:
2024

Abstract

Recent 3D novel view synthesis (NVS) methods are limited to single-object-centric scenes, struggle with complex environments, and often require extensive 3D data for training, which limits their generalization beyond the training distribution. Conversely, 3D-free methods can generate text-controlled views of complex, in-the-wild scenes using a pretrained Stable Diffusion model without any large-scale 3D training data, but they lack camera control. In this paper, we introduce a method that generates camera-controlled viewpoints from a single input image by combining the benefits of 3D-free and 3D-based approaches. Our method handles complex and diverse scenes without extensive training or additional 3D and multiview data. It leverages widely available pretrained NVS models for weak guidance, integrating this knowledge into a 3D-free view synthesis approach to achieve the desired camera control. Experimental results demonstrate that our method outperforms existing models in both qualitative and quantitative evaluations, providing high-fidelity, consistent novel view synthesis at the desired camera angles across a wide variety of scenes.

Comment: 13 pages, 12 figures, v3: corrected typos in figures
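The abstract gives no implementation details, so the following is only a minimal sketch of the general idea it describes: folding a pretrained NVS diffusion model's noise prediction into a 3D-free denoising loop as weak guidance. All names here (ToyEps, guided_ddim_step, the blend weight w_nvs) are hypothetical placeholders for illustration, not the paper's actual components.

```python
# Sketch: blend a 3D-free denoiser with weak guidance from a pretrained
# pose-conditioned NVS denoiser by mixing their noise predictions at each
# DDIM step. The epsilon predictors below are toy stand-ins; a real setup
# would wrap pretrained UNets (e.g., Stable Diffusion and an NVS model).
import torch
import torch.nn as nn


class ToyEps(nn.Module):
    """Stand-in epsilon (noise) predictor; ignores the timestep for brevity."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x_t, t):
        return self.net(x_t)


@torch.no_grad()
def guided_ddim_step(x_t, t, t_prev, alphas_cumprod,
                     eps_free, eps_nvs, w_nvs=0.3):
    """One deterministic DDIM step whose noise estimate mixes the 3D-free
    predictor with weak guidance from the NVS predictor (weight w_nvs,
    an illustrative hyperparameter, not a value from the paper)."""
    a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]
    # Weighted combination of the two models' noise predictions.
    eps = (1.0 - w_nvs) * eps_free(x_t, t) + w_nvs * eps_nvs(x_t, t)
    # Predicted clean image, then the standard eta=0 DDIM update.
    x0 = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    return a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps


# Usage: run a short reverse trajectory from pure noise.
T = 50
alphas_cumprod = torch.linspace(0.9999, 0.01, T)
eps_free, eps_nvs = ToyEps(), ToyEps()
x = torch.randn(1, 3, 64, 64)
for t in range(T - 1, 0, -1):
    x = guided_ddim_step(x, t, t - 1, alphas_cumprod, eps_free, eps_nvs)
```

A small w_nvs keeps the guidance weak, as the abstract suggests: the 3D-free model dominates scene content while the NVS model nudges the sample toward the requested camera pose.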

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.06157
Document Type:
Working Paper