
ATT3D: Amortized Text-to-3D Object Synthesis

Authors:
Lorraine, Jonathan
Xie, Kevin
Zeng, Xiaohui
Lin, Chen-Hsuan
Takikawa, Towaki
Sharp, Nicholas
Lin, Tsung-Yi
Liu, Ming-Yu
Fidler, Sanja
Lucas, James
Publication Year:
2023

Abstract

Text-to-3D modelling has seen exciting progress by combining generative text-to-image models with image-to-3D methods like Neural Radiance Fields. DreamFusion recently achieved high-quality results but requires a lengthy, per-prompt optimization to create 3D objects. To address this, we amortize optimization over text prompts by training a unified model on many prompts simultaneously, rather than optimizing each prompt separately. This shares computation across the prompt set and trains in less time than per-prompt optimization. Our framework, Amortized Text-to-3D (ATT3D), enables knowledge sharing between prompts, allowing generalization to unseen prompts and smooth interpolations between text for novel assets and simple animations.

Comment: 22 pages, 20 figures
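
As a rough illustration of the amortized training idea described above, the sketch below shows a single prompt-conditioned model trained over a batch of prompts rather than one optimization per prompt. This is a minimal, hypothetical PyTorch-style sketch, not the authors' implementation: the text encoder (encode_text), differentiable renderer (render_views), and text-to-image guidance loss (guidance_loss) are assumed placeholders, and the network shapes are illustrative only.

    import torch

    class PromptConditioned3DGenerator(torch.nn.Module):
        # Maps a text embedding to parameters of a 3D representation
        # (e.g. a NeRF); one shared network serves every prompt.
        def __init__(self, text_dim=512, hidden_dim=256, scene_dim=1024):
            super().__init__()
            self.mapping = torch.nn.Sequential(
                torch.nn.Linear(text_dim, hidden_dim),
                torch.nn.ReLU(),
                torch.nn.Linear(hidden_dim, scene_dim),
            )

        def forward(self, text_emb):
            return self.mapping(text_emb)

    def train_amortized(model, prompts, encode_text, render_views, guidance_loss,
                        steps=10000, batch_size=8):
        # One training loop over the whole prompt set, instead of a
        # separate optimization run per prompt (computation is shared).
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        for _ in range(steps):
            idx = torch.randint(len(prompts), (batch_size,))
            batch = [prompts[i] for i in idx]
            text_emb = encode_text(batch)          # (B, text_dim) text embeddings
            scene_params = model(text_emb)         # (B, scene_dim) per-prompt 3D parameters
            images = render_views(scene_params)    # differentiable renders of random views
            loss = guidance_loss(images, batch)    # text-to-image guidance (SDS-style)
            opt.zero_grad()
            loss.backward()
            opt.step()

Because the model is conditioned on the text embedding, nearby prompts can share a single forward pass at inference time, and interpolating between two text embeddings yields the smooth asset transitions mentioned in the abstract.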

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2306.07349
Document Type:
Working Paper