DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models

Authors:
Cao, Yukang
Cao, Yan-Pei
Han, Kai
Shan, Ying
Wong, Kwan-Yee K.
Publication Year:
2023

Abstract

We present DreamAvatar, a text-and-shape guided framework for generating high-quality 3D human avatars with controllable poses. While recent methods have produced encouraging results on text-guided generation of common 3D objects, generating high-quality human avatars remains an open challenge due to the complexity of the human body's shape, pose, and appearance. DreamAvatar tackles this challenge with a trainable NeRF that predicts density and color features for 3D points, and a pre-trained text-to-image diffusion model that provides 2D self-supervision. Specifically, we leverage SMPL models to provide rough pose and shape guidance for the generation. We introduce a dual-space design comprising a canonical space and an observation space, related through the NeRF by a learnable deformation field; this allows well-optimized texture and geometry to be transferred from the canonical space to the target posed avatar. Additionally, we exploit a normal-consistency regularization to enable more vivid generation with detailed geometry and texture. Through extensive evaluations, we demonstrate that DreamAvatar significantly outperforms existing methods, establishing a new state of the art for text-and-shape guided 3D human generation.

Comment: 19 pages, 19 figures. Project page: https://yukangcao.github.io/DreamAvatar/
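The dual-space design described in the abstract can be summarized in code: points sampled in the observation (posed) space are warped into the canonical space, where a single NeRF holds the optimized geometry and texture. The sketch below is a hypothetical illustration, not the authors' implementation; DeformationField, CanonicalNeRF, and smpl_inverse_skinning are placeholder names, and the diffusion-based 2D supervision (e.g., a score-distillation loss on rendered views) is omitted.

```python
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Hypothetical learnable residual deformation on top of an SMPL warp."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x_obs: torch.Tensor) -> torch.Tensor:
        # Predict a small correction that refines the coarse SMPL warp.
        return self.mlp(x_obs)

class CanonicalNeRF(nn.Module):
    """Toy NeRF head: maps canonical 3D points to density and RGB."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        self.color_head = nn.Linear(hidden, 3)

    def forward(self, x_can: torch.Tensor):
        h = self.backbone(x_can)
        return self.density_head(h), torch.sigmoid(self.color_head(h))

def smpl_inverse_skinning(x_obs: torch.Tensor) -> torch.Tensor:
    # Placeholder for the SMPL-driven coarse warp from observation space to
    # the canonical (e.g., T-pose) space; a real system would apply skinning
    # weights and per-bone transforms from a fitted SMPL body.
    return x_obs

deform = DeformationField()
nerf = CanonicalNeRF()

x_obs = torch.rand(1024, 3)                            # points sampled along rays
x_can = smpl_inverse_skinning(x_obs) + deform(x_obs)   # observation -> canonical
density, color = nerf(x_can)                           # query shared canonical field
```

Because the NeRF is always queried in the canonical space, texture and geometry optimized there are shared across target poses; only the deformation changes with the pose.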

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2304.00916
Document Type:
Working Paper