
FAMOUS: High-Fidelity Monocular 3D Human Digitization Using View Synthesis

Authors:
Hema, Vishnu Mani
Aich, Shubhra
Haene, Christian
Bazin, Jean-Charles
de la Torre, Fernando
Publication Year:
2024

Abstract

Advances in deep implicit modeling and articulated models have significantly enhanced the process of digitizing human figures in 3D from just a single image. While state-of-the-art methods have greatly improved geometric precision, accurately inferring texture remains challenging, particularly in occluded areas such as the back of a person in frontal-view images. This limitation in texture prediction largely stems from the scarcity of large-scale, diverse 3D datasets, whereas their 2D counterparts are abundant and easily accessible. To address this issue, our paper proposes leveraging extensive 2D fashion datasets to enhance both texture and shape prediction in 3D human digitization. We incorporate 2D priors from the fashion dataset to learn the occluded back view, refined with our proposed domain alignment strategy. We then fuse this information with the input image to obtain a fully textured mesh of the given person. Through extensive experimentation on standard 3D human benchmarks, we demonstrate the superior performance of our approach in terms of both texture and geometry. Code and dataset are available at https://github.com/humansensinglab/FAMOUS.
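For intuition, the following is a minimal, hypothetical sketch of the two-stage pipeline the abstract describes: predict the occluded back view from the frontal image, then fuse the observed and predicted views into a full texture. The module `BackViewGenerator`, the function `fuse_views`, and all tensor shapes are illustrative assumptions made here, not the authors' released code; for the actual implementation, see the repository linked above.

```python
# Hypothetical sketch of the pipeline in the abstract:
# (1) hallucinate the back view from the frontal image (in the paper, this
#     stage is trained with 2D fashion-image priors and a domain alignment
#     strategy), then (2) fuse front and back views into a full texture.
# Module names and shapes are illustrative, not the authors' code.
import torch
import torch.nn as nn


class BackViewGenerator(nn.Module):
    """Stand-in for the image-to-image network that predicts the occluded
    back view; 2D fashion priors and domain alignment would enter here."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, front: torch.Tensor) -> torch.Tensor:
        # Flip horizontally so front and predicted back views are
        # roughly aligned in image space.
        return self.net(torch.flip(front, dims=[-1]))


def fuse_views(front: torch.Tensor, back: torch.Tensor,
               vis_mask: torch.Tensor) -> torch.Tensor:
    """Blend the observed front texture with the predicted back texture.
    vis_mask is 1 where the surface is visible in the input image."""
    return vis_mask * front + (1.0 - vis_mask) * back


if __name__ == "__main__":
    front = torch.rand(1, 3, 256, 256)            # input frontal image
    vis_mask = torch.ones(1, 1, 256, 256) * 0.5   # toy visibility mask
    back = BackViewGenerator()(front)             # predicted back view
    texture = fuse_views(front, back, vis_mask)   # fused full texture
    print(texture.shape)                          # torch.Size([1, 3, 256, 256])
```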

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2410.09690
Document Type:
Working Paper
Full Text:
https://doi.org/10.1007/978-3-031-73007-8_4