
MeshLRM: Large Reconstruction Model for High-Quality Mesh

Authors:
Wei, Xinyue
Zhang, Kai
Bi, Sai
Tan, Hao
Luan, Fujun
Deschaintre, Valentin
Sunkavalli, Kalyan
Su, Hao
Xu, Zexiang
Publication Year:
2024

Abstract

We propose MeshLRM, a novel LRM-based approach that can reconstruct a high-quality mesh from merely four input images in less than one second. Different from previous large reconstruction models (LRMs) that focus on NeRF-based reconstruction, MeshLRM incorporates differentiable mesh extraction and rendering within the LRM framework. This allows for end-to-end mesh reconstruction by fine-tuning a pre-trained NeRF LRM with mesh rendering. Moreover, we improve the LRM architecture by simplifying several complex designs in previous LRMs. MeshLRM's NeRF initialization is sequentially trained with low- and high-resolution images; this new LRM training strategy enables significantly faster convergence and thereby leads to better quality with less compute. Our approach achieves state-of-the-art mesh reconstruction from sparse-view inputs and also allows for many downstream applications, including text-to-3D and single-image-to-3D generation. Project page: https://sarahweiii.github.io/meshlrm/
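The abstract describes a staged training recipe: the NeRF-based LRM is first pre-trained on low-resolution renderings, then on high-resolution ones, and finally fine-tuned with differentiable mesh extraction and rendering. As a rough illustration of that schedule only, the sketch below wires up three such stages around a toy model; ToyLRM, render, make_batch, and train_stage are hypothetical placeholders and do not reflect the authors' architecture or renderers.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyLRM(nn.Module):
    """Stand-in for a transformer-based large reconstruction model."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(), nn.Linear(feat_dim, 3)
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # Placeholder "reconstruction": a per-pixel mapping stands in for
        # predicting a 3D representation from the sparse input views.
        return self.backbone(images)


def render(pred: torch.Tensor, res: int) -> torch.Tensor:
    # Placeholder differentiable renderer: resizing the prediction stands in
    # for NeRF volume rendering (stages 1-2) or rasterizing an extracted
    # mesh (stage 3).
    return F.interpolate(pred.permute(0, 3, 1, 2), size=(res, res),
                         mode="bilinear", align_corners=False)


def make_batch(res: int, views: int = 4) -> tuple[torch.Tensor, torch.Tensor]:
    # Four sparse input views plus ground-truth target renderings (random
    # tensors here, purely for illustration).
    inputs = torch.rand(views, res, res, 3)
    target = torch.rand(views, 3, res, res)
    return inputs, target


def train_stage(model: nn.Module, res: int, steps: int, lr: float = 1e-4) -> None:
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(steps):
        inputs, target = make_batch(res)
        loss = F.mse_loss(render(model(inputs), res), target)
        opt.zero_grad()
        loss.backward()
        opt.step()


model = ToyLRM()
train_stage(model, res=64, steps=10)    # stage 1: low-resolution NeRF pre-training
train_stage(model, res=256, steps=10)   # stage 2: high-resolution NeRF training
train_stage(model, res=256, steps=10)   # stage 3: mesh-rendering fine-tuning (placeholder loss)

The point of the sketch is only the sequencing: the same model weights are carried through progressively harder objectives, which is how the abstract frames the faster convergence of the low-to-high resolution schedule.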

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.12385
Document Type:
Working Paper