
Self-supervised Neural Articulated Shape and Appearance Models

Authors :
Wei, Fangyin
Chabra, Rohan
Ma, Lingni
Lassner, Christoph
Zollhöfer, Michael
Rusinkiewicz, Szymon
Sweeney, Chris
Newcombe, Richard
Slavcheva, Mira
Publication Year :
2022

Abstract

Learning geometry, motion, and appearance priors of object classes is important for solving a large variety of computer vision problems. While the majority of approaches have focused on static objects, dynamic objects, especially those with controllable articulation, remain less explored. We propose a novel approach for learning a representation of the geometry, appearance, and motion of a class of articulated objects given only a set of color images as input. In a self-supervised manner, our novel representation learns shape, appearance, and articulation codes that enable independent control of these semantic dimensions. Our model is trained end-to-end without requiring any articulation annotations. Experiments show that our approach performs well for different joint types, such as revolute and prismatic joints, as well as for combinations of these joints. Compared to the state of the art, which uses direct 3D supervision and does not output appearance, we recover more faithful geometry and appearance from 2D observations only. In addition, our representation enables a large variety of applications, such as few-shot reconstruction, the generation of novel articulations, and novel view synthesis.

Comment: 15 pages. CVPR 2022. Project page available at https://weify627.github.io/nasam/

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2205.08525
Document Type :
Working Paper