
MannequinChallenge: Learning the Depths of Moving People by Watching Frozen People.

Authors:
Li, Zhengqi
Dekel, Tali
Cole, Forrester
Tucker, Richard
Snavely, Noah
Liu, Ce
Freeman, William T.
Source:
IEEE Transactions on Pattern Analysis & Machine Intelligence, Dec. 2021, Vol. 43, Issue 12, pp. 4229-4241.
Publication Year:
2021

Abstract

We present a method for predicting dense depth in scenarios where both a monocular camera and people in the scene are freely moving. Existing methods for recovering depth for dynamic, non-rigid objects from monocular video impose strong assumptions on the objects’ motion and may only recover sparse depth. In this paper, we take a data-driven approach and learn human depth priors from a new source of data: thousands of Internet videos of people imitating mannequins, i.e., freezing in diverse, natural poses, while a hand-held camera tours the scene. Because people are stationary, geometric constraints hold, thus training data can be generated using multi-view stereo reconstruction. At inference time, our method uses motion parallax cues from the static areas of the scenes to guide the depth prediction. We evaluate our method on real-world sequences of complex human actions captured by a moving hand-held camera, show improvement over state-of-the-art monocular depth prediction methods, and demonstrate various 3D effects produced using our predicted depth.
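To make the setup described in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' released code) of a depth-prediction pipeline of this kind: a network receives an RGB frame together with motion-parallax cues from the static parts of the scene (an initial depth map valid only outside the human mask) and predicts dense depth, supervised by multi-view-stereo depth on MannequinChallenge frames. The class name DepthNet, the channel layout, the toy encoder, and the scale-invariant loss are illustrative assumptions, not details taken from the paper.

# Illustrative sketch only; architecture and loss are assumptions.
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Toy stand-in for a full depth-prediction network."""
    def __init__(self, in_channels: int = 6):
        super().__init__()
        # Input channels (assumed): 3 RGB + 1 parallax depth + 1 human mask + 1 confidence.
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, rgb, parallax_depth, human_mask, confidence):
        # rgb: (B,3,H,W); parallax_depth, human_mask, confidence: (B,1,H,W) float tensors.
        # parallax_depth carries motion-parallax cues from static regions only;
        # inside the human mask the network must rely on learned depth priors.
        x = torch.cat([rgb, parallax_depth, human_mask, confidence], dim=1)
        return self.net(x)  # predicted log-depth, (B,1,H,W)

def scale_invariant_loss(pred_log_depth, gt_depth, valid_mask, eps=1e-6):
    """Scale-invariant log loss (Eigen et al.-style, assumed here), computed on
    pixels where multi-view-stereo supervision exists; valid_mask is a 0/1 float."""
    d = (pred_log_depth - torch.log(gt_depth.clamp(min=eps))) * valid_mask
    n = valid_mask.sum().clamp(min=1.0)
    return d.pow(2).sum() / n - (d.sum() / n) ** 2

At training time, ground-truth depth in this sketch would come from multi-view stereo on videos where people hold still; at inference time, the parallax-derived depth channel is populated only in static regions, which is what lets the network use motion parallax to guide its prediction for the moving people.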

Details

Language:
English
ISSN:
0162-8828
Volume:
43
Issue:
12
Database:
Academic Search Index
Journal:
IEEE Transactions on Pattern Analysis & Machine Intelligence
Publication Type:
Academic Journal
Accession Number:
153710062
Full Text:
https://doi.org/10.1109/TPAMI.2020.2974454