TalkinNeRF: Animatable Neural Fields for Full-Body Talking Humans

Authors:
Chatziagapi, Aggelina
Chaudhuri, Bindita
Kumar, Amit
Ranjan, Rakesh
Samaras, Dimitris
Sarafianos, Nikolaos
Publication Year:
2024

Abstract

We introduce a novel framework that learns a dynamic neural radiance field (NeRF) for full-body talking humans from monocular videos. Prior work represents only the body pose or the face. However, humans communicate with their full body, combining body pose, hand gestures, and facial expressions. In this work, we propose TalkinNeRF, a unified NeRF-based network that represents holistic 4D human motion. Given a monocular video of a subject, we learn corresponding modules for the body, face, and hands, which are combined to generate the final result. To capture complex finger articulation, we learn an additional deformation field for the hands. Our multi-identity representation enables simultaneous training for multiple subjects, as well as robust animation under completely unseen poses. It can also generalize to novel identities, given only a short video as input. We demonstrate state-of-the-art performance for animating full-body talking humans, with fine-grained hand articulation and facial expressions.

Comment: Accepted by ECCVW 2024. Project page: https://aggelinacha.github.io/TalkinNeRF/
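The decomposition the abstract describes (per-part fields for body, face, and hands, an extra deformation field for finger articulation, and a multi-identity representation) can be pictured with a short sketch. The code below is not the authors' implementation: the module names, dimensions, conditioning inputs, and composition rule are all illustrative assumptions, written in PyTorch for concreteness.

```python
# Minimal sketch (assumptions throughout, NOT the TalkinNeRF code):
# three part-specific coordinate fields plus a hand deformation field
# and learned per-identity codes, loosely following the abstract.
import torch
import torch.nn as nn


class MLP(nn.Module):
    """Small coordinate MLP shared by all branches."""

    def __init__(self, in_dim, out_dim, hidden=128, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth - 1):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


class TalkinNeRFSketch(nn.Module):
    """Hypothetical per-part field composition.

    Each branch maps a 3D point, a per-part condition code
    (body pose / expression / hand pose), and a learned identity
    embedding to density and color. Hand queries are first warped
    by a deformation field, mirroring the abstract's extra
    deformation field for finger articulation.
    """

    def __init__(self, cond_dim=32, id_dim=16, num_ids=10):
        super().__init__()
        in_dim = 3 + cond_dim + id_dim       # point + condition + identity code
        self.body_field = MLP(in_dim, 4)     # outputs (sigma, r, g, b)
        self.face_field = MLP(in_dim, 4)
        self.hand_deform = MLP(in_dim, 3)    # 3D offset for finger articulation
        self.hand_field = MLP(in_dim, 4)
        # one embedding per subject -> simultaneous multi-identity training
        self.id_codes = nn.Embedding(num_ids, id_dim)

    def forward(self, x, cond, id_idx, part="body"):
        z = self.id_codes(id_idx).expand(x.shape[0], -1)
        h = torch.cat([x, cond, z], dim=-1)
        if part == "hand":
            x = x + self.hand_deform(h)      # warp into a canonical hand space
            h = torch.cat([x, cond, z], dim=-1)
            out = self.hand_field(h)
        elif part == "face":
            out = self.face_field(h)
        else:
            out = self.body_field(h)
        sigma, rgb = out[..., :1], torch.sigmoid(out[..., 1:])
        return sigma, rgb


# Example: query 1024 sample points on the hand branch of identity 0.
model = TalkinNeRFSketch()
x, cond = torch.rand(1024, 3), torch.rand(1024, 32)
sigma, rgb = model(x, cond, torch.tensor(0), part="hand")
print(sigma.shape, rgb.shape)  # torch.Size([1024, 1]) torch.Size([1024, 3])
```

In a full pipeline, the per-part densities and colors would be volume-rendered and blended into a single image; this sketch only illustrates how query points might be routed through part-specific fields and shared identity codes.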

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.16666
Document Type:
Working Paper