
Multi-task Deep Learning for Real-Time 3D Human Pose Estimation and Action Recognition

Authors :
Diogo C. Luvizon
Hedi Tabia
David Picard

Affiliations :
Équipes Traitement de l'Information et Systèmes (ETIS - UMR 8051), CY Cergy Paris Université (CY) - Ecole Nationale Supérieure de l'Electronique et de ses Applications (ENSEA) - Centre National de la Recherche Scientifique (CNRS)
IMAGINE [Marne-la-Vallée], Laboratoire d'Informatique Gaspard-Monge (LIGM), École des Ponts ParisTech (ENPC) - Centre National de la Recherche Scientifique (CNRS) - Université Gustave Eiffel
Informatique, BioInformatique, Systèmes Complexes (IBISC), Université d'Évry-Val-d'Essonne (UEVE) - Université Paris-Saclay
Brazilian National Council for Scientific and Technological Development
Source :
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43 (8), pp. 2752-2764. ⟨10.1109/TPAMI.2020.2976014⟩
Publication Year :
2020
Publisher :
Institute of Electrical and Electronics Engineers (IEEE), 2020.

Abstract

Human pose estimation and action recognition are related tasks, since both problems depend strongly on representing and analyzing the human body. Nonetheless, most recent methods in the literature handle the two problems separately. In this work, we propose a multi-task framework for jointly estimating 2D or 3D human poses from monocular color images and classifying human actions from video sequences. We show that a single architecture can solve both problems efficiently, achieving state-of-the-art or comparable results at each task while running at more than 100 frames per second. The proposed method benefits from a high degree of parameter sharing between the two tasks by unifying the processing of still images and video clips in a single pipeline, allowing the model to be trained seamlessly and simultaneously with data from different categories. Additionally, we provide important insights for end-to-end training of the proposed multi-task model by decoupling key prediction parts, which consistently leads to better accuracy on both tasks. The reported results on four datasets (MPII, Human3.6M, Penn Action and NTU RGB+D) demonstrate the effectiveness of our method on the targeted tasks. Our source code and trained weights are publicly available at https://github.com/dluvizon/deephar.

Comment: Accepted to TPAMI. arXiv admin note: text overlap with arXiv:1802.09232
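The authors' actual architecture is available in the repository linked above. As a rough, non-authoritative illustration of the shared-backbone idea the abstract describes — one trunk feeding a per-frame pose head and a clip-level action head — a minimal NumPy sketch might look like the following. All dimensions, layer choices, and names here (feature size 128, 16 joints, 10 action classes) are placeholders for illustration, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared trunk: a single linear layer standing in for the CNN backbone.
# Dimensions are hypothetical.
W_shared = rng.standard_normal((128, 64)) * 0.01
W_pose   = rng.standard_normal((64, 16 * 3)) * 0.01  # 16 joints x (x, y, z)
W_action = rng.standard_normal((64, 10)) * 0.01      # 10 action classes

def forward(frame_features):
    """Run shared features through both task heads.

    frame_features: (num_frames, 128) array of per-frame backbone features.
    Returns a per-frame 3D pose and a clip-level action distribution.
    """
    h = np.maximum(frame_features @ W_shared, 0.0)   # ReLU trunk features
    pose = (h @ W_pose).reshape(-1, 16, 3)           # pose head: per frame
    # Action head pools frame features over time before classifying.
    logits = h.mean(axis=0) @ W_action
    action_probs = np.exp(logits - logits.max())
    action_probs /= action_probs.sum()               # softmax, sums to 1
    return pose, action_probs

clip = rng.standard_normal((8, 128))                 # 8 frames of features
pose, action_probs = forward(clip)
print(pose.shape)          # (8, 16, 3)
print(action_probs.shape)  # (10,)
```

Because both heads consume the same trunk output `h`, a gradient-based training step on a joint loss (e.g. a weighted sum of pose and action losses) would update the shared weights from both supervision signals at once — the parameter-sharing benefit the abstract refers to.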

Details

ISSN :
1939-3539 (electronic) and 0162-8828 (print)
Database :
OpenAIRE
Journal :
IEEE Transactions on Pattern Analysis and Machine Intelligence
Accession number :
edsair.doi.dedup.....4bd52582f23847ef77798f6970561454
Full Text :
https://doi.org/10.1109/tpami.2020.2976014