Depth Sequential Information Entropy Maps and Multi-Label Subspace Learning for Human Action Recognition
- Authors
Jiuzhen Liang, Xin Chao, Yuwan Gu, Tianjin Yang, and Zhenjie Hou
- Subjects
Sequence, General Computer Science, Computer science, General Engineering, Pattern recognition, Image stitching, Redundancy (information theory), multi-label subspace learning, Feature (machine learning), General Materials Science, Artificial intelligence, Spatial analysis, depth sequential information entropy maps, Subspace topology, Multimodal feature
- Abstract
Human action recognition plays a key role in human-computer interaction in complex environments. However, similar actions can yield poorly discriminative feature sequences and thus reduce recognition accuracy. This paper proposes an action-fusion method, Multi-Label Subspace Learning (MLSL), that combines Depth Sequential Information Entropy Maps (DSIEM) computed from depth maps with skeleton data for human action recognition over multiple modal features. DSIEM describe the spatial information of human motion with information entropy and encode temporal information by stitching the entropy maps of successive segments. DSIEM reduce the redundancy of depth sequences and effectively capture spatial motion states. MLSL models both the relationships between modalities and the inherent connections between labels. The method is evaluated on three public datasets: the Microsoft Action 3D dataset (MSR Action3D), the University of Texas at Dallas multimodal human action dataset (UTD-MHAD), and UTD-MHAD Kinect Version 2 (UTD-MHAD-Kinect V2). Experimental results show that the proposed MLSL model obtains new state-of-the-art results, achieving average recognition rates of 93.55% on MSR Action3D, 88.37% on UTD-MHAD, and 90.66% on UTD-MHAD-Kinect V2.
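To make the DSIEM idea concrete, below is a minimal sketch of the construction the abstract describes: per-pixel Shannon entropy over a depth sub-sequence summarizes how each pixel's depth varies over time (spatial information), and entropy maps from successive temporal segments are stitched side by side (temporal information). The abstract does not give the paper's exact formulation, so the histogram binning, segment count, stitching layout, and the names `entropy_map` / `dsiem` are all assumptions for illustration.

```python
import numpy as np

def entropy_map(depth_seq, num_bins=16):
    """Per-pixel Shannon entropy over a depth sub-sequence.

    depth_seq: (T, H, W) array of depth frames (hypothetical input layout).
    num_bins: histogram bins for quantizing depth values (assumed parameter).
    """
    T = depth_seq.shape[0]
    # Normalize depths to [0, 1) so histogram bins are comparable across pixels.
    lo, hi = depth_seq.min(), depth_seq.max()
    norm = (depth_seq - lo) / max(hi - lo, 1e-8)
    bins = np.minimum((norm * num_bins).astype(int), num_bins - 1)  # (T, H, W)
    ent = np.zeros(depth_seq.shape[1:])
    for b in range(num_bins):
        p = (bins == b).sum(axis=0) / T        # P(bin b) per pixel over time
        p_safe = np.where(p > 0, p, 1.0)       # avoid log2(0); those terms are 0
        ent -= p * np.log2(p_safe)
    return ent

def dsiem(depth_seq, num_segments=4):
    """Split the sequence into temporal segments and stitch their entropy
    maps horizontally, keeping both spatial entropy and temporal order."""
    segments = np.array_split(depth_seq, num_segments, axis=0)
    return np.hstack([entropy_map(seg) for seg in segments])

# Usage: a random 60-frame, 240x320 depth clip -> one 240x1280 DSIEM image.
clip = np.random.rand(60, 240, 320).astype(np.float32)
print(dsiem(clip).shape)  # (240, 1280)
```

Because each segment's entropy map collapses many depth frames into a single image, the stitched result is far more compact than the raw sequence, which is consistent with the abstract's claim that DSIEM reduce the redundancy of depth sequences.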
- Published
- 2020