
Feature learning based on SAE–PCA network for human gesture recognition in RGBD images.

Authors :
Li, Shao-Zi
Yu, Bin
Wu, Wei
Su, Song-Zhi
Ji, Rong-Rong
Source :
Neurocomputing. Mar 2015, Part 2, Vol. 151, p565-573. 9p.
Publication Year :
2015

Abstract

With the emergence of depth sensors like Microsoft Kinect, human hand gesture recognition has received increasing research interest. A successful gesture recognition system usually relies heavily on a good feature representation of the data, which is expected to be task-dependent and to cope with the challenges and opportunities introduced by the depth sensor. In this paper, a feature learning approach based on sparse auto-encoder (SAE) and principal component analysis (PCA) is proposed for recognizing human actions, i.e. finger-spelling or sign language, from RGB-D inputs. The proposed feature learning model consists of two components: first, features are learned separately from the RGB and depth channels using a sparse auto-encoder with convolutional neural networks; second, the learned features from both channels are concatenated and fed into a multi-layer PCA to obtain the final feature. Experimental results on an American Sign Language (ASL) dataset demonstrate that the proposed feature learning model is highly effective, improving the recognition rate from 75% to 99.05% and outperforming the state-of-the-art. [ABSTRACT FROM AUTHOR]
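The abstract describes a two-stage pipeline: channel-wise convolutional feature extraction with SAE-learned filters, followed by concatenation and PCA. The sketch below is not the authors' code; it only illustrates that pipeline shape on synthetic data. The patch size, filter count, pooling, and the use of principal directions as a stand-in for trained SAE weights are all assumptions for illustration.

```python
# Minimal sketch of an SAE-PCA style feature pipeline (not the paper's implementation).
# All sizes and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def extract_patches(images, patch_size=8, n_patches=2000):
    """Sample random square patches from a stack of single-channel images."""
    n, h, w = images.shape
    patches = np.empty((n_patches, patch_size * patch_size))
    for i in range(n_patches):
        img = images[rng.integers(n)]
        y = rng.integers(h - patch_size + 1)
        x = rng.integers(w - patch_size + 1)
        patches[i] = img[y:y + patch_size, x:x + patch_size].ravel()
    return patches

def learn_filters(patches, n_hidden=64):
    """Stand-in for sparse auto-encoder training: center the patches and keep
    the top principal directions as filters. A real SAE would instead learn
    these weights by minimizing reconstruction error with a sparsity penalty."""
    patches = patches - patches.mean(axis=0)
    return PCA(n_components=n_hidden).fit(patches).components_

def convolve_and_pool(image, filters, patch_size=8, stride=4, pool=2):
    """Apply the filters over the image and average-pool the feature maps,
    mimicking a convolutional feature extraction step."""
    h, w = image.shape
    ys = range(0, h - patch_size + 1, stride)
    xs = range(0, w - patch_size + 1, stride)
    fmap = np.array([[filters @ image[y:y + patch_size, x:x + patch_size].ravel()
                      for x in xs] for y in ys])          # (H', W', n_filters)
    fmap = np.maximum(fmap, 0)                            # simple nonlinearity
    hp, wp = fmap.shape[0] // pool, fmap.shape[1] // pool  # average pooling
    fmap = fmap[:hp * pool, :wp * pool].reshape(hp, pool, wp, pool, -1).mean(axis=(1, 3))
    return fmap.ravel()

# Synthetic stand-ins for RGB-intensity and depth frames of hand gestures.
rgb = rng.random((20, 64, 64))
depth = rng.random((20, 64, 64))

# Step 1: learn filters separately for each channel (SAE stand-in).
f_rgb = learn_filters(extract_patches(rgb))
f_depth = learn_filters(extract_patches(depth))

# Step 2: channel-wise convolutional feature extraction.
feat_rgb = np.array([convolve_and_pool(img, f_rgb) for img in rgb])
feat_depth = np.array([convolve_and_pool(img, f_depth) for img in depth])

# Step 3: concatenate both channels and reduce with PCA to get the final feature.
fused = np.hstack([feat_rgb, feat_depth])
final = PCA(n_components=10).fit_transform(fused)
print(final.shape)  # (20, 10)
```

The resulting low-dimensional vectors would then feed a classifier for gesture labels; the paper's multi-layer PCA and SAE training details are in the full text linked below.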

Details

Language :
English
ISSN :
09252312
Volume :
151
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
99827632
Full Text :
https://doi.org/10.1016/j.neucom.2014.06.086