Attention-Based 3D-CNNs for Large-Vocabulary Sign Language Recognition.
- Source :
- IEEE Transactions on Circuits & Systems for Video Technology. Sep 2019, Vol. 29, Issue 9, p2822-2832. 11p.
- Publication Year :
- 2019
Abstract
- Sign language recognition (SLR) is an important and challenging research topic in the multimedia field. Conventional techniques for SLR rely on hand-crafted features and achieve only limited success. In this paper, we present attention-based 3D convolutional neural networks (3D-CNNs) for SLR. The framework has two advantages: the 3D-CNNs learn spatio-temporal features from raw video without prior knowledge, and the attention mechanism helps to select the relevant clues. When training the 3D-CNN to capture spatio-temporal features, spatial attention is incorporated into the network to focus on the areas of interest. After feature extraction, temporal attention is utilized to select the significant motions for classification. The proposed method is evaluated on two large-scale sign language data sets. The first, collected by ourselves, is a Chinese sign language data set consisting of 500 categories. The other is the ChaLearn14 benchmark. The experimental results demonstrate the effectiveness of our approach compared with state-of-the-art algorithms. [ABSTRACT FROM AUTHOR]
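The temporal-attention step in the abstract — weighting the extracted per-clip features so that significant motions dominate classification — can be sketched as a softmax-weighted pooling. This is a minimal illustration of the general idea, not the authors' implementation; the function names and toy feature vectors below are hypothetical.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scalar scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def temporal_attention_pool(clip_features, scores):
    """Pool per-clip 3D-CNN feature vectors with attention weights.

    clip_features: list of T feature vectors (one per video clip).
    scores: list of T scalar relevance scores (e.g. from a learned layer).
    Returns the attention-weighted sum of the features and the weights.
    """
    weights = softmax(scores)
    dim = len(clip_features[0])
    pooled = [sum(w * f[d] for w, f in zip(weights, clip_features))
              for d in range(dim)]
    return pooled, weights

# Toy example: 3 clips with 2-D features; the third clip scores highest,
# so it contributes most to the pooled representation.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
scores = [0.1, 0.1, 2.0]
pooled, weights = temporal_attention_pool(feats, scores)
```

In the paper's setting the scores would be produced by a learned attention layer over the 3D-CNN features; here they are fixed constants purely to show the pooling mechanics.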
Details
- Language :
- English
- ISSN :
- 1051-8215
- Volume :
- 29
- Issue :
- 9
- Database :
- Academic Search Index
- Journal :
- IEEE Transactions on Circuits & Systems for Video Technology
- Publication Type :
- Academic Journal
- Accession number :
- 138481326
- Full Text :
- https://doi.org/10.1109/TCSVT.2018.2870740