Event-Stream Representation for Human Gaits Identification Using Deep Neural Networks.
- Source :
- IEEE Transactions on Pattern Analysis & Machine Intelligence; Jul2022, Vol. 44 Issue 7, p3436-3449, 14p
- Publication Year :
- 2022
Abstract
- Dynamic vision sensors (event cameras) have recently been introduced to solve a number of different vision tasks such as object recognition, activity recognition, tracking, etc. Compared with traditional RGB sensors, event cameras have many unique advantages such as ultra-low resource consumption, high temporal resolution, and a much larger dynamic range. However, these cameras only produce noisy and asynchronous events of intensity changes, i.e., event-streams rather than frames, to which conventional computer vision algorithms cannot be directly applied. In our opinion, the key challenge for improving the performance of event cameras in vision tasks is finding appropriate representations of the event-streams so that cutting-edge learning approaches can be applied to fully uncover the spatio-temporal information contained in the event-streams. In this paper, we focus on the event-based human gait identification task and investigate possible representations of the event-streams when deep neural networks are applied as the classifier. We propose new event-based gait recognition approaches based on two different representations of the event-stream, i.e., graph and image-like representations, and use a graph convolutional network (GCN) and a convolutional neural network (CNN), respectively, to recognize gait from the event-streams. The two approaches are termed EV-Gait-3DGraph and EV-Gait-IMG. To evaluate the performance of the proposed approaches, we collect two event-based gait datasets, one from real-world experiments and the other by converting the publicly available RGB gait recognition benchmark CASIA-B. Extensive experiments show that EV-Gait-3DGraph achieves significantly higher recognition accuracy than other competing methods when sufficient training samples are available. However, EV-Gait-IMG converges more quickly than graph-based approaches during training and shows good accuracy with only a few training samples (fewer than ten). The image-like representation is therefore preferable when the amount of training data is limited. [ABSTRACT FROM AUTHOR]
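- To illustrate what an image-like representation of an event-stream might look like, the sketch below accumulates event counts per pixel into a two-channel (ON/OFF polarity) image that a CNN could consume. This is a minimal, hypothetical sketch assuming events are given as (x, y, t, polarity) tuples; the actual channels and encoding used by EV-Gait-IMG are not specified in this abstract.

  ```python
  import numpy as np

  def events_to_image(events, height=128, width=128):
      """Hypothetical sketch: convert an asynchronous event-stream into an
      image-like tensor by counting events per pixel and polarity.

      `events` is assumed to be an (N, 4) array of (x, y, t, polarity)
      with polarity in {-1, +1}; the exact encoding used by EV-Gait-IMG
      may differ from this simple count-based scheme.
      """
      img = np.zeros((2, height, width), dtype=np.float32)
      for x, y, t, p in events:
          c = 0 if p > 0 else 1          # channel 0: ON events, channel 1: OFF events
          img[c, int(y), int(x)] += 1.0  # accumulate event counts per pixel
      # Normalise each channel so a CNN sees inputs in a comparable range.
      for c in range(2):
          peak = img[c].max()
          if peak > 0:
              img[c] /= peak
      return img
  ```

  A graph representation would instead treat (sub-sampled) events as nodes with spatio-temporal coordinates and connect nearby events by edges, which is what a GCN-based pipeline such as EV-Gait-3DGraph would operate on.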
Details
- Language :
- English
- ISSN :
- 0162-8828
- Volume :
- 44
- Issue :
- 7
- Database :
- Complementary Index
- Journal :
- IEEE Transactions on Pattern Analysis & Machine Intelligence
- Publication Type :
- Academic Journal
- Accession number :
- 157258412
- Full Text :
- https://doi.org/10.1109/TPAMI.2021.3054886