Rolling-Unrolling LSTMs for Action Anticipation from First-Person Video.

Authors :
Furnari, Antonino
Farinella, Giovanni Maria
Source :
IEEE Transactions on Pattern Analysis & Machine Intelligence. Nov 2021, Vol. 43 Issue 11, p4021-4036. 16p.
Publication Year :
2021

Abstract

In this paper, we tackle the problem of egocentric action anticipation, i.e., predicting which actions the camera wearer will perform in the near future and which objects they will interact with. Specifically, we contribute Rolling-Unrolling LSTM, a learning architecture to anticipate actions from egocentric videos. The method is based on three components: 1) an architecture comprising two LSTMs to model the sub-tasks of summarizing the past and inferring the future, 2) a Sequence Completion Pre-Training technique which encourages the LSTMs to focus on their respective sub-tasks, and 3) a Modality ATTention (MATT) mechanism to efficiently fuse multi-modal predictions obtained by processing RGB frames, optical flow fields, and object-based features. The proposed approach is validated on EPIC-Kitchens, EGTEA Gaze+, and ActivityNet. The experiments show that the proposed architecture is state-of-the-art in the domain of egocentric videos, achieving top performance in the 2019 EPIC-Kitchens egocentric action anticipation challenge. The approach also achieves competitive performance on ActivityNet with respect to methods not based on unsupervised pre-training and generalizes to the tasks of early action recognition and action recognition. To encourage research on this challenging topic, we have made our code, trained models, and pre-extracted features available at our web page: http://iplab.dmi.unict.it/rulstm.
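
The two-LSTM design and the MATT fusion described in the abstract can be pictured with the minimal PyTorch sketch below. All layer sizes, class names, and the choice to compute attention weights from the concatenated branch hidden states are illustrative assumptions, not the authors' released implementation (available at the URL above).

import torch
import torch.nn as nn

class RUBranch(nn.Module):
    # One modality branch: a "rolling" LSTM summarizes the observed
    # features, then an "unrolling" LSTM, seeded with that summary,
    # is stepped into the future to anticipate the upcoming action.
    def __init__(self, feat_dim, hidden_dim, num_classes):
        super().__init__()
        self.rolling = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.unrolling = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, feats, n_unroll):
        # feats: (batch, time, feat_dim) features of the observed video.
        _, state = self.rolling(feats)        # summarize the past
        last = feats[:, -1:, :]               # re-feed the last observation
        for _ in range(n_unroll):             # infer the future
            out, state = self.unrolling(last, state)
        hidden = out[:, -1]                   # final unrolled hidden state
        return self.classifier(hidden), hidden

class MATT(nn.Module):
    # Modality attention: scores each branch from the concatenation of
    # the branch hidden states, then returns an attention-weighted sum
    # of the per-modality predictions.
    def __init__(self, hidden_dim, n_modalities):
        super().__init__()
        self.score = nn.Linear(hidden_dim * n_modalities, n_modalities)

    def forward(self, preds, hiddens):
        # preds / hiddens: one (batch, num_classes) / (batch, hidden_dim)
        # tensor per modality (e.g., RGB, optical flow, objects).
        weights = torch.softmax(self.score(torch.cat(hiddens, dim=1)), dim=1)
        stacked = torch.stack(preds, dim=1)   # (batch, n_mod, num_classes)
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)

With hypothetical branches for RGB, flow, and objects, the fused anticipation score would be MATT(...)([p_rgb, p_flow, p_obj], [h_rgb, h_flow, h_obj]); each branch would be trained with Sequence Completion Pre-Training before joint fine-tuning, per the abstract.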

Details

Language :
English
ISSN :
0162-8828
Volume :
43
Issue :
11
Database :
Academic Search Index
Journal :
IEEE Transactions on Pattern Analysis & Machine Intelligence
Publication Type :
Academic Journal
Accession number :
153710039
Full Text :
https://doi.org/10.1109/TPAMI.2020.2992889