
Self-Regulated Learning for Egocentric Video Activity Anticipation

Authors:
Qi, Zhaobo
Wang, Shuhui
Su, Chi
Su, Li
Huang, Qingming
Tian, Qi
Publication Year:
2021

Abstract

Future activity anticipation is a challenging problem in egocentric vision. As a standard paradigm for future activity anticipation, recursive sequence prediction suffers from the accumulation of errors. To address this problem, we propose a simple and effective Self-Regulated Learning (SRL) framework, which regulates the intermediate representation consecutively to produce a representation that (a) emphasizes the novel information in the frame at the current time-stamp, in contrast to previously observed content, and (b) reflects its correlation with previously observed frames. The former is achieved by minimizing a contrastive loss, and the latter by a dynamic reweighting mechanism that attends to informative frames in the observed content, using a similarity comparison between the feature of the current frame and those of the observed frames. The learned final video representation is further enhanced by multi-task learning, which performs joint feature learning on the target activity labels and the automatically detected action and object class tokens. SRL sharply outperforms the existing state of the art in most cases on two egocentric video datasets and two third-person video datasets. Its effectiveness is further verified by the experimental observation that the action and object concepts supporting the activity semantics can be accurately identified.
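To make the abstract's two regulation signals concrete, here is a minimal PyTorch-style sketch, not the authors' released implementation: a similarity-based reweighting over observed frames and a contrastive-style redundancy penalty, followed by placeholder multi-task heads for the joint activity/action/object objective. All tensor shapes, names, and the exact loss forms are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def regulate(current_feat, observed_feats, temperature=0.1):
    """One self-regulation step (hypothetical shapes).

    current_feat:   (D,)   feature of the frame at the current time-stamp
    observed_feats: (T, D) features of the T previously observed frames
    """
    cur = F.normalize(current_feat, dim=-1)
    obs = F.normalize(observed_feats, dim=-1)

    # (b) dynamic reweighting: attend to informative observed frames via
    # similarity between the current feature and each observed feature.
    weights = torch.softmax(obs @ cur / temperature, dim=0)  # (T,)
    context = weights @ observed_feats                       # (D,)

    # (a) contrastive-style regularizer: penalize redundancy between the
    # current frame and the attended history, so the representation
    # emphasizes novel content (the paper's exact loss may differ).
    redundancy = F.cosine_similarity(current_feat, context, dim=0)

    return context, redundancy

class MultiTaskHeads(nn.Module):
    """Joint prediction of activity, action, and object classes from the
    final video representation; a stand-in for the paper's multi-task
    feature learning. Class counts are placeholders."""

    def __init__(self, dim, n_activity, n_action, n_object):
        super().__init__()
        self.activity = nn.Linear(dim, n_activity)
        self.action = nn.Linear(dim, n_action)
        self.object = nn.Linear(dim, n_object)

    def loss(self, feat, y_activity, y_action, y_object):
        # feat: (B, dim); labels: (B,) each. Equal task weights assumed.
        return (F.cross_entropy(self.activity(feat), y_activity)
                + F.cross_entropy(self.action(feat), y_action)
                + F.cross_entropy(self.object(feat), y_object))

In the paper these signals operate on intermediate representations within a recursive prediction pipeline; the sketch abstracts that recursion away and shows only a single step.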

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2111.11631
Document Type:
Working Paper
Full Text:
https://doi.org/10.1109/TPAMI.2021.3059923