
Action2Vec: A Crossmodal Embedding Approach to Action Learning

Authors:
Hahn, Meera
Silva, Andrew
Rehg, James M.
Publication Year:
2019

Abstract

We describe a novel cross-modal embedding space for actions, named Action2Vec, which combines linguistic cues from class labels with spatio-temporal features derived from video clips. Our approach uses a hierarchical recurrent network to capture the temporal structure of video features. We train our embedding using a joint loss that combines classification accuracy with similarity to Word2Vec semantics. We evaluate Action2Vec by performing zero-shot action recognition and obtain state-of-the-art results on three standard datasets. In addition, we present two novel analogy tests which quantify the extent to which our joint embedding captures distributional semantics. This is the first joint embedding space to combine verbs and action videos, and the first to be thoroughly evaluated with respect to its distributional semantics.
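The abstract outlines the core recipe: a hierarchical recurrent encoder maps video features into the same space as Word2Vec verb embeddings, trained with a joint classification-plus-similarity loss. The sketch below illustrates that idea in PyTorch under stated assumptions; the two-level LSTM split, layer sizes, chunk length, and the loss weight alpha are illustrative choices, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Action2VecSketch(nn.Module):
    """Minimal sketch of the Action2Vec idea from the abstract.
    Assumptions: 2048-d per-frame features, a two-level LSTM
    hierarchy, and a 300-d Word2Vec target space."""
    def __init__(self, feat_dim=2048, hidden_dim=512, embed_dim=300, num_classes=101):
        super().__init__()
        # Level 1: summarize short fixed-length chunks of frame features.
        self.frame_lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Level 2: encode the sequence of chunk summaries.
        self.clip_lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # Project the clip representation into the word-embedding space.
        self.proj = nn.Linear(hidden_dim, embed_dim)
        # Linear head for the classification term of the joint loss.
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, frames, chunk_size=16):
        # frames: (batch, time, feat_dim); drop the ragged tail, then chunk.
        b, t, d = frames.shape
        chunks = frames[:, : (t // chunk_size) * chunk_size].reshape(b, -1, chunk_size, d)
        summaries = []
        for i in range(chunks.shape[1]):
            _, (h, _) = self.frame_lstm(chunks[:, i])   # final hidden state per chunk
            summaries.append(h[-1])
        _, (h, _) = self.clip_lstm(torch.stack(summaries, dim=1))
        embedding = self.proj(h[-1])          # video embedding in word space
        logits = self.classifier(embedding)   # logits for the classification term
        return embedding, logits

def joint_loss(embedding, logits, labels, word_vectors, alpha=0.5):
    """Cross-entropy plus cosine distance to the Word2Vec label vector.
    The weighting alpha is an assumption, not the paper's value."""
    ce = F.cross_entropy(logits, labels)
    targets = word_vectors[labels]            # (batch, embed_dim) verb embeddings
    dist = 1.0 - F.cosine_similarity(embedding, targets).mean()
    return alpha * ce + (1.0 - alpha) * dist
```

At test time, zero-shot recognition in this setup would score a video embedding against the Word2Vec vectors of unseen class labels by cosine similarity and pick the nearest verb, which is the standard way such a joint space is used for classes with no training clips.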

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1901.00484
Document Type:
Working Paper