
TA2N: Two-Stage Action Alignment Network for Few-shot Action Recognition

Authors :
Shuyuan Li
Huabin Liu
Rui Qian
Yuxi Li
John See
Mengjuan Fei
Xiaoyuan Yu
Weiyao Lin
Publication Year :
2021

Abstract

Few-shot action recognition aims to recognize novel action classes (query) using just a few samples (support). The majority of current approaches follow the metric learning paradigm, which learns to compare the similarity between videos. Recently, it has been observed that directly measuring this similarity is not ideal, since different action instances may show distinctive temporal distributions, resulting in severe misalignment issues across query and support videos. In this paper, we arrest this problem from two distinct aspects -- action duration misalignment and action evolution misalignment. We address them sequentially through a Two-stage Action Alignment Network (TA2N). The first stage locates the action by learning a temporal affine transform, which warps each video feature to its action duration while dismissing action-irrelevant features (e.g. background). Next, the second stage coordinates the query feature to match the spatial-temporal action evolution of the support by performing temporal rearrangement and spatial offset prediction. Extensive experiments on benchmark datasets show the potential of the proposed method in achieving state-of-the-art performance for few-shot action recognition. The code of this project can be found at https://github.com/R00Kie-Liu/TA2N

Published in AAAI 2022
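
To make the first-stage idea more concrete, below is a minimal PyTorch sketch of a temporal affine warp: a small head predicts a per-video (scale, shift) pair and the frame features are resampled along the temporal axis accordingly, so the sampled window concentrates on the action duration. The module name, layer sizes, and parameter squashing are illustrative assumptions, not the authors' implementation; refer to the linked repository for the actual TA2N code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalAffineWarp(nn.Module):
    """Illustrative sketch of a stage-1 temporal affine transform.

    A lightweight head predicts (scale, shift) from temporally pooled
    features, then frame features are resampled along time with that
    affine mapping. Hyperparameters here are assumptions for the sketch.
    """

    def __init__(self, feat_dim: int):
        super().__init__()
        # Predict two scalars (scale, shift) per video.
        self.param_head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 2),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim // 2, 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, feat_dim) per-frame features
        b, t, c = x.shape
        scale_shift = self.param_head(x.mean(dim=1))           # (b, 2)
        scale = torch.sigmoid(scale_shift[:, 0:1]) + 0.5       # scale in (0.5, 1.5)
        shift = torch.tanh(scale_shift[:, 1:2])                # shift in (-1, 1)

        # Warped temporal sampling positions in normalized coords [-1, 1].
        base = torch.linspace(-1, 1, t, device=x.device).unsqueeze(0)  # (1, T)
        grid_t = base * scale + shift                                  # (b, T)

        # Resample along time via grid_sample (time treated as the width axis).
        feat = x.permute(0, 2, 1).unsqueeze(2)                  # (b, C, 1, T)
        grid = torch.stack(
            [grid_t, torch.zeros_like(grid_t)], dim=-1          # y coord fixed at 0
        ).unsqueeze(1)                                          # (b, 1, T, 2)
        warped = F.grid_sample(feat, grid, align_corners=True)  # (b, C, 1, T)
        return warped.squeeze(2).permute(0, 2, 1)               # (b, T, C)


if __name__ == "__main__":
    warp = TemporalAffineWarp(feat_dim=256)
    frames = torch.randn(4, 8, 256)   # 4 videos, 8 frames, 256-d features
    print(warp(frames).shape)         # torch.Size([4, 8, 256])
```

In this sketch the warp is differentiable end-to-end, so the (scale, shift) head can be trained jointly with the downstream few-shot matching objective; the second-stage evolution alignment (temporal rearrangement and spatial offset prediction) would operate on the warped features produced here.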

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....188d041b2619d10f8ae4a7a96119fb1e