CLIP-guided Prototype Modulating for Few-shot Action Recognition
- Source :
- International Journal of Computer Vision; Jun2024, Vol. 132 Issue 6, p1899-1912, 14p
- Publication Year :
- 2024
Abstract
- Learning from large-scale contrastive language-image pre-training such as CLIP has recently shown remarkable success in a wide range of downstream tasks, but it remains under-explored for the challenging few-shot action recognition (FSAR) task. In this work, we aim to transfer the powerful multimodal knowledge of CLIP to alleviate the inaccurate prototype estimation caused by data scarcity, a critical problem in low-shot regimes. To this end, we present a CLIP-guided prototype modulating framework called CLIP-FSAR, which consists of two key components: a video-text contrastive objective and a prototype modulation module. Specifically, the former bridges the task discrepancy between CLIP and the few-shot video task by contrasting videos with their corresponding class text descriptions. The latter leverages the transferable textual concepts from CLIP to adaptively refine visual prototypes with a temporal Transformer. In this way, CLIP-FSAR can take full advantage of the rich semantic priors in CLIP to obtain reliable prototypes and achieve accurate few-shot classification. Extensive experiments on five commonly used benchmarks demonstrate the effectiveness of our method; CLIP-FSAR significantly outperforms existing state-of-the-art approaches under various settings. The source code and models are publicly available at https://github.com/alibaba-mmai-research/CLIP-FSAR. [ABSTRACT FROM AUTHOR]
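To make the mechanism described in the abstract concrete, the snippet below is a minimal PyTorch sketch of the prototype-modulation idea: a CLIP text embedding for each class name is prepended to that class's per-frame visual features, the sequence is fused by a temporal Transformer, and the pooled output serves as the class prototype against which queries are matched by cosine similarity. All module names, layer sizes, and tensor shapes here are illustrative assumptions, not the authors' implementation; the official code lives at the repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeModulator(nn.Module):
    """Sketch of CLIP-FSAR-style prototype modulation: a temporal
    Transformer fuses CLIP text features with per-frame visual
    features to refine each class prototype. Layer sizes are
    hypothetical choices, not the paper's configuration."""

    def __init__(self, dim=512, heads=8, layers=1):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, support_frames, text_emb):
        # support_frames: (n_way, n_frames, dim) CLIP frame features
        # text_emb:       (n_way, dim) CLIP class-name embeddings
        tokens = torch.cat([text_emb.unsqueeze(1), support_frames], dim=1)
        fused = self.encoder(tokens)        # (n_way, 1 + n_frames, dim)
        return fused[:, 1:].mean(dim=1)     # temporal pooling -> prototypes

def classify(query_frames, prototypes):
    # Cosine similarity between pooled query features and prototypes.
    q = F.normalize(query_frames.mean(dim=1), dim=-1)  # (n_query, dim)
    p = F.normalize(prototypes, dim=-1)                # (n_way, dim)
    return q @ p.t()                                   # (n_query, n_way)

# Toy 5-way episode: 8 frames per video, 512-d features (made-up numbers).
mod = PrototypeModulator()
support = torch.randn(5, 8, 512)   # stand-in for CLIP visual features
text = torch.randn(5, 512)         # stand-in for CLIP text features
protos = mod(support, text)
logits = classify(torch.randn(10, 8, 512), protos)
print(logits.shape)  # torch.Size([10, 5])
```

In the paper's framing, the text token is what lets CLIP's semantic priors compensate for the handful of support videos when estimating each prototype; the video-text contrastive objective that aligns video features with class text descriptions would be trained alongside this module.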
- Subjects :
- PROTOTYPES
- SOURCE code
- RECOGNITION (Psychology)
- VIDEO excerpts
Details
- Language :
- English
- ISSN :
- 0920-5691
- Volume :
- 132
- Issue :
- 6
- Database :
- Complementary Index
- Journal :
- International Journal of Computer Vision
- Publication Type :
- Academic Journal
- Accession number :
- 177595854
- Full Text :
- https://doi.org/10.1007/s11263-023-01917-4