
Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning

Authors:
Qing, Zhiwu
Zhang, Shiwei
Huang, Ziyuan
Zhang, Yingya
Gao, Changxin
Zhao, Deli
Sang, Nong
Publication Year:
2023

Abstract

Recently, large-scale pre-trained language-image models like CLIP have shown extraordinary capabilities for understanding spatial content, but naively transferring such models to video recognition still suffers from unsatisfactory temporal modeling. Existing methods insert tunable structures into, or in parallel with, the pre-trained model, which either requires back-propagation through the whole pre-trained model and is thus resource-demanding, or is limited by the temporal reasoning capability of the pre-trained structure. In this work, we present DiST, which disentangles the learning of the spatial and temporal aspects of videos. Specifically, DiST uses a dual-encoder structure, where a pre-trained foundation model acts as the spatial encoder and a lightweight network is introduced as the temporal encoder. An integration branch is inserted between the encoders to fuse spatio-temporal information. The disentangled spatial and temporal learning in DiST is highly efficient because it avoids back-propagation through the massive set of pre-trained parameters. Meanwhile, we empirically show that disentangled learning with an extra integration network benefits both spatial and temporal understanding. Extensive experiments on five benchmarks show that DiST outperforms existing state-of-the-art methods by convincing margins. When pre-training on the large-scale Kinetics-710, we achieve 89.7% on Kinetics-400 with a frozen ViT-L model, which verifies the scalability of DiST. Code and models are available at https://github.com/alibaba-mmai-research/DiST.

Comment: ICCV 2023. Code: https://github.com/alibaba-mmai-research/DiST
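To make the architecture described above concrete, here is a minimal PyTorch sketch of the disentangled dual-encoder design: a frozen pre-trained spatial encoder, a lightweight temporal encoder, and an integration branch fusing the two streams. The specific module choices (a temporal 1-D convolution, the feature dimensions, and the classifier head) are illustrative assumptions, not the authors' implementation; see the official repository for the actual code.

```python
import torch
import torch.nn as nn

class DiSTSketch(nn.Module):
    """Illustrative sketch of DiST's disentangled dual-encoder design.

    Assumes `spatial_encoder` maps a batch of frames (N, C, H, W) to
    per-frame features (N, dim), e.g. a CLIP ViT backbone. All module
    choices below are hypothetical stand-ins for the paper's components.
    """

    def __init__(self, spatial_encoder: nn.Module, dim: int = 768,
                 temporal_dim: int = 128, num_classes: int = 400):
        super().__init__()
        # Frozen pre-trained spatial encoder: no gradients flow through
        # its massive parameter set, which is the source of DiST's
        # training efficiency.
        self.spatial_encoder = spatial_encoder
        for p in self.spatial_encoder.parameters():
            p.requires_grad = False

        # Lightweight temporal encoder (hypothetical choice): a 1-D
        # convolution over the time axis of per-frame features.
        self.temporal_encoder = nn.Conv1d(dim, temporal_dim,
                                          kernel_size=3, padding=1)

        # Integration branch fusing the spatial and temporal streams.
        self.integration = nn.Linear(dim + temporal_dim, dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (B, T, C, H, W)
        b, t = video.shape[:2]
        frames = video.flatten(0, 1)               # (B*T, C, H, W)
        with torch.no_grad():                      # frozen spatial path
            spatial = self.spatial_encoder(frames) # (B*T, dim)
        spatial = spatial.view(b, t, -1)           # (B, T, dim)

        # Temporal reasoning over the sequence of frame features.
        temporal = self.temporal_encoder(spatial.transpose(1, 2))
        temporal = temporal.transpose(1, 2)        # (B, T, temporal_dim)

        # Fuse both streams, then pool over time for classification.
        fused = self.integration(torch.cat([spatial, temporal], dim=-1))
        return self.classifier(fused.mean(dim=1))  # (B, num_classes)
```

Because the spatial path runs under torch.no_grad(), back-propagation only touches the small temporal encoder, integration branch, and classifier, matching the efficiency argument in the abstract.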

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2309.07911
Document Type: Working Paper