
vid-TLDR: Training Free Token merging for Light-weight Video Transformer

Authors :
Choi, Joonmyung
Lee, Sanghyeok
Chu, Jaewon
Choi, Minhyuk
Kim, Hyunwoo J.
Publication Year :
2024

Abstract

Video Transformers have become the prevalent solution for various video downstream tasks with superior expressive power and flexibility. However, these video transformers suffer from heavy computational costs induced by the massive number of tokens across the entire video frames, which has been the major barrier to training the model. Further, the patches irrelevant to the main contents, e.g., backgrounds, degrade the generalization performance of models. To tackle these issues, we propose training free token merging for lightweight video Transformer (vid-TLDR) that aims to enhance the efficiency of video Transformers by merging the background tokens without additional training. For vid-TLDR, we introduce a novel approach to capture the salient regions in videos only with the attention map. Further, we introduce the saliency-aware token merging strategy by dropping the background tokens and sharpening the object scores. Our experiments show that vid-TLDR significantly mitigates the computational complexity of video Transformers while achieving competitive performance compared to the base model without vid-TLDR. Code is available at https://github.com/mlvlab/vid-TLDR.

Comment: Conference on Computer Vision and Pattern Recognition (CVPR), 2024
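The abstract describes saliency-aware token merging only at a high level. The sketch below is a hypothetical illustration of the general idea, not the authors' implementation: tokens are scored by the attention they receive, low-saliency (background) tokens are dropped, and each dropped token is merged into its most similar kept token. The function names, the mean-attention saliency score, the keep_ratio parameter, and the cosine-similarity merging rule are all assumptions made for illustration; the actual vid-TLDR code is in the linked repository.

```python
import torch

def attention_saliency(attn: torch.Tensor) -> torch.Tensor:
    """Per-token saliency from an attention map.

    attn: (batch, heads, queries, keys) softmax attention weights.
    Returns (batch, keys): how much attention each token receives,
    averaged over heads and queries (a common proxy for saliency;
    the paper's exact scoring may differ).
    """
    return attn.mean(dim=1).mean(dim=1)  # (B, N)

def drop_and_merge(tokens: torch.Tensor, saliency: torch.Tensor,
                   keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the most salient tokens and merge the rest into them.

    tokens:   (batch, N, dim) token embeddings.
    saliency: (batch, N) saliency scores.
    Each dropped (background) token is averaged into its most similar
    kept token, so its content is folded in rather than discarded.
    """
    B, N, D = tokens.shape
    n_keep = max(1, int(N * keep_ratio))
    keep_idx = saliency.topk(n_keep, dim=1).indices              # (B, n_keep)
    drop_mask = torch.ones(B, N, dtype=torch.bool, device=tokens.device)
    drop_mask.scatter_(1, keep_idx, False)

    kept = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    merged = kept.clone()
    counts = torch.ones(B, n_keep, 1, device=tokens.device)

    for b in range(B):
        dropped = tokens[b, drop_mask[b]]                        # (n_drop, D)
        if dropped.numel() == 0:
            continue
        # Cosine similarity between dropped tokens and kept tokens.
        sim = torch.nn.functional.normalize(dropped, dim=-1) @ \
              torch.nn.functional.normalize(kept[b], dim=-1).T
        target = sim.argmax(dim=-1)                              # (n_drop,)
        merged[b].index_add_(0, target, dropped)
        counts[b].index_add_(0, target,
                             torch.ones(len(target), 1, device=tokens.device))

    return merged / counts                                       # size-weighted average

if __name__ == "__main__":
    B, H, N, D = 2, 4, 16, 8
    x = torch.randn(B, N, D)
    attn = torch.softmax(torch.randn(B, H, N, N), dim=-1)
    y = drop_and_merge(x, attention_saliency(attn), keep_ratio=0.5)
    print(y.shape)  # torch.Size([2, 8, 8]) -- half the tokens remain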

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2403.13347
Document Type :
Working Paper