
Efficient transformer tracking with adaptive attention

Authors :
Dingkun Xiao
Zhenzhong Wei
Guangjun Zhang
Source :
IET Computer Vision, Vol 18, Iss 8, Pp 1338-1350 (2024)
Publication Year :
2024
Publisher :
Wiley, 2024.

Abstract

Recently, several trackers utilising the Transformer architecture have shown significant performance improvements. However, the high computational cost of multi‐head attention, a core component of the Transformer, limits real‐time running speed, which is crucial for tracking tasks. Additionally, the global mechanism of multi‐head attention makes it susceptible to distractors with semantic information similar to the target's. To address these issues, the authors propose a novel adaptive attention that enhances features through a spatial sparse attention mechanism at less than 1/4 of the computational complexity of multi‐head attention. The adaptive attention sets a perception range around each element in the feature map based on the target scale in the previous tracking result and adaptively searches for the information of interest. This allows the module to focus on the target region rather than on background distractors. Based on adaptive attention, the authors build an efficient transformer tracking framework. It performs deep interaction between search and template features to activate target information and aggregates multi‐level interaction features to enhance representation ability. Evaluation results on seven benchmarks show that the authors' tracker achieves outstanding performance at 43 fps, with significant advantages in challenging circumstances.
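The abstract describes a spatially sparse attention whose perception range around each feature-map element is set from the previous target scale. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of that idea under stated assumptions: each query position attends over a square local window whose radius `r` is derived from a hypothetical `target_scale` parameter, rather than over the whole map as in full multi-head attention. All names and the windowing rule are illustrative, not the authors' implementation.

```python
import numpy as np

def adaptive_local_attention(q, k, v, target_scale):
    """Sketch of spatially sparse attention: each (i, j) query attends only
    to keys inside a square window whose radius comes from the previous
    tracking result's target scale (hypothetical interface, not the paper's).
    q, k, v: arrays of shape (H, W, d)."""
    H, W, d = q.shape
    r = max(1, int(target_scale))            # window radius from target scale
    out = np.zeros_like(v)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - r), min(H, i + r + 1)
            j0, j1 = max(0, j - r), min(W, j + r + 1)
            keys = k[i0:i1, j0:j1].reshape(-1, d)    # local key set
            vals = v[i0:i1, j0:j1].reshape(-1, d)    # matching values
            scores = keys @ q[i, j] / np.sqrt(d)     # scaled dot product
            w = np.exp(scores - scores.max())        # stable softmax
            w /= w.sum()
            out[i, j] = w @ vals                     # weighted aggregation
    return out
```

With a window radius r, each element attends to O(r²) neighbours instead of O(H·W) positions, which is the source of the sub-quadratic cost the abstract claims; when the window covers the whole map, this sketch reduces to ordinary single-head attention.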

Details

Language :
English
ISSN :
1751-9640 and 1751-9632
Volume :
18
Issue :
8
Database :
Directory of Open Access Journals
Journal :
IET Computer Vision
Publication Type :
Academic Journal
Accession number :
edsdoj.280f51fe0dd84b54907d11f0f32799a9
Document Type :
Article
Full Text :
https://doi.org/10.1049/cvi2.12315