
Online learning and joint optimization of combined spatial-temporal models for robust visual tracking.

Authors :
Zhou, Tao
Bhaskar, Harish
Liu, Fanghui
Yang, Jie
Cai, Ping
Source :
Neurocomputing. Feb 2017, Vol. 226, p221-237. 17p.
Publication Year :
2017

Abstract

Visual tracking is highly challenged by factors such as occlusion, background clutter, abrupt target motion, illumination variation, and changes in scale and orientation. In this paper, an integrated framework for the online learning of fused temporal appearance and spatial constraint models for robust and accurate visual target tracking is proposed. The temporal appearance model aims to encapsulate historical appearance information of the target in order to cope with variations due to illumination changes and motion dynamics. The spatial constraint model, on the other hand, exploits the relationships between the target and its neighbors to handle occlusion and deal with a cluttered background. To reduce the computational complexity of the state estimation algorithm and to emphasize the importance of the different basis vectors, a K-nearest Local Smooth Algorithm (KLSA) is used to describe the spatial state model. Further, a customized Accelerated Proximal Gradient (APG) method is implemented to iteratively obtain an optimal solution for KLSA. Finally, the optimal state estimate is obtained from weighted samples within a particle filtering framework. Experimental results on large-scale benchmark sequences show that the proposed tracker achieves favorable performance compared to state-of-the-art methods. [ABSTRACT FROM AUTHOR]
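
Note: the abstract describes APG-based iterative optimization over a set of basis vectors, with the result feeding weighted samples in a particle filter. The paper's KLSA formulation is not reproduced here, so the following is only a minimal Python sketch, under assumed simplifications, of the generic building blocks such trackers rely on: accelerated proximal gradient (FISTA-style) iterations with soft-thresholding for an l1-regularized least-squares coding problem, and likelihood-style particle weighting from the reconstruction error. Names such as apg_sparse_code and particle_weight are illustrative, not from the paper.

import numpy as np

def apg_sparse_code(D, y, lam=0.01, n_iter=100):
    """Accelerated proximal gradient (FISTA-style) sketch for
    min_x 0.5*||D x - y||^2 + lam*||x||_1.
    This is a generic l1 solver, not the paper's KLSA model."""
    n = D.shape[1]
    x = np.zeros(n)
    z = x.copy()
    t = 1.0
    # Step size from the Lipschitz constant of the smooth term.
    L = np.linalg.norm(D, 2) ** 2
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)
        v = z - grad / L
        # Soft-thresholding: proximal operator of the l1 penalty.
        x_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
        # Momentum update on the auxiliary point.
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

def particle_weight(D, y, x, sigma=0.1):
    """Weight a candidate (particle) by its reconstruction likelihood,
    as is common in sparse-representation particle-filter trackers."""
    err = np.linalg.norm(D @ x - y) ** 2
    return np.exp(-err / (2.0 * sigma ** 2))

# Example: code one candidate patch y against a small dictionary D
# and turn the reconstruction error into a particle weight.
D = np.random.randn(64, 10)
D /= np.linalg.norm(D, axis=0)
y = D[:, 0] + 0.05 * np.random.randn(64)
x = apg_sparse_code(D, y, lam=0.05)
w = particle_weight(D, y, x)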

Details

Language :
English
ISSN :
0925-2312
Volume :
226
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
120321037
Full Text :
https://doi.org/10.1016/j.neucom.2016.11.055