
Long-Short Temporal–Spatial Clues Excited Network for Robust Person Re-identification.

Authors :
Li, Shuai
Song, Wenfeng
Fang, Zheng
Shi, Jiaying
Hao, Aimin
Zhao, Qinping
Qin, Hong
Source :
International Journal of Computer Vision. Dec 2020, Vol. 128, Issue 12, p2936-2961. 26p.
Publication Year :
2020

Abstract

Directly benefiting from the rapid advancement of deep learning methods, person re-identification (Re-ID) applications have become widespread in recent years, with remarkable successes. Nevertheless, cross-scene Re-ID is still hindered by large view variation: temporal clues are hard to exploit effectively because of their heavy computational burden and the difficulty of flexibly incorporating discriminative features. To alleviate this, we articulate a long-short temporal–spatial clues excited network (LSTS-NET) for robust person Re-ID across different scenes. In essence, our LSTS-NET comprises a motion appearance model and a motion-refinement aggregating scheme. The former abstracts temporal clues via multi-range low-rank analysis, applied both to consecutive frames and to cross-camera videos, which augments person-related features with detail while suppressing cluttered backgrounds across different scenes. The latter aggregates these temporal clues with spatial features by incorporating personalized motion-refinement layers and several motion-excitation CNN blocks into deep networks, automatically activating person-specific features and expediting the extraction and learning of discriminative features from different temporal clues. As a result, our LSTS-NET can robustly distinguish persons across different scenes. To verify the improvement brought by our LSTS-NET, we conduct extensive experiments and comprehensive evaluations on 8 widely recognized public benchmarks. All the experiments confirm that our LSTS-NET significantly boosts the Re-ID performance of existing deep learning methods and outperforms state-of-the-art methods in terms of robustness and accuracy. [ABSTRACT FROM AUTHOR]
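
The record gives only this high-level description of the motion-excitation CNN blocks. As a loose illustration of how such a block could gate per-frame appearance features with temporal-difference clues, a minimal PyTorch sketch is shown below. Every name, shape, and design choice here (the MotionExcitation class, the reduction ratio, the residual gating) is an assumption made for illustration, not the authors' implementation.

    # Hypothetical sketch of a "motion-excitation" style block.
    # Assumptions: per-frame CNN features of shape (N, T, C, H, W); the paper's
    # actual layer definitions are not given in this record.
    import torch
    import torch.nn as nn


    class MotionExcitation(nn.Module):
        """Re-weight channels using feature differences between adjacent frames."""

        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            hidden = max(channels // reduction, 1)
            self.squeeze = nn.Conv2d(channels, hidden, kernel_size=1, bias=False)
            self.expand = nn.Conv2d(hidden, channels, kernel_size=1, bias=False)
            self.pool = nn.AdaptiveAvgPool2d(1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            n, t, c, h, w = x.shape
            # Reduce channels frame by frame before differencing.
            feat = self.squeeze(x.reshape(n * t, c, h, w)).reshape(n, t, -1, h, w)
            # Temporal difference between consecutive frames (zero-padded at the end).
            diff = torch.zeros_like(feat)
            diff[:, :-1] = feat[:, 1:] - feat[:, :-1]
            # Collapse spatial dims, restore the channel count, build a sigmoid gate.
            gate = self.pool(diff.reshape(n * t, -1, h, w))
            gate = torch.sigmoid(self.expand(gate)).reshape(n, t, c, 1, 1)
            # Residual-style excitation keeps the original appearance features.
            return x + x * gate


    if __name__ == "__main__":
        block = MotionExcitation(channels=64)
        clip = torch.randn(2, 8, 64, 16, 8)   # (batch, frames, channels, H, W)
        print(block(clip).shape)              # torch.Size([2, 8, 64, 16, 8])

In this sketch the gate is driven only by frame-to-frame feature changes, so static background channels receive smaller boosts, which is one plausible way to read the abstract's claim of activating person-specific features while suppressing cluttered backgrounds.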

Subjects

*DEEP learning
*TEMPORAL databases

Details

Language :
English
ISSN :
0920-5691
Volume :
128
Issue :
12
Database :
Academic Search Index
Journal :
International Journal of Computer Vision
Publication Type :
Academic Journal
Accession Number :
146150848
Full Text :
https://doi.org/10.1007/s11263-020-01349-4