
Semi-Supervised Cross-View Projection-Based Dictionary Learning for Video-Based Person Re-Identification.

Authors :
Zhu, Xiaoke
Jing, Xiao-Yuan
Yang, Liang
You, Xinge
Chen, Dan
Gao, Guangwei
Wang, Yunhong
Source :
IEEE Transactions on Circuits & Systems for Video Technology. Oct2018, Vol. 28 Issue 10, p2599-2611. 13p.
Publication Year :
2018

Abstract

Video-based person re-identification (re-id) has attracted a lot of research interest. Faced with the rapid growth of new pedestrian videos, existing video-based person re-id methods usually need large quantities of labeled pedestrian videos to train a discriminative model. In practice, labeling large quantities of pedestrian videos is a costly and time-consuming task, which limits the application of these methods in real-world environments. Therefore, it is valuable and necessary to investigate how to learn a discriminative re-id model using limited labeled training pedestrian videos. In this paper, we propose a semi-supervised cross-view projection-based dictionary learning (SCPDL) approach for video-based person re-id. Specifically, SCPDL jointly learns a pair of feature projection matrices and a pair of dictionaries by integrating the information contained in labeled and unlabeled pedestrian videos. With the learned feature projection matrices, the influence of variations within each video on re-id can be reduced. With the learned dictionary pair, pedestrian videos from two different cameras can be converted into coding coefficients in a common representation space, such that the differences between cameras can be bridged. In the learning process, the labeled pedestrian videos are used to ensure that the learned dictionaries have favorable discriminability; the large quantities of unlabeled pedestrian videos are used to ensure that SCPDL can better capture the variations between pedestrian videos, such that the learned dictionaries gain stronger representative capability. Experiments on two public pedestrian sequence data sets (iLIDS-VID and PRID 2011) demonstrate the effectiveness of the proposed approach. [ABSTRACT FROM AUTHOR]
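The cross-view coupling described in the abstract (project each camera's features, then code them against a camera-specific dictionary so both views share one coefficient space) can be sketched numerically. The sketch below is not the paper's algorithm: it fixes the projection matrices at random (SCPDL learns them jointly), replaces sparse coding with ridge-regularized least squares, omits the semi-supervised discriminative terms, and runs on synthetic data; all dimensions and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for video features: n identities seen by two cameras
# with different feature dimensions, driven by a shared latent signal Z.
n, d1, d2, k = 40, 30, 25, 10            # samples, view dims, dictionary atoms
Z = rng.standard_normal((k, n))
X1 = rng.standard_normal((d1, k)) @ Z + 0.05 * rng.standard_normal((d1, n))
X2 = rng.standard_normal((d2, k)) @ Z + 0.05 * rng.standard_normal((d2, n))

p = 15                                   # projected (common-space) dimension
P1 = rng.standard_normal((p, d1)) / np.sqrt(d1)  # fixed here; SCPDL learns these
P2 = rng.standard_normal((p, d2)) / np.sqrt(d2)
D1 = rng.standard_normal((p, k))         # per-camera dictionaries
D2 = rng.standard_normal((p, k))
lam = 0.1                                # ridge regularizer (stands in for sparsity)

Y1, Y2 = P1 @ X1, P2 @ X2
for _ in range(50):
    # Shared codes A: argmin ||Y1 - D1 A||^2 + ||Y2 - D2 A||^2 + lam ||A||^2
    G = D1.T @ D1 + D2.T @ D2 + lam * np.eye(k)
    A = np.linalg.solve(G, D1.T @ Y1 + D2.T @ Y2)
    # Per-view dictionaries: ridge least squares against the shared codes
    H = A @ A.T + lam * np.eye(k)
    D1 = np.linalg.solve(H, A @ Y1.T).T
    D2 = np.linalg.solve(H, A @ Y2.T).T

# Matching in the common coefficient space: code each view independently;
# the same identity should yield similar codes across cameras.
code1 = np.linalg.solve(D1.T @ D1 + lam * np.eye(k), D1.T @ Y1)
code2 = np.linalg.solve(D2.T @ D2 + lam * np.eye(k), D2.T @ Y2)
c1 = code1 / np.linalg.norm(code1, axis=0)
c2 = code2 / np.linalg.norm(code2, axis=0)
sims = c1.T @ c2                         # cosine similarity, probe x gallery
print("rank-1 match rate:", np.mean(np.argmax(sims, axis=1) == np.arange(n)))
```

The key design point the sketch preserves is that both cameras' reconstructions are tied to a single coefficient matrix `A` during learning, which is what lets coding coefficients act as the camera-invariant representation for matching.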

Details

Language :
English
ISSN :
1051-8215
Volume :
28
Issue :
10
Database :
Academic Search Index
Journal :
IEEE Transactions on Circuits & Systems for Video Technology
Publication Type :
Academic Journal
Accession number :
132683744
Full Text :
https://doi.org/10.1109/TCSVT.2017.2718036