
Co-segmentation assisted cross-modality person re-identification.

Authors :
Huang, Nianchang
Xing, Baichao
Zhang, Qiang
Han, Jungong
Huang, Jin
Source :
Information Fusion. Apr 2024, Vol. 104.
Publication Year :
2024

Abstract

We present a deep learning-based method for Visible-Infrared person Re-Identification (VI-ReID). The major contribution is the incorporation of co-segmentation into a multi-task learning framework for VI-ReID, where co-segmentation helps make the feature distributions of RGB and IR images consistent for the same identity but distinct across different identities. Accordingly, a novel multi-task learning based model, i.e., co-segmentation assisted VI-ReID (CSVI), is proposed in this paper. Specifically, the co-segmentation network first takes as inputs the modality-shared features extracted from a set of RGB and IR images by the VI-ReID model. It then exploits their semantic similarities to predict person masks of the common identities within the input RGB and IR images, using a cross-modality center based weight generation module and a segmentation decoder. Doing so enables the VI-ReID model to extract additional modality-shared shape features that boost performance. Meanwhile, the co-segmentation network implicitly establishes interactions among the set of RGB and IR images, further bridging the large modality discrepancy. Our model's effectiveness and superiority are verified through experimental comparisons with state-of-the-art algorithms on several benchmark datasets.
• Using co-segmentation to assist VI-ReID in a multi-task learning framework.
• Co-segmenting the same identity from a set of input images with different modalities.
• Providing theoretical comparisons between our proposed model and existing models. [ABSTRACT FROM AUTHOR]
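The sketch below illustrates, in broad strokes, the co-segmentation idea the abstract describes. It is not the authors' code: it assumes modality-shared feature maps already produced by a VI-ReID backbone, approximates the cross-modality center by pooling the shared features of both modalities, and uses that center as a dynamically generated weight to predict per-pixel person masks. All module and variable names are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CoSegmentationHead(nn.Module):
    """Toy stand-in for a co-segmentation branch over RGB and IR feature maps."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 projection of the modality-shared features before correlation.
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, rgb_feats: torch.Tensor, ir_feats: torch.Tensor) -> torch.Tensor:
        # Stack both modalities so mask prediction sees the whole image set at once.
        feats = torch.cat([rgb_feats, ir_feats], dim=0)      # (2B, C, H, W)
        feats = self.proj(feats)

        # Cross-modality "center": average the shared features over all images
        # in the set, giving one C-dim vector used as a dynamic classifier weight.
        center = feats.mean(dim=(0, 2, 3), keepdim=True)     # (1, C, 1, 1)

        # Weight-generation step: correlate every spatial location with the center,
        # then squash to a per-pixel foreground probability (the person mask).
        logits = (feats * center).sum(dim=1, keepdim=True)   # (2B, 1, H, W)
        masks = torch.sigmoid(logits)

        # Decoder stand-in: upsample the masks toward input resolution.
        return F.interpolate(masks, scale_factor=4, mode="bilinear", align_corners=False)

if __name__ == "__main__":
    head = CoSegmentationHead(channels=256)
    rgb = torch.randn(4, 256, 24, 12)   # toy modality-shared features of 4 RGB images
    ir = torch.randn(4, 256, 24, 12)    # toy modality-shared features of 4 IR images
    print(head(rgb, ir).shape)          # torch.Size([8, 1, 96, 48])

In a multi-task setup of this kind, the predicted masks would be supervised jointly with the re-identification losses, so the backbone is pushed to encode the shape cues shared by the RGB and IR views of the same identity.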

Details

Language :
English
ISSN :
15662535
Volume :
104
Database :
Academic Search Index
Journal :
Information Fusion
Publication Type :
Academic Journal
Accession number :
174642090
Full Text :
https://doi.org/10.1016/j.inffus.2023.102194