
Channel semantic mutual learning for visible-thermal person re-identification.

Authors :
Zhu, Yingjie
Yang, Wenzhong
Source :
PLoS ONE. 1/19/2024, Vol. 19 Issue 1, p1-14. 14p.
Publication Year :
2024

Abstract

Visible-infrared person re-identification (VI-ReID) is a cross-modality retrieval task aiming to match the same pedestrian between visible and infrared cameras, so the modality discrepancy presents a significant challenge. Most methods employ different networks to extract modality-invariant features. In contrast, we propose a novel channel semantic mutual learning network (CSMN), which attributes the semantic discrepancy between modalities to differences at the channel level and optimises semantic consistency between channels from two perspectives: local inter-channel semantics and global inter-modal semantics. Meanwhile, we design a channel-level auto-guided double metric loss (CADM) to learn modality-invariant features and the sample distribution in a fine-grained manner. We conducted experiments on RegDB and SYSU-MM01, and the results validate the superiority of CSMN. On the RegDB dataset in particular, CSMN improves the previous best Rank-1 score and mINP value by 3.43% and 0.5%, respectively. The code is available at https://github.com/013zyj/CSMN. [ABSTRACT FROM AUTHOR]
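The abstract only sketches the idea of channel-level semantic alignment between the visible and infrared modalities. Below is a minimal, illustrative PyTorch sketch of how such a channel-wise consistency term between the two modalities' feature maps might be formulated; the function name, the use of a symmetric KL divergence over pooled channel descriptors, and all shapes are assumptions for illustration only, not the authors' CSMN/CADM implementation (see the linked repository for the official code).

```python
# Illustrative sketch only: a channel-wise semantic consistency loss between
# visible and infrared feature maps. The formulation (symmetric KL over
# pooled channel descriptors) is an assumption; consult the authors'
# repository for the actual CSMN / CADM implementation.
import torch
import torch.nn.functional as F


def channel_semantic_consistency(feat_vis: torch.Tensor,
                                 feat_ir: torch.Tensor) -> torch.Tensor:
    """feat_vis, feat_ir: feature maps of shape (B, C, H, W)."""
    # Global-average-pool each channel into a single semantic descriptor.
    desc_vis = feat_vis.mean(dim=(2, 3))   # (B, C)
    desc_ir = feat_ir.mean(dim=(2, 3))     # (B, C)
    # Treat the channel descriptors as distributions over channels and
    # align the two modalities with a symmetric KL divergence.
    log_p = F.log_softmax(desc_vis, dim=1)
    log_q = F.log_softmax(desc_ir, dim=1)
    kl_pq = F.kl_div(log_p, log_q.exp(), reduction="batchmean")
    kl_qp = F.kl_div(log_q, log_p.exp(), reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)


if __name__ == "__main__":
    vis = torch.randn(8, 256, 18, 9)   # visible-modality features
    ir = torch.randn(8, 256, 18, 9)    # infrared-modality features
    print(channel_semantic_consistency(vis, ir).item())
```

In practice, a term of this kind would be added to the identity and metric losses (e.g., the paper's CADM loss) with a weighting coefficient; the weighting and the exact descriptor choice here are hypothetical.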

Details

Language :
English
ISSN :
19326203
Volume :
19
Issue :
1
Database :
Academic Search Index
Journal :
PLoS ONE
Publication Type :
Academic Journal
Accession number :
174911106
Full Text :
https://doi.org/10.1371/journal.pone.0293498