
ICLR: Instance Credibility-Based Label Refinement for label noisy person re-identification.

Authors :
Zhong, Xian
Han, Xiyu
Jia, Xuemei
Huang, Wenxin
Liu, Wenxuan
Su, Shuaipeng
Yu, Xiaohan
Ye, Mang
Source :
Pattern Recognition, Apr. 2024, Vol. 148.
Publication Year :
2024

Abstract

Person re-identification (Re-ID) has demonstrated remarkable performance when trained on accurately annotated data. However, in practical applications, annotation errors are unavoidable and can undermine the accuracy and robustness of Re-ID model training. To mitigate the adverse impact of label noise, especially when each identity (ID) has only limited training samples, a common approach is to utilize all available sample labels. Unfortunately, these labels inevitably include incorrect ones, so the model is influenced by noise and its performance is compromised. In this paper, we propose an Instance Credibility-based Label Refinement and Re-weighting (ICLR) framework that exploits partially credible labels to effectively refine and re-weight incredible labels. Specifically, the Label-Incredibility Optimization (LIO) module optimizes incredible labels before model training: it partitions the samples into credible and incredible samples and propagates credible labels to the others. Furthermore, we design an Incredible Instance Re-weight (I²R) strategy that emphasizes instances contributing more significantly and dynamically adjusts the weight of each instance. The proposed method seamlessly reinforces accuracy without requiring additional information or discarding any samples. Extensive experiments on the Market-1501 and DukeMTMC-reID datasets demonstrate the effectiveness of the proposed method, yielding a substantial performance improvement under both random-noise and pattern-noise settings. Code will be available at https://github.com/whut16/ReID-Label-Noise.
• Inevitable label noise affects the performance of Re-ID.
• All samples are partitioned and optimized before training, emphasizing the cleanliness of the data.
• Dynamically adjusting the weight of each instance fosters the reuse and re-weighting of all available samples.
• The improvement achieved by our proposal under random and pattern noise is noteworthy. [ABSTRACT FROM AUTHOR]
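The abstract describes a two-step idea: partition samples into credible and incredible ones, propagate credible labels to the incredible samples, then re-weight every instance rather than discarding any. The paper's actual implementation is at the linked repository; the sketch below only illustrates the general pattern under common stand-in assumptions that are not from the paper: a small-loss heuristic for the credible/incredible split, nearest-credible-neighbour label propagation in feature space, and loss-based soft instance weights.

```python
import numpy as np

def refine_and_reweight(features, labels, losses, credible_ratio=0.5):
    """Hedged sketch of a refine-and-reweight step in the spirit of ICLR.

    Assumptions (illustrative, not the authors' method):
    - samples with the smallest per-sample loss are treated as credible
      (the common "small-loss" heuristic for noisy-label learning);
    - each incredible sample inherits the label of its nearest credible
      neighbour in feature space (a minimal form of label propagation);
    - instance weights decay exponentially with per-sample loss, so
      noisier instances contribute less but no sample is discarded.
    """
    n = len(labels)
    n_credible = max(1, int(n * credible_ratio))
    order = np.argsort(losses)            # ascending loss
    credible = order[:n_credible]         # low-loss samples -> credible
    incredible = order[n_credible:]

    refined = labels.copy()
    for i in incredible:
        # propagate: copy label of nearest credible neighbour
        dists = np.linalg.norm(features[credible] - features[i], axis=1)
        refined[i] = labels[credible[np.argmin(dists)]]

    # soft re-weighting: lower loss -> larger weight, mean weight = 1
    weights = np.exp(-losses)
    weights = weights / weights.sum() * n
    return refined, weights

# Toy example: two clusters; the last sample sits in cluster 1 but is
# mislabeled 0 and has a high loss, so it is refined to label 1.
features = np.array([[0, 0], [0.1, 0], [0, 0.1],
                     [5, 5], [5.1, 5], [5, 5.1]], dtype=float)
labels = np.array([0, 0, 0, 1, 1, 0])
losses = np.array([0.1, 0.2, 0.15, 0.1, 0.2, 2.0])

refined, weights = refine_and_reweight(features, labels, losses)
# refined -> [0, 0, 0, 1, 1, 1]; the noisy sample also gets the
# smallest weight, so it is down-weighted rather than dropped.
```

This mirrors the two highlights above: all samples are partitioned and optimized before training, and every instance is kept but dynamically re-weighted.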

Subjects

Subjects :
*RANDOM noise theory
*HYGIENE

Details

Language :
English
ISSN :
0031-3203
Volume :
148
Database :
Academic Search Index
Journal :
Pattern Recognition
Publication Type :
Academic Journal
Accession number :
174791786
Full Text :
https://doi.org/10.1016/j.patcog.2023.110168