
DualFocus: Integrating Plausible Descriptions in Text-based Person Re-identification

Authors:
Deng, Yuchuan
Hu, Zhanpeng
Han, Jiakun
Deng, Chuang
Zhao, Qijun
Publication Year:
2024

Abstract

Text-based Person Re-identification (TPR) aims to retrieve images of specific individuals from datasets based on textual descriptions. Existing TPR methods primarily focus on recognizing explicit, positive characteristics and often overlook the role of negative descriptions. This oversight can lead to false positives: images that meet the positive criteria but should be excluded based on the negative descriptions. To address these limitations, we introduce DualFocus, a unified framework that integrates plausible descriptions to enhance the interpretative accuracy of vision-language models in TPR tasks. DualFocus leverages Dual (Positive/Negative) Attribute Prompt Learning (DAPL), which incorporates Dual Image-Attribute Contrastive (DIAC) Learning and Sensitive Image-Attributes Matching (SIAM) Learning, enabling the detection of non-existent attributes and reducing false positives. To balance coarse- and fine-grained alignment of visual and textual embeddings, we propose the Dynamic Tokenwise Similarity (DTS) loss, which refines the representation of both matching and non-matching descriptions, thereby improving the matching process through detailed and adaptable similarity assessments. Comprehensive experiments on CUHK-PEDES, ICFG-PEDES, and RSTPReid demonstrate that DualFocus outperforms state-of-the-art methods, significantly enhancing both precision and robustness in TPR.
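The abstract does not give the exact formulation of the DTS loss, but the sketch below illustrates the general idea of a tokenwise similarity objective under stated assumptions: per-text-token maxima over image tokens, softmax-weighted aggregation controlled by a `temperature` parameter, and a symmetric InfoNCE loss over a batch. The function names (`dynamic_tokenwise_similarity`, `dts_loss`) and all details of the weighting scheme are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def dynamic_tokenwise_similarity(img_tokens, txt_tokens, temperature=0.07):
    """Hypothetical tokenwise image-text similarity.

    img_tokens: (Nv, D) visual token embeddings for one image.
    txt_tokens: (Nt, D) textual token embeddings for one caption.
    Each text token is matched to its most similar image token; the
    per-token scores are then aggregated with a softmax weighting
    (an assumption; the paper's DTS formulation may differ).
    """
    img = F.normalize(img_tokens, dim=-1)
    txt = F.normalize(txt_tokens, dim=-1)
    sim = txt @ img.t()                  # (Nt, Nv) cosine similarities
    per_token = sim.max(dim=-1).values   # best-matching image token per text token
    weights = torch.softmax(per_token / temperature, dim=0)
    return (weights * per_token).sum()   # scalar image-caption similarity

def dts_loss(batch_img_tokens, batch_txt_tokens, temperature=0.07):
    """Symmetric InfoNCE over a batch using the tokenwise similarity above."""
    B = len(batch_img_tokens)
    scores = torch.stack([
        torch.stack([
            dynamic_tokenwise_similarity(batch_img_tokens[i], batch_txt_tokens[j])
            for j in range(B)
        ])
        for i in range(B)
    ]) / temperature                     # (B, B) image-to-text score matrix
    labels = torch.arange(B)
    return 0.5 * (F.cross_entropy(scores, labels) +
                  F.cross_entropy(scores.t(), labels))
```

In this reading, matching image-caption pairs sit on the diagonal of the score matrix, so the loss pulls matching token sets together while pushing non-matching descriptions apart at the token level.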

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2405.07459
Document Type:
Working Paper