
Co-Learning Meets Stitch-Up for Noisy Multi-label Visual Recognition

Authors:
Liang, Chao
Yang, Zongxin
Zhu, Linchao
Yang, Yi
Source:
IEEE Transactions on Image Processing, vol. 32, pp. 2508-2519, 2023
Publication Year:
2023

Abstract

In real-world scenarios, collected and annotated data often exhibit the characteristics of multiple classes and a long-tailed distribution. Additionally, label noise is inevitable in large-scale annotation and hinders the application of learning-based models. Although many deep-learning-based methods have been proposed for handling long-tailed multi-label recognition or label noise respectively, learning with noisy labels in long-tailed multi-label visual data has not been well studied because of the complexity of the long-tailed distribution entangled with multi-label correlation. To tackle this critical yet thorny problem, this paper focuses on reducing noise based on some inherent properties of multi-label classification and long-tailed learning under noisy conditions. In detail, we propose a Stitch-Up augmentation that synthesizes a cleaner sample, directly reducing multi-label noise by stitching up multiple noisy training samples. Equipped with Stitch-Up, a Heterogeneous Co-Learning framework is further designed to leverage the inconsistency between long-tailed and balanced distributions, yielding cleaner labels for more robust representation learning with noisy long-tailed data. To validate our method, we build two challenging benchmarks, named VOC-MLT-Noise and COCO-MLT-Noise. Extensive experiments demonstrate the effectiveness of the proposed method, which achieves superior results compared to a variety of baselines.

Comment: accepted by TIP 2023; code is at https://github.com/VamosC/CoLearning-meet-StitchUp
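To make the Stitch-Up idea concrete, the sketch below shows one plausible reading of the augmentation: two noisy samples are combined by stitching their images side by side and taking the union of their multi-hot label vectors, so a positive label that is correct in either source sample survives in the synthesized one. This is an illustrative assumption only; the `stitch_up` function, its signature, and the side-by-side compositing are hypothetical, and the authors' actual formulation is in the linked repository.

```python
# Hypothetical sketch of a Stitch-Up style augmentation for noisy multi-label data.
# Assumption: two noisy samples are combined by concatenating their images and
# OR-ing their multi-hot label vectors (label union); not the paper's exact method.
import numpy as np

def stitch_up(image_a: np.ndarray, labels_a: np.ndarray,
              image_b: np.ndarray, labels_b: np.ndarray):
    """Synthesize one training sample from two noisy samples.

    image_a, image_b : H x W x C arrays with the same height.
    labels_a, labels_b : multi-hot label vectors of shape (num_classes,).
    """
    stitched_image = np.concatenate([image_a, image_b], axis=1)   # side-by-side stitch
    stitched_labels = np.logical_or(labels_a, labels_b).astype(labels_a.dtype)  # label union
    return stitched_image, stitched_labels

# Toy usage: 2 classes, 4x4 RGB patches.
img1, img2 = np.random.rand(4, 4, 3), np.random.rand(4, 4, 3)
y1 = np.array([1, 0])   # noisy annotation of sample 1
y2 = np.array([1, 1])   # noisy annotation of sample 2
x, y = stitch_up(img1, y1, img2, y2)
print(x.shape, y)       # (4, 8, 3) [1 1]
```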

Details

Database:
arXiv
Journal:
IEEE Transactions on Image Processing, vol. 32, pp. 2508-2519, 2023
Publication Type:
Report
Accession Number:
edsarx.2307.00880
Document Type:
Working Paper
Full Text:
https://doi.org/10.1109/TIP.2023.3270103