
SplitNet: Learnable Clean-Noisy Label Splitting for Learning with Noisy Labels

Authors:
Kim, Daehwan
Ryoo, Kwangrok
Cho, Hansang
Kim, Seungryong
Publication Year:
2022

Abstract

Annotating datasets with high-quality labels is crucial for the performance of deep networks, but in real-world scenarios labels are often contaminated by noise. To address this, several methods have been proposed that automatically split labels into clean and noisy sets and then train a semi-supervised learner within a Learning with Noisy Labels (LNL) framework. However, these methods rely on a handcrafted module for clean-noisy label splitting, which induces a confirmation bias in the semi-supervised learning phase and limits performance. In this paper, we present, for the first time, a learnable module for clean-noisy label splitting, dubbed SplitNet, and a novel LNL framework that trains SplitNet and the main network complementarily for the LNL task. We propose a dynamic threshold based on the split confidence produced by SplitNet to better optimize the semi-supervised learner. To enhance SplitNet training, we also present a risk hedging method. Our proposed method performs at a state-of-the-art level on various LNL benchmarks, especially in high-noise-ratio settings.

Comment: project page link: https://ku-cvlab.github.io/SplitNet/
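To make the abstract's core idea concrete, below is a minimal, hypothetical sketch of a learnable clean-noisy splitter and a dynamic threshold over its split confidence. The abstract does not specify SplitNet's architecture or the exact thresholding rule, so the names `SplitNetSketch`, `dynamic_clean_mask`, `base_tau`, and the MLP design are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn


class SplitNetSketch(nn.Module):
    """Hypothetical learnable splitter: maps per-sample features from the
    main network to a probability that the sample's label is clean.
    The real SplitNet architecture is not given in the abstract."""

    def __init__(self, feat_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Split confidence in [0, 1]: higher means "label likely clean".
        return torch.sigmoid(self.mlp(feats)).squeeze(-1)


def dynamic_clean_mask(split_conf: torch.Tensor, base_tau: float = 0.5) -> torch.Tensor:
    # Assumed dynamic threshold: scale a base threshold by the batch-mean
    # split confidence, so the clean/noisy cut-off adapts as the splitter
    # becomes more (or less) certain over the course of training.
    tau = base_tau * split_conf.mean().clamp(min=1e-6)
    return split_conf >= tau


if __name__ == "__main__":
    splitter = SplitNetSketch(feat_dim=512)
    feats = torch.randn(8, 512)        # stand-in features from the main network
    conf = splitter(feats)             # per-sample split confidence
    clean = dynamic_clean_mask(conf)   # boolean mask: True = treated as clean
    print(conf, clean)
```

In an LNL pipeline of the kind the abstract describes, samples flagged clean by such a mask would keep their labels for supervised loss terms, while the remainder would be treated as unlabeled data for the semi-supervised learner; how SplitNet itself is supervised (and the risk hedging method) is detailed in the paper, not reproduced here.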

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2211.11753
Document Type: Working Paper