
On the Importance of Diversity When Training Deep Learning Segmentation Models with Error-Prone Pseudo-Labels

Authors :
Nana Yang
Charles Rongione
Anne-Laure Jacquemart
Xavier Draye
Christophe De Vleeschouwer
Source :
Applied Sciences, Vol 14, Iss 12, p 5156 (2024)
Publication Year :
2024
Publisher :
MDPI AG, 2024.

Abstract

The key to training deep learning (DL) segmentation models lies in the collection of annotated data. The annotation process is, however, generally expensive in terms of human effort. Our paper leverages deep or traditional machine learning methods trained on a small set of manually labeled data to automatically generate pseudo-labels on large datasets, which are then used to train so-called data-reinforced deep learning models. The relevance of the approach is demonstrated in two application scenarios that differ both in task and in pseudo-label generation procedure, broadening the scope of our study's findings. Our experiments reveal that (i) data reinforcement helps, even with error-prone pseudo-labels, (ii) convolutional neural networks have the capability to regularize their training with respect to labeling errors, and (iii) there is an advantage to increasing diversity when generating the pseudo-labels, either by enriching the manual annotation with accurate annotations of singular samples, or by considering soft pseudo-labels per sample when prior information about their certainty is available.
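As an illustration of the soft pseudo-label idea mentioned in point (iii), the following is a minimal sketch, not the authors' code, of training a student segmentation network on per-pixel class probabilities produced by a teacher model trained on the small manually labeled set. The names `model`, `loader`, and `pseudo_probs` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def soft_pseudo_label_loss(logits, pseudo_probs):
    """Cross-entropy between student predictions and soft pseudo-labels.

    logits:       (B, C, H, W) raw scores from the student segmentation model
    pseudo_probs: (B, C, H, W) per-pixel class probabilities from the teacher,
                  encoding the teacher's certainty instead of hard labels
    """
    log_p = F.log_softmax(logits, dim=1)
    return -(pseudo_probs * log_p).sum(dim=1).mean()

def train_reinforced(model, loader, optimizer, device="cuda"):
    """Hypothetical loop over a large pseudo-labeled set (the 'data-reinforced' stage)."""
    model.train()
    for images, pseudo_probs in loader:  # loader yields (image, soft pseudo-label) pairs
        images, pseudo_probs = images.to(device), pseudo_probs.to(device)
        loss = soft_pseudo_label_loss(model(images), pseudo_probs)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

With hard (one-hot) pseudo-labels the same loss reduces to standard cross-entropy; keeping the teacher's probabilities instead lets uncertain pixels contribute a weaker, more diverse training signal.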

Details

Language :
English
ISSN :
2076-3417
Volume :
14
Issue :
12
Database :
Directory of Open Access Journals
Journal :
Applied Sciences
Publication Type :
Academic Journal
Accession number :
edsdoj.b0f359a3859b4d639b9838a978cbefff
Document Type :
article
Full Text :
https://doi.org/10.3390/app14125156