
Semi-supervised training of deep convolutional neural networks with heterogeneous data and few local annotations: An experiment on prostate histopathology image classification.

Authors :
Marini, Niccolò
Otálora, Sebastian
Müller, Henning
Atzori, Manfredo
Source :
Medical Image Analysis. Oct 2021, Vol. 73.
Publication Year :
2021

Abstract

Highlights:
• Improved classification performance on several prostate datasets using pseudo-labels
• Generalization of the models on several highly heterogeneous datasets
• Few locally annotated data used to generate a large amount of pseudo-labels
• Overfitting in transfer learning limited despite data coming from several sources
• Three training variants that combine strongly and weakly annotated data are proposed

Convolutional neural networks (CNNs) are state-of-the-art computer vision techniques for various tasks, particularly for image classification. However, there are domains where training classification models that generalize across several datasets is still an open challenge, because of highly heterogeneous data and the lack of large datasets with local annotations of the regions of interest, such as histopathology image analysis. Histopathology concerns the microscopic analysis of tissue specimens processed in glass slides to identify diseases such as cancer. Digital pathology concerns the acquisition, management, and automatic analysis of digitized histopathology images, which are large, with on the order of 100,000 × 100,000 pixels per image. Digital histopathology images are highly heterogeneous due to the variability of the image acquisition procedures. Creating locally labeled regions (required for training) is time-consuming and often expensive in the medical field, as physicians usually have to annotate the data. Despite the advances in deep learning, leveraging strongly and weakly annotated datasets to train classification models is still an unsolved problem, mainly when data are very heterogeneous. Large amounts of data are needed to create models that generalize well. This paper presents a novel approach to train CNNs that generalize to heterogeneous datasets originating from various sources and without local annotations.
The data analysis pipeline targets Gleason grading on prostate images and includes two models in sequence, following a teacher/student training paradigm. The teacher model (a high-capacity neural network) automatically annotates a set of pseudo-labeled patches used to train the student model (a smaller network). The two models are trained with two different teacher/student approaches: semi-supervised learning and semi-weakly supervised learning. For each of the two approaches, three student training variants are presented. The baseline is provided by training the student model only with the strongly annotated data. Classification performance is evaluated on the student model at the patch level (using the local annotations of the Tissue Micro-Arrays Zurich dataset) and at the global level (using the TCGA-PRAD, The Cancer Genome Atlas-PRostate ADenocarcinoma, whole slide image Gleason score). The teacher/student paradigm allows the models to better generalize on both datasets, despite the inter-dataset heterogeneity and the small number of local annotations used. The classification performance is improved at the patch level (up to κ = 0.6127 ± 0.0133 from κ = 0.5667 ± 0.0285), at the TMA core level (Gleason score) (up to κ = 0.7645 ± 0.0231 from κ = 0.7186 ± 0.0306), and at the WSI level (Gleason score) (up to κ = 0.4529 ± 0.0512 from κ = 0.2293 ± 0.1350). The results show that with the teacher/student paradigm, it is possible to train models that generalize on datasets from entirely different sources, despite the inter-dataset heterogeneity and the lack of large datasets with local annotations. [ABSTRACT FROM AUTHOR]
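The core of the teacher/student step described in the abstract is pseudo-labeling: the teacher scores unlabeled patches, and only its confident predictions become training labels for the student. The following is a minimal sketch of that selection step, not the paper's actual pipeline; the confidence threshold, the `select_pseudo_labels` helper, and the toy probabilities are illustrative assumptions.

```python
import numpy as np

def select_pseudo_labels(teacher_probs, threshold=0.9):
    """Keep patches whose top-class probability from the teacher
    meets the confidence threshold (an assumed selection rule);
    return their indices and hard pseudo-labels."""
    top = teacher_probs.max(axis=1)          # teacher's confidence per patch
    keep = np.where(top >= threshold)[0]     # confident patches only
    return keep, teacher_probs[keep].argmax(axis=1)

# Toy teacher softmax outputs for 4 unlabeled patches over 3 classes.
probs = np.array([
    [0.95, 0.03, 0.02],  # confident -> kept, pseudo-label 0
    [0.40, 0.35, 0.25],  # uncertain -> discarded
    [0.05, 0.92, 0.03],  # confident -> kept, pseudo-label 1
    [0.10, 0.10, 0.80],  # below threshold -> discarded
])
idx, labels = select_pseudo_labels(probs, threshold=0.9)
# The kept patches would then be merged with the strongly annotated
# set to train the smaller student network.
```

In this sketch only patches 0 and 2 survive the threshold; in the paper's setting, this is how a few local annotations can be leveraged to generate a much larger pseudo-labeled training set for the student.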

Details

Language :
English
ISSN :
1361-8415
Volume :
73
Database :
Academic Search Index
Journal :
Medical Image Analysis
Publication Type :
Academic Journal
Accession number :
152467741
Full Text :
https://doi.org/10.1016/j.media.2021.102165