
A generic deep learning framework to classify thyroid and breast lesions in ultrasound images.

Authors :
Zhu, Yi-Cheng
AlZoubi, Alaa
Jassim, Sabah
Jiang, Quan
Zhang, Yuan
Wang, Yong-Bing
Ye, Xian-De
Du, Hongbo
Source :
Ultrasonics. Feb 2021, Vol. 110.
Publication Year :
2021

Abstract

Highlights:

• Breast and thyroid cancers are two of the most common cancers affecting women worldwide.
• Both share common genetic features influenced by similar hormone families.
• A generic deep learning model classifies breast and thyroid cancers in US images.
• The model achieved 86.7% sensitivity and 87.7% specificity in classifying thyroid nodules.
• The model achieved 88.57% sensitivity and 84.6% specificity in classifying breast lesions.
• The model learns features commonly shared by breast lesions and thyroid nodules.
• The model achieved comparable or higher accuracy than that of radiologists.

Breast and thyroid cancers are two of the most common cancers affecting women worldwide. Ultrasonography (US) is a commonly used non-invasive imaging modality for detecting breast and thyroid cancers, but its clinical diagnostic accuracy for these cancers is controversial. Thyroid and breast cancers share some similar high-frequency ultrasound characteristics, such as a taller-than-wide shape ratio, hypo-echogenicity, and ill-defined margins. This study aims to develop an automatic scheme for classifying thyroid and breast lesions in ultrasound images using deep convolutional neural networks (DCNNs). In particular, we propose a generic DCNN architecture with transfer learning and the same architectural parameter settings to train models for thyroid and breast cancers (TNet and BNet, respectively), and test the viability of such a generic approach on ultrasound images collected from clinical practice. In addition, the potential of the thyroid model to learn the common features, and its performance in classifying both breast and thyroid lesions, are investigated. A retrospective dataset of 719 thyroid and 672 breast images, captured from US machines of different makes between October 2016 and December 2018, is used in this study. Test results show that both TNet and BNet, built on the same DCNN architecture, achieved good classification results (86.5% average accuracy for TNet and 89% for BNet). Furthermore, we used TNet to classify breast lesions, and the model achieved a sensitivity of 86.6% and a specificity of 87.1%, indicating its capability of learning features commonly shared by thyroid and breast lesions. We further tested the diagnostic performance of the TNet model against that of three radiologists. The area under the curve (AUC) for thyroid nodule classification is 0.861 (95% CI: 0.792–0.929) for the TNet model and 0.757–0.854 (95% CI: 0.658–0.934) for the three radiologists. The AUC for breast cancer classification is 0.875 (95% CI: 0.804–0.947) for the TNet model and 0.698–0.777 (95% CI: 0.593–0.872) for the radiologists, indicating the model's potential to classify both breast and thyroid cancers with a higher level of accuracy than that of radiologists. [ABSTRACT FROM AUTHOR]
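
This record does not include the implementation details of TNet and BNet (backbone, hyperparameters, or training regime). Purely as an illustrative sketch of the transfer-learning approach the abstract describes, the Python snippet below builds a binary (benign vs. malignant) lesion classifier from a pretrained backbone. The VGG16 backbone, the frozen feature extractor, and the helper name build_lesion_classifier are all assumptions for illustration, not the authors' actual architecture.

    # Illustrative sketch only: transfer learning for binary ultrasound
    # lesion classification. The backbone choice (VGG16) and the frozen
    # feature extractor are assumptions; the paper's TNet/BNet details
    # are not given in this record.
    import torch
    import torch.nn as nn
    from torchvision import models

    def build_lesion_classifier(num_classes: int = 2) -> nn.Module:
        """Pretrained ImageNet backbone with a replaced output layer."""
        backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        # Freeze the convolutional feature extractor; the fully connected
        # head remains trainable (one common transfer-learning regime;
        # the authors may instead fine-tune all layers).
        for param in backbone.features.parameters():
            param.requires_grad = False
        # Replace the final fully connected layer with a 2-way output
        # (benign vs. malignant).
        in_features = backbone.classifier[-1].in_features
        backbone.classifier[-1] = nn.Linear(in_features, num_classes)
        return backbone

    model = build_lesion_classifier()
    # Ultrasound frames are grayscale; repeating the channel three times
    # matches the RGB input the pretrained backbone expects.
    dummy = torch.randn(1, 3, 224, 224)
    logits = model(dummy)
    print(logits.shape)  # torch.Size([1, 2])

Because TNet and BNet share the same architecture and parameter settings, the same builder would be trained twice, once on the thyroid dataset and once on the breast dataset, which is what makes the cross-domain test (TNet applied to breast lesions) possible.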
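The abstract also reports AUC values with 95% confidence intervals for the model-versus-radiologist comparison, but does not state how the intervals were computed. The sketch below shows one common way to obtain such an interval, a percentile bootstrap over test cases (DeLong's method is another frequent choice); the function auc_with_ci and the synthetic toy data are hypothetical and not the study's data.

    # Illustrative sketch: AUC with a percentile-bootstrap 95% CI.
    # The CI method used by the authors is not stated in this record.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def auc_with_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        y_true = np.asarray(y_true)
        y_score = np.asarray(y_score)
        n = len(y_true)
        aucs = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)  # resample cases with replacement
            if len(np.unique(y_true[idx])) < 2:
                continue  # AUC is undefined if a resample has one class
            aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
        lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return roc_auc_score(y_true, y_score), (lo, hi)

    # Toy usage with synthetic labels/scores (not the study's data):
    y = np.random.default_rng(1).integers(0, 2, 200)
    s = y * 0.6 + np.random.default_rng(2).random(200) * 0.8
    auc, (lo, hi) = auc_with_ci(y, s)
    print(f"AUC {auc:.3f} (95% CI: {lo:.3f}-{hi:.3f})")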

Details

Language :
English
ISSN :
0041-624X
Volume :
110
Database :
Academic Search Index
Journal :
Ultrasonics
Publication Type :
Academic Journal
Accession number :
147458970
Full Text :
https://doi.org/10.1016/j.ultras.2020.106300