3 results for '"Du, Hongbo"'
Search Results
2. Explainable DCNN Decision Framework for Breast Lesion Classification from Ultrasound Images Based on Cancer Characteristics.
- Author
- AlZoubi, Alaa, Eskandari, Ali, Yu, Harry, and Du, Hongbo
- Subjects
- BREAST, ULTRASONIC imaging, CONVOLUTIONAL neural networks, IMAGE analysis, CLASSIFICATION, IMAGE recognition (Computer vision), DIAGNOSTIC ultrasonic imaging
- Abstract
In recent years, deep convolutional neural networks (DCNNs) have shown promising performance in medical image analysis, including breast lesion classification in 2D ultrasound (US) images. Despite the strong performance of DCNN solutions, explaining their decisions remains an open research problem, yet explainability has become essential for healthcare systems to accept and trust such models. This paper presents a novel framework for explaining DCNN classification decisions of lesions in ultrasound images using saliency maps that link the DCNN decisions to known cancer characteristics in the medical domain. The proposed framework consists of three main phases. First, DCNN models for classification in ultrasound images are built. Next, selected visualization methods are applied to obtain saliency maps on the input images of the DCNN models. In the final phase, the visualization outputs are mapped to domain-known cancer characteristics. The paper then demonstrates the use of the framework for breast lesion classification from ultrasound images. We first follow the transfer learning approach and build two DCNN models. We then analyze the visualization outputs of the trained DCNN models using the EGrad-CAM and Ablation-CAM methods. Through the visualization outputs, we map the DCNN model decisions for benign and malignant lesions to characteristics such as echogenicity, calcification, shape, and margin. A retrospective dataset of 1298 US images collected from different hospitals is used to evaluate the effectiveness of the framework. The test results show that these characteristics contribute differently to the decisions for benign and malignant lesions. Our study provides a foundation for other researchers to explain the DCNN classification decisions of other cancer types. [ABSTRACT FROM AUTHOR]
- Published
- 2024
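The final phase of the framework above maps saliency-map outputs to known cancer characteristics. As a rough illustration only (the paper's actual mapping measure is not given here), one plausible quantity is the fraction of positive saliency that falls inside a region annotated for a characteristic such as margin or shape — `characteristic_contribution` below is a hypothetical helper, not the authors' code:

```python
import numpy as np

def characteristic_contribution(saliency, region_mask):
    """Fraction of total positive saliency falling inside a region
    annotated for one cancer characteristic (hypothetical metric;
    the paper's exact mapping measure may differ)."""
    saliency = np.clip(saliency, 0.0, None)   # keep positive evidence only
    total = saliency.sum()
    if total == 0:
        return 0.0
    return float(saliency[region_mask].sum() / total)

# Toy 4x4 saliency map with strong activation in the lesion centre,
# and a mask marking a hypothetical "margin/shape" region.
sal = np.array([[0.0, 0.1, 0.1, 0.0],
                [0.1, 0.8, 0.9, 0.1],
                [0.1, 0.9, 0.8, 0.1],
                [0.0, 0.1, 0.1, 0.0]])
region = np.zeros((4, 4), dtype=bool)
region[1:3, 1:3] = True                        # central 2x2 block
print(f"{characteristic_contribution(sal, region):.2f}")  # ~0.81
```

Comparing such fractions between benign and malignant predictions is one simple way to quantify "these characteristics contribute differently" to each decision.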
3. A generic deep learning framework to classify thyroid and breast lesions in ultrasound images.
- Author
- Zhu, Yi-Cheng, AlZoubi, Alaa, Jassim, Sabah, Jiang, Quan, Zhang, Yuan, Wang, Yong-Bing, Ye, Xian-De, and Du, Hongbo
- Subjects
- CONVOLUTIONAL neural networks, THYROID gland, DEEP learning, ULTRASONIC imaging, BREAST ultrasound, BREAST, DIAGNOSTIC ultrasonic imaging, NIPPLE (Anatomy)
- Abstract
• Breast and thyroid cancers are two of the most common cancers affecting women worldwide.
• Both share common genetic features influenced by similar hormone families.
• A generic deep learning model classifies breast and thyroid cancers in US images.
• Achieved 86.7% sensitivity and 87.7% specificity in classifying thyroid nodules.
• Achieved 88.57% sensitivity and 84.6% specificity in classifying breast lesions.
• Our model learns features commonly shared by breast lesions and thyroid nodules.
• Our model achieved comparable or higher accuracy than that of radiologists.
Breast and thyroid cancers are two of the most common cancers affecting women worldwide. Ultrasonography (US) is a commonly used non-invasive imaging modality for detecting breast and thyroid cancers, but its clinical diagnostic accuracy for these cancers is controversial. Thyroid and breast cancers share some similar high-frequency ultrasound characteristics, such as a taller-than-wide shape ratio, hypo-echogenicity, and ill-defined margins. This study aims to develop an automatic scheme for classifying thyroid and breast lesions in ultrasound images using deep convolutional neural networks (DCNNs). In particular, we propose a generic DCNN architecture with transfer learning and the same architectural parameter settings to train models for thyroid and breast cancers (TNet and BNet) respectively, and test the viability of such a generic approach with ultrasound images collected from clinical practice. In addition, we investigate the potential of the thyroid model to learn the common features and its performance in classifying both breast and thyroid lesions. A retrospective dataset of 719 thyroid and 672 breast images, captured by US machines of different makes between October 2016 and December 2018, is used in this study. Test results show that both TNet and BNet, built on the same DCNN architecture, achieved good classification results (86.5% average accuracy for TNet and 89% for BNet).
Furthermore, we used TNet to classify breast lesions, and the model achieved a sensitivity of 86.6% and a specificity of 87.1%, indicating its capability to learn features commonly shared by thyroid and breast lesions. We further tested the diagnostic performance of the TNet model against that of three radiologists. The area under the curve (AUC) for thyroid nodule classification is 0.861 (95% CI: 0.792–0.929) for the TNet model and 0.757–0.854 (95% CI: 0.658–0.934) for the three radiologists. The AUC for breast cancer classification is 0.875 (95% CI: 0.804–0.947) for the TNet model and 0.698–0.777 (95% CI: 0.593–0.872) for the radiologists, indicating the model's potential to classify both breast and thyroid cancers with higher accuracy than radiologists. [ABSTRACT FROM AUTHOR]
- Published
- 2021
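The second abstract reports results as sensitivity, specificity, and AUC. A minimal sketch of how these standard metrics are computed from binary predictions and continuous scores — assuming the usual convention of 1 = malignant, 0 = benign; this is not the authors' evaluation code:

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    with 1 = malignant and 0 = benign (assumed label convention)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

def auc_score(y_true, scores):
    """AUC via the Mann-Whitney rank formulation: the probability that
    a randomly chosen positive case outranks a randomly chosen negative."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()   # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: 3 malignant (1) and 3 benign (0) cases.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.7]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sens={sens:.3f} spec={spec:.3f} auc={auc_score(y_true, scores):.3f}")
```

The per-radiologist AUC ranges quoted in the abstract come from applying the same AUC computation to each reader's scores separately.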
Discovery Service for Jio Institute Digital Library