7 results for "AlZoubi, Alaa"
Search Results
2. A generic deep learning framework to classify thyroid and breast lesions in ultrasound images
- Author
- Zhu, Yi-Cheng, AlZoubi, Alaa, Jassim, Sabah, Jiang, Quan, Zhang, Yuan, Wang, Yong-Bing, Ye, Xian-De, and Du, Hongbo
- Published
- 2021
3. Explainable DCNN Decision Framework for Breast Lesion Classification from Ultrasound Images Based on Cancer Characteristics.
- Author
- AlZoubi, Alaa, Eskandari, Ali, Yu, Harry, and Du, Hongbo
- Subjects
- BREAST, ULTRASONIC imaging, CONVOLUTIONAL neural networks, IMAGE analysis, CLASSIFICATION, IMAGE recognition (Computer vision), DIAGNOSTIC ultrasonic imaging
- Abstract
In recent years, deep convolutional neural networks (DCNNs) have shown promising performance in medical image analysis, including breast lesion classification in 2D ultrasound (US) images. Despite the outstanding performance of DCNN solutions, explaining their decisions remains an open problem, yet explainability has become essential for healthcare systems to accept and trust the models. This paper presents a novel framework for explaining DCNN classification decisions of lesions in ultrasound images using saliency maps that link the DCNN decisions to known cancer characteristics in the medical domain. The proposed framework consists of three main phases. First, DCNN models for classification in ultrasound images are built. Next, selected visualization methods are applied to obtain saliency maps for the input images of the DCNN models. In the final phase, the visualization outputs are mapped to cancer characteristics known in the medical domain. The paper then demonstrates the use of the framework for breast lesion classification from ultrasound images. We first follow the transfer learning approach and build two DCNN models. We then analyze the visualization outputs of the trained DCNN models using the EGrad-CAM and Ablation-CAM methods. Through the visualization outputs, we map the DCNN model decisions for benign and malignant lesions to characteristics such as echogenicity, calcification, shape, and margin. A retrospective dataset of 1298 US images collected from different hospitals is used to evaluate the effectiveness of the framework. The test results show that these characteristics contribute differently to the decisions for benign and malignant lesions. Our study provides a foundation for other researchers to explain DCNN classification decisions for other cancer types.
- Published
- 2024
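The abstract above describes linking DCNN decisions to cancer characteristics through saliency maps (EGrad-CAM and Ablation-CAM). Below is a minimal sketch of the underlying class-activation-map idea, using plain Grad-CAM in PyTorch; the model, target layer, and input are placeholders, not the paper's actual pipeline.

```python
# Minimal Grad-CAM-style saliency sketch (PyTorch). EGrad-CAM / Ablation-CAM,
# as used in the paper, differ in how channel weights are computed, but the
# overall forward/backward-hook pipeline is similar.
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, image, target_layer, class_idx=None):
    """Return a saliency map highlighting regions that drive the class score."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["value"] = output

    def bwd_hook(_, grad_in, grad_out):
        gradients["value"] = grad_out[0]

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    logits = model(image)                      # image: (1, 3, H, W)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()

    # Channel weights = spatially averaged gradients (classic Grad-CAM weighting).
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Usage (hypothetical): saliency for a benign/malignant classifier's prediction.
model = models.resnet50(weights=None).eval()   # stand-in for a trained DCNN
x = torch.randn(1, 3, 224, 224)                # stand-in for a preprocessed US image
heatmap = grad_cam(model, x, model.layer4[-1])
```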
4. ENAS-B: Combining ENAS With Bayesian Optimization for Automatic Design of Optimal CNN Architectures for Breast Lesion Classification From Ultrasound Images.
- Author
- Ahmed, Mohammed, Du, Hongbo, and AlZoubi, Alaa
- Abstract
Efficient Neural Architecture Search (ENAS) is a recent development in searching for optimal cell structures for Convolutional Neural Network (CNN) design. It has been successfully used in various applications, including ultrasound image classification for breast lesions. However, the existing ENAS approach optimizes only the cell structures, not the whole CNN architecture or its trainable hyperparameters. This paper presents a novel framework for the automatic design of CNN architectures that combines the strengths of ENAS and Bayesian Optimization in two stages. First, we use ENAS to search for optimal normal and reduction cells. Second, with the optimal cells and a suitable hyperparameter search space, we adopt Bayesian Optimization to find the optimal depth of the network and the optimal configuration of the trainable hyperparameters. To test the validity of the proposed framework, a dataset of 1522 breast lesion ultrasound images is used for searching and modeling. We then evaluate the robustness of the proposed approach by testing the optimized CNN model on three external datasets consisting of 727 benign and 506 malignant lesion images. We further compare the CNN model with the default ENAS-based CNN model and with CNN models based on state-of-the-art architectures. The results (an error rate of no more than 20.6% on internal tests and 17.3% on average across external tests) show that the proposed framework generates robust and lightweight CNN models.
- Published
- 2024
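The ENAS-B abstract above describes a second stage in which, given the ENAS-found cells, Bayesian Optimization selects the network depth and the trainable hyperparameters. A hedged sketch of that stage follows, using Optuna as a stand-in optimizer; the search ranges and the train_and_validate helper are hypothetical, not taken from the paper.

```python
# Bayesian-style search over depth and trainable hyperparameters once the cell
# structures are fixed. Optuna's default TPE sampler stands in for the paper's
# Bayesian Optimization; ranges and the objective are illustrative assumptions.
import math
import random

import optuna

def train_and_validate(depth, learning_rate, dropout, batch_size):
    """Hypothetical stand-in for the real training loop: stack the ENAS-found
    cells `depth` times, train on the ultrasound training split, and return the
    validation error rate. A synthetic score is returned so the sketch runs."""
    return (abs(math.log10(learning_rate) + 2.5) * 0.05
            + dropout * 0.1
            + random.random() * 0.02)

def objective(trial):
    depth = trial.suggest_int("depth", 4, 20)                         # number of stacked cells
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])
    return train_and_validate(depth, lr, dropout, batch_size)         # error rate to minimize

study = optuna.create_study(direction="minimize")   # smaller validation error is better
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```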
5. Automatic Detection of Thyroid Nodule Characteristics From 2D Ultrasound Images.
- Author
- Han, Dongxu, Ibrahim, Nasir, Lu, Feng, Zhu, Yicheng, Du, Hongbo, and AlZoubi, Alaa
- Abstract
Thyroid cancer is one of the most common types of cancer worldwide, and ultrasound (US) imaging is a modality commonly used for thyroid cancer diagnosis. The American College of Radiology Thyroid Imaging Reporting and Data System (ACR TIRADS) has been widely adopted to identify and classify US image characteristics of thyroid nodules. This paper presents novel methods for detecting the characteristic descriptors derived from TIRADS. Our methods return descriptions of nodule margin irregularity, margin smoothness, and calcification, as well as shape and echogenicity, using conventional computer vision and deep learning techniques. We evaluate our methods using datasets of 471 US images of thyroid nodules acquired from US machines of different makes and labeled by multiple radiologists. The proposed methods achieved overall accuracies of 88.00%, 93.18%, and 89.13% in classifying nodule calcification, margin irregularity, and margin smoothness, respectively. Further tests with limited data also show a promising overall accuracy of 90.60% for echogenicity and 100.00% for nodule shape. This study provides automated annotation of thyroid nodule characteristics from 2D ultrasound images, and the experimental results show promising performance of our methods for thyroid nodule analysis. The automatic detection of correct characteristics not only offers supporting evidence for diagnosis but also enables patient reports to be generated rapidly, thereby decreasing the workload of radiologists and enhancing productivity.
- Published
- 2024
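The abstract above mentions conventional computer-vision techniques for TIRADS descriptors such as margin irregularity. One illustrative cue (not necessarily the paper's actual descriptor) is the contour circularity of a segmented nodule; the sketch below assumes a binary nodule mask and an arbitrary decision threshold.

```python
# Illustrative sketch only: contour circularity as a crude margin-irregularity cue.
# The paper's actual descriptors and thresholds are not reproduced here; the
# binary-mask input and the 0.85 cut-off are assumptions for the example.
import cv2
import numpy as np

def circularity(mask: np.ndarray) -> float:
    """mask: binary nodule segmentation (uint8, 0/255).
    Returns 4*pi*area/perimeter^2 in (0, 1]; a perfect circle scores 1,
    and lower values suggest a more irregular margin."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return 0.0
    contour = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    return float(4.0 * np.pi * area / (perimeter ** 2 + 1e-8))

# Usage with a hypothetical mask file:
# mask = cv2.imread("nodule_mask.png", cv2.IMREAD_GRAYSCALE)
# is_irregular = circularity(mask) < 0.85
```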
6. Machine Learning Assisted Doppler Features for Enhancing Thyroid Cancer Diagnosis: A Multi‐Cohort Study.
- Author
- Zhu, Yi‐Cheng, Du, Hongbo, Jiang, Quan, Zhang, Tao, Huang, Xu‐Juan, Zhang, Yuan, Shi, Xiu‐Rong, Shan, Jun, and AlZoubi, Alaa
- Subjects
- CANCER diagnosis, ARTIFICIAL neural networks, DOPPLER ultrasonography, MACHINE learning, THYROID nodules, TUMOR classification, DIAGNOSTIC ultrasonic imaging
- Abstract
Background: This pilot study aims to exploit machine learning techniques to extract color Doppler ultrasound (CDUS) features and to build an artificial neural network (ANN) model based on these CDUS features for improving the diagnostic performance of thyroid cancer classification. Methods: A total of 674 patients with 712 thyroid nodules (TNs) (512 from an internal dataset and 200 from an external dataset) were randomly selected for this retrospective study. We used an ANN to build a model (TDUS‐Net) for classifying malignant and benign TNs using both the automatically extracted quantitative CDUS features (whole ratio, intranodular ratio, peripheral ratio, and number of vessels) and gray‐scale ultrasound (US) features defined by the American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TI‐RADS). We then compared the diagnostic performance of this model with that of another ANN model based on the gray‐scale US features alone (TUS‐Net) and with that of radiologists. Results: TDUS‐Net achieved a higher area under the curve (AUC) (0.898, 95% CI: 0.868–0.922) than TUS‐Net (0.881, 95% CI: 0.850–0.908) in the internal tests. In the external tests, TDUS‐Net (AUC: 0.925, 95% CI: 0.880–0.958) outperformed radiologists (AUC: 0.810, 95% CI: 0.749–0.862). Conclusions: Applying a machine learning model that combines both gray‐scale US features and CDUS features can achieve comparable or even higher performance than radiologists in classifying TNs.
- Published
- 2022
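The abstract above lists automatically extracted quantitative CDUS features (whole, intranodular, and peripheral ratios, plus vessel count). A rough sketch of how such colour-pixel ratios could be computed from a colour Doppler frame and a nodule mask follows; the HSV saturation threshold and the fixed-width peripheral band are assumptions, not the paper's definitions.

```python
# Rough sketch of colour-pixel vascularity ratios from a colour Doppler frame.
# The saturation threshold (60) and the peripheral band width are illustrative
# assumptions; the paper's own segmentation and region definitions are not shown.
import cv2
import numpy as np

def doppler_ratios(bgr: np.ndarray, nodule_mask: np.ndarray, band_px: int = 15):
    """bgr: colour Doppler frame; nodule_mask: binary nodule mask (uint8, 0/255).
    Returns (whole, intranodular, peripheral) colour-pixel ratios."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    colour = hsv[..., 1] > 60                              # assumed "coloured pixel" test

    kernel = np.ones((2 * band_px + 1, 2 * band_px + 1), np.uint8)
    dilated = cv2.dilate(nodule_mask, kernel)              # nodule plus surrounding band
    peripheral = cv2.subtract(dilated, nodule_mask)        # ring around the nodule

    def ratio(region):
        inside = region > 0
        return float(np.logical_and(colour, inside).sum()) / max(int(inside.sum()), 1)

    return ratio(dilated), ratio(nodule_mask), ratio(peripheral)
```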
7. Pair-Activity Analysis From Video Using Qualitative Trajectory Calculus.
- Author
- AlZoubi, Alaa, Al-Diri, Bashir, Pike, Tom, Kleinhappel, Tanja, and Dickinson, Patrick
- Subjects
- VIDEO coding, IMAGE recognition (Computer vision), DOCUMENT clustering, MACHINE learning, ROBUST control
- Abstract
The automated analysis of interacting objects or people from video has many uses, including the recognition of activities and the identification of prototypical or unusual behaviors. Existing techniques generally use temporal sequences of quantifiable real-valued features, such as object position or orientation; more recently, however, qualitative representations have been proposed. In this paper, we present a novel and robust qualitative method that can be used for both the classification and the clustering of pair-activities. We use qualitative trajectory calculus (QTC) to represent the relative motion between two objects and encode their interactions as a trajectory of QTC states. A key element is a general and robust means of determining sequence similarity, which we term Normalized Weighted Sequence Alignment; we show that this is an effective metric for both recognition and clustering problems. We have evaluated our method across three different data sets and have shown that it outperforms state-of-the-art quantitative methods, achieving an error rate of no more than 4.1% for recognition and cluster purities higher than 90%. Our motivation originates from an interest in the automated analysis of animal behaviors, and we present a comprehensive video data set of fish (Gasterosteus aculeatus) behaviors collected from lab-based experiments.
- Published
- 2018
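The abstract above encodes pair interactions as sequences of QTC states. Below is a minimal sketch of a basic QTC encoding for two 2D trajectories; the paper's full QTC representation and its Normalized Weighted Sequence Alignment similarity are not reproduced here.

```python
# Minimal basic-QTC sketch: at each time step, record whether each object moves
# towards (-), away from (+), or stays stable (0) relative to the other object's
# previous position.
import numpy as np

def qtc_basic(traj_a: np.ndarray, traj_b: np.ndarray, eps: float = 1e-6):
    """traj_a, traj_b: (T, 2) arrays of positions. Returns one (q1, q2) state
    per transition, with symbols drawn from {'-', '0', '+'}."""
    def symbol(d_prev, d_next):
        if d_next < d_prev - eps:
            return "-"      # moving towards the other object
        if d_next > d_prev + eps:
            return "+"      # moving away from it
        return "0"          # distance unchanged (within eps)

    states = []
    for t in range(len(traj_a) - 1):
        q1 = symbol(np.linalg.norm(traj_a[t] - traj_b[t]),
                    np.linalg.norm(traj_a[t + 1] - traj_b[t]))
        q2 = symbol(np.linalg.norm(traj_b[t] - traj_a[t]),
                    np.linalg.norm(traj_b[t + 1] - traj_a[t]))
        states.append((q1, q2))
    return states

# Toy example: two objects approaching each other head-on.
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = np.array([[5.0, 0.0], [4.0, 0.0], [3.0, 0.0]])
print(qtc_basic(a, b))   # [('-', '-'), ('-', '-')]
```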