Integrating explainability into deep learning-based models for white blood cells classification.
- Source :
- Computers & Electrical Engineering. Sep 2023, Vol. 110.
- Publication Year :
- 2023
Abstract
- White blood cells (WBCs) are crucial constituents of the blood that protect the human body against infections and viruses. The classification of WBCs in a blood smear image is used to diagnose a range of haematological disorders. The manual identification of WBCs can result in potential errors, necessitating automated systems that can assist in classification. Recently, various deep learning models, such as DenseNet121, Xception, MobileNetV2, ResNet50, and VGG16, have been used to classify WBCs. However, the available classification models are black boxes because their decisions are difficult for humans to understand without further exploration. The interpretability and explainability of these models are essential, as their decisions can have severe consequences for patients. In this paper, we integrate an explainable AI (XAI) technique called local interpretable model-agnostic explanations (LIME) with the DenseNet121 classification model for WBC classification. Interpretable results allow users to understand and verify the model's predictions, enhancing their confidence in the automated diagnosis. [ABSTRACT FROM AUTHOR]

Highlights :
• Comparison of deep learning models for leukocyte classification.
• Inclusion of explainability to visually explain the final decision.
• The results show high explainability of the prediction mechanism.
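The abstract describes pairing LIME with a DenseNet121 classifier. The paper's exact pipeline is not reproduced here; below is a minimal, self-contained sketch of the core LIME idea for images (perturb "superpixels", query the classifier on the perturbed copies, fit a locally weighted linear surrogate whose coefficients rank segment importance). A toy intensity-based classifier stands in for DenseNet121, and a simple grid replaces a real superpixel segmentation; all names and parameters are illustrative, not from the paper.

```python
import numpy as np

def lime_explain(image, predict_fn, grid=4, num_samples=200, seed=0):
    """LIME-style sketch: perturb grid 'superpixels', fit a weighted
    linear surrogate, and return one importance weight per segment."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    n_seg = grid * grid

    # Assign each pixel to a grid cell (stand-in for superpixel segmentation).
    seg = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            seg[i, j] = (i * grid // h) * grid + (j * grid // w)

    # Binary interpretable samples: which segments are kept (1) or masked (0).
    Z = rng.integers(0, 2, size=(num_samples, n_seg))
    Z[0] = 1  # include the unperturbed image

    # Query the black-box classifier on each perturbed image.
    preds = np.empty(num_samples)
    for k in range(num_samples):
        pert = image.copy()
        pert[np.isin(seg, np.where(Z[k] == 0)[0])] = 0  # zero out masked segments
        preds[k] = predict_fn(pert)

    # Proximity kernel: samples closer to the original image weigh more.
    kept_frac = Z.mean(axis=1)
    wts = np.exp(-((1.0 - kept_frac) ** 2) / 0.25)

    # Weighted least-squares fit of the linear surrogate (with intercept).
    X = np.hstack([Z, np.ones((num_samples, 1))])
    sw = np.sqrt(wts)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], preds * sw, rcond=None)
    return coef[:n_seg]  # per-segment importance weights

# Toy demo: the "classifier" only looks at the top-left quadrant,
# so segment 0 (top-left) should dominate the explanation.
img = np.ones((8, 8))
weights = lime_explain(img, lambda x: x[:4, :4].mean(), grid=2, num_samples=64)
# segment 0 (top-left) gets the largest weight
```

In the paper's setting, `predict_fn` would wrap the trained DenseNet121 (returning the probability of a given WBC class), and a proper superpixel algorithm would replace the grid; the surrogate's top-weighted segments are then overlaid on the blood smear image as the visual explanation.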
- Subjects :
- *DEEP learning
*LEUCOCYTES
*HUMAN body
*CLASSIFICATION
Details
- Language :
- English
- ISSN :
- 0045-7906
- Volume :
- 110
- Database :
- Academic Search Index
- Journal :
- Computers & Electrical Engineering
- Publication Type :
- Academic Journal
- Accession number :
- 170745256
- Full Text :
- https://doi.org/10.1016/j.compeleceng.2023.108913