Looking in the Right Place for Anomalies: Explainable AI through Automatic Location Learning
- Source :
- 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI)
- Publication Year :
- 2020
-
Abstract
- Deep learning has now become the de facto approach to the recognition of anomalies in medical imaging. However, the 'black box' way in which these networks classify medical images into anomaly labels poses problems for their acceptance, particularly among clinicians. Current explainable AI methods offer justifications through visualizations such as heat maps, but cannot guarantee that the network is focusing on the relevant image region that fully contains the anomaly. In this paper, we develop an approach to explainable AI in which the anomaly, when present, is assured to overlap its expected location. This is made possible by automatically extracting location-specific labels from textual reports and learning the association of expected locations to labels using a hybrid combination of Bi-Directional Long Short-Term Memory Recurrent Neural Networks (Bi-LSTM) and DenseNet-121. Using this expected location to bias the subsequent attention-guided inference network, based on ResNet101, results in the isolation of the anomaly at the expected location when present. The method is evaluated on a large chest X-ray dataset.
- Comment: 5 pages. Paper presented as a poster at the International Symposium on Biomedical Imaging, 2020, Paper Number 655.
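- The core idea of biasing inference toward an expected location can be illustrated with a minimal sketch: an attention map over image regions is reweighted by a prior mask marking where the anomaly is expected, then renormalized. The function name, the multiplicative-bias form, and the `alpha` blending parameter are illustrative assumptions, not the authors' exact formulation.

```python
def bias_attention(attn, location_prior, alpha=0.5):
    """Blend a network attention map with an expected-location prior.

    attn           -- 2D list of non-negative attention weights
    location_prior -- 2D list, 1.0 inside the expected region, 0.0 outside
    alpha          -- weight given to the location prior (an assumption here,
                      not a parameter taken from the paper)
    """
    rows, cols = len(attn), len(attn[0])
    # Keep (1 - alpha) of the original attention everywhere, and add
    # alpha of it back only where the location prior is active.
    biased = [[(1 - alpha) * attn[r][c]
               + alpha * attn[r][c] * location_prior[r][c]
               for c in range(cols)] for r in range(rows)]
    # Renormalize so the map still sums to 1.
    total = sum(sum(row) for row in biased) or 1.0
    return [[v / total for v in row] for row in biased]

# Usage: a uniform 2x2 attention map biased toward the top-left region.
attn = [[0.25, 0.25], [0.25, 0.25]]
prior = [[1.0, 0.0], [0.0, 0.0]]
biased = bias_attention(attn, prior)  # top-left weight rises, others fall
```

- With a uniform map and this prior, the in-region weight becomes 0.4 and each out-of-region weight 0.2, so attention concentrates at the expected location while remaining a valid distribution.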
Details
- Database :
- arXiv
- Journal :
- 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI)
- Publication Type :
- Report
- Accession number :
- edsarx.2008.00363
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1109/ISBI45749.2020.9098370