Evaluating the Explainable AI Method Grad-CAM for Breath Classification on Newborn Time Series Data
- Author
- Oprea, Camelia; Grüne, Mike; Buglowski, Mateusz; Olivier, Lena; Orlikowsky, Thorsten; Kowalewski, Stefan; Schoberer, Mark; and Stollenwerk, André
- Subjects
- Computer Science - Artificial Intelligence; Computer Science - Computers and Society; Computer Science - Machine Learning; I.2.1; K.4.m; J.3
- Abstract
With the digitalization of health care systems, artificial intelligence is becoming more present in medicine. Machine learning in particular shows great potential for complex tasks such as time series classification, usually at the cost of transparency and comprehensibility. This leads to a lack of trust by humans and thus hinders its active usage. Explainable artificial intelligence tries to close this gap by providing insight into the decision-making process; the actual usefulness of its different methods, however, remains unclear. This paper proposes a user-study-based evaluation of the explanation method Grad-CAM applied to a neural network for the classification of breaths in time series neonatal ventilation data. We present the perceived usefulness of the explainability method to different stakeholders, exposing the difficulty of achieving actual transparency and many participants' wish for more in-depth explanations.
- Comment
- © 2024 The authors. This work has been accepted to IFAC for publication under a Creative Commons Licence CC-BY-NC-ND. Accepted for the 12th IFAC Symposium on Biological and Medical Systems. 6 pages, 7 figures.
- Published
- 2024
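For readers unfamiliar with the explanation method evaluated in the paper, the core Grad-CAM computation adapted to a 1D (time-series) convolutional layer can be sketched as below. This is a minimal NumPy illustration, not the authors' implementation: the function name, array shapes, and the random toy inputs are assumptions; in practice the feature maps and gradients would come from a trained network's target convolutional layer.

```python
import numpy as np

def grad_cam_1d(feature_maps, gradients):
    """Grad-CAM relevance curve for a 1D (time-series) CNN layer.

    feature_maps: array of shape (channels, time) -- layer activations A^k
    gradients:    array of the same shape -- d(class score)/dA^k
    Returns a length-`time` relevance curve scaled to [0, 1].
    """
    # Channel weights alpha_k: gradients global-average-pooled over time
    weights = gradients.mean(axis=1)                                # (channels,)
    # Weighted sum of feature maps over channels, then ReLU
    cam = np.maximum((weights[:, None] * feature_maps).sum(axis=0), 0.0)
    # Scale to [0, 1] for overlaying on the signal (guard flat maps)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with random activations and gradients (illustrative only)
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 64))    # 8 channels, 64 time steps
dA = rng.standard_normal((8, 64))
relevance = grad_cam_1d(A, dA)
print(relevance.shape)  # (64,)
```

The resulting curve can be overlaid on the ventilation waveform to highlight which portions of a breath most influenced the classifier's decision, which is the kind of visual explanation the user study asked stakeholders to judge.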