Explainable AI techniques for sepsis diagnosis: Evaluating LIME and SHAP through a user study
- Publication Year :
- 2021
- Publisher :
- Högskolan i Skövde, Institutionen för informationsteknologi, 2021.
Abstract
- Artificial intelligence has had a large impact on many industries and transformed some domains quite radically. There is tremendous potential in applying AI to the field of medical diagnostics. A major issue with applying these techniques in some domains is the inability of AI models to provide an explanation or justification for their predictions. This creates a problem wherein a user may not trust an AI prediction, or legal requirements for justifying decisions are not met. This thesis examines how two explainable AI techniques (Shapley Additive Explanations and Local Interpretable Model-Agnostic Explanations) can establish a degree of trust for the user in the medical diagnostics field. These techniques are evaluated through a user study. The results suggest that supplementing classifications or predictions with a post-hoc visualization increases interpretability by a small margin. Further research using a user study survey or interview is suggested to increase the interpretability and explainability of machine learning results.
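- For illustration only (not part of the record or the thesis): a minimal sketch of how the shap and lime libraries can produce the kind of post-hoc, per-prediction explanations the abstract describes. The classifier, vital-sign feature names, and data below are invented placeholders, not the study's materials.

# Hypothetical sketch: post-hoc explanations with SHAP and LIME for an
# illustrative sepsis classifier. All names and data here are assumptions.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

# Made-up vital-sign features and synthetic training data.
feature_names = ["heart_rate", "resp_rate", "temperature", "wbc_count"]
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
y_train = rng.integers(0, 2, size=200)  # 1 = sepsis, 0 = no sepsis

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
patient = X_train[:1]  # one patient whose prediction we explain

# SHAP: per-feature Shapley value contributions to this prediction.
shap_values = shap.TreeExplainer(model).shap_values(patient)

# LIME: fit a local interpretable surrogate model around this prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, class_names=["no sepsis", "sepsis"]
)
lime_exp = lime_explainer.explain_instance(patient[0], model.predict_proba)
print(lime_exp.as_list())  # (feature, weight) pairs for this one prediction

Both outputs attribute a single prediction to individual input features, which is the post-hoc visualization style the user study evaluated.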
Details
- Language :
- English
- Database :
- OpenAIRE
- Accession number :
- edsair.dedup.wf.001..84596ae6fd4b7c4d054b138e480486fc