
Transparent AI: Developing an Explainable Interface for Predicting Postoperative Complications

Authors:
Ren, Yuanfang
Tripathi, Chirayu
Guan, Ziyuan
Zhu, Ruilin
Hougha, Victoria
Ma, Yingbo
Hu, Zhenhong
Balch, Jeremy
Loftus, Tyler J.
Rashidi, Parisa
Shickel, Benjamin
Ozrazgat-Baslanti, Tezcan
Bihorac, Azra
Publication Year:
2024

Abstract

Given the sheer volume of surgical procedures and the significant rate of postoperative fatalities, assessing and managing surgical complications has become a critical public health concern. Existing artificial intelligence (AI) tools for risk surveillance and diagnosis often lack adequate interpretability, fairness, and reproducibility. To address this, we proposed an Explainable AI (XAI) framework designed to answer five critical questions: why, why not, how, what if, and what else, with the goal of enhancing the explainability and transparency of AI models. We incorporated various techniques such as Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), counterfactual explanations, model cards, an interactive feature manipulation interface, and the identification of similar patients to address these questions. We showcased an XAI interface prototype that adheres to this framework for predicting major postoperative complications. This initial implementation has provided valuable insights into the vast explanatory potential of our XAI framework and represents an initial step towards its clinical adoption.

Comment: 32 pages, 7 figures, 4 supplement figures and 1 supplement table
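To illustrate the kind of local explanation the abstract refers to, the sketch below implements a LIME-style surrogate from scratch: a hypothetical black-box risk model is queried on perturbations around a single patient, the samples are weighted by proximity, and a weighted linear fit yields per-feature attributions. The `risk_model` function, its coefficients, and the feature meanings are illustrative assumptions, not the authors' actual model or interface.

```python
import numpy as np

def risk_model(X):
    """Hypothetical black-box complication-risk predictor: a logistic
    function of two standardized features (stand-ins for, e.g., age
    and estimated blood loss). Purely illustrative."""
    logits = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.2
    return 1.0 / (1.0 + np.exp(-logits))

def lime_explain(predict, x, n_samples=5000, width=0.75, seed=0):
    """LIME-style local explanation: perturb x, weight the perturbed
    samples by proximity to x, and fit a weighted linear surrogate
    whose coefficients approximate local feature attributions."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict(Z)
    d2 = np.sum((Z - x) ** 2, axis=1)
    w = np.exp(-d2 / width**2)                   # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])  # intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                             # per-feature attributions

x0 = np.array([0.3, -0.2])       # one (hypothetical) patient
attrib = lime_explain(risk_model, x0)
```

Because the surrogate is linear, the sign of each attribution answers a local "why": here the first feature pushes predicted risk up and the second pushes it down near this patient, mirroring the black-box model's coefficients.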

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2404.16064
Document Type: Working Paper