
A systematic review of trustworthy and explainable artificial intelligence in healthcare: Assessment of quality, bias risk, and data fusion.

Authors :
Albahri, A.S.
Duhaim, Ali M.
Fadhel, Mohammed A.
Alnoor, Alhamzah
Baqer, Noor S.
Alzubaidi, Laith
Albahri, O.S.
Alamoodi, A.H.
Bai, Jinshuai
Salhi, Asma
Santamaría, Jose
Ouyang, Chun
Gupta, Ashish
Gu, Yuantong
Deveci, Muhammet
Source :
Information Fusion. Aug2023, Vol. 96, p156-191. 36p.
Publication Year :
2023

Abstract

• Identify gaps in state-of-the-art research to support healthcare AI trustworthiness.
• Explore explainable AI and fusion in healthcare.
• Explore legitimacy, morality, and robustness standards for AI policymakers.
• Assess quality, bias risk, and data fusion for medical AI trustworthiness.
• Examine eight research gaps to guide future study and assist researchers.

In the last few years, the trend in health care of embracing artificial intelligence (AI) has dramatically changed the medical landscape. Medical centres have adopted AI applications to increase the accuracy of disease diagnosis and mitigate health risks. AI applications have changed rules and policies related to healthcare practice and work ethics. However, building trustworthy and explainable AI (XAI) in healthcare systems is still in its early stages. Specifically, the European Union has stated that AI must be human-centred and trustworthy, whereas in the healthcare sector, low methodological quality and high bias risk have become major concerns. This study endeavours to offer a systematic review of the trustworthiness and explainability of AI applications in healthcare, incorporating the assessment of quality, bias risk, and data fusion to supplement previous studies and provide more accurate and definitive findings. To this end, 64 recent contributions on the trustworthiness of AI in healthcare were identified from multiple databases (i.e., ScienceDirect, Scopus, Web of Science, and IEEE Xplore) using a rigorous literature search method and selection criteria. The considered papers were organised into a coherent and systematic classification comprising seven categories: explainable robotics, prediction, decision support, blockchain, transparency, digital health, and review. In this paper, we present a systematic and comprehensive analysis of earlier studies and open the door to potential future studies by discussing in depth the challenges, motivations, and recommendations.
A systematic science-mapping analysis was also performed to reorganise and summarise the results of earlier studies and to address the issues of trustworthiness and objectivity. Moreover, this work provides decisive evidence for the trustworthiness of AI in health care by presenting eight state-of-the-art critical analyses of the most relevant research gaps. In addition, to the best of our knowledge, this study is the first to investigate the feasibility of utilising trustworthy and XAI applications in healthcare by incorporating data fusion techniques and connecting various important pieces of information from available healthcare datasets and AI algorithms. The analysis of the reviewed contributions revealed crucial implications for academics and practitioners, after which potential methodological aspects for enhancing the trustworthiness of AI applications in the medical sector were examined. Subsequently, the theoretical concepts and current use of 17 XAI methods in health care were addressed. Finally, several objectives and guidelines were provided to policymakers for establishing electronic health-care systems focused on achieving relevant features such as legitimacy, morality, and robustness. This study also covered several types of information fusion in healthcare: data, feature, image, decision, multimodal, hybrid, and temporal. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
1566-2535
Volume :
96
Database :
Academic Search Index
Journal :
Information Fusion
Publication Type :
Academic Journal
Accession number :
163261060
Full Text :
https://doi.org/10.1016/j.inffus.2023.03.008