
A Review of Multimodal Explainable Artificial Intelligence: Past, Present and Future

Authors:
Sun, Shilin
An, Wenbin
Tian, Feng
Nan, Fang
Liu, Qidong
Liu, Jun
Shah, Nazaraf
Chen, Ping
Publication Year:
2024

Abstract

Artificial intelligence (AI) has developed rapidly, driven by advances in computational power and the growth of massive datasets. However, this progress has also heightened the challenge of interpreting the "black-box" nature of AI models. To address these concerns, eXplainable AI (XAI) has emerged, focusing on transparency and interpretability to enhance human understanding of and trust in AI decision-making processes. In the context of multimodal data fusion and complex reasoning scenarios, Multimodal eXplainable AI (MXAI) has been proposed to integrate multiple modalities for both prediction and explanation tasks. Meanwhile, the advent of Large Language Models (LLMs) has led to remarkable breakthroughs in natural language processing, yet their complexity has further exacerbated the challenges MXAI must address. To gain key insights into the development of MXAI methods and to provide crucial guidance for building more transparent, fair, and trustworthy AI systems, we review MXAI methods from a historical perspective and categorize them across four eras: traditional machine learning, deep learning, discriminative foundation models, and generative LLMs. We also review the evaluation metrics and datasets used in MXAI research, concluding with a discussion of future challenges and directions. A project related to this review has been created at https://github.com/ShilinSun/mxai_review.

Comment: This work has been submitted to the IEEE for possible publication

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2412.14056
Document Type:
Working Paper