
Interpretability of Machine Learning Methods Applied to Neuroimaging

Authors:
Elina Thibeau-Sutre
Sasha Collin
Ninon Burgos
Olivier Colliot
Affiliation:
Algorithms, models and methods for images and signals of the human brain (ARAMIS), Inria de Paris, Sorbonne Université (SU), Institut du Cerveau = Paris Brain Institute (ICM), Centre National de la Recherche Scientifique (CNRS), Institut National de la Santé et de la Recherche Médicale (INSERM), Assistance publique - Hôpitaux de Paris (AP-HP), CHU Pitié-Salpêtrière

Funding:
ANR-19-P3IA-0001 - PRAIRIE - PaRis Artificial Intelligence Research InstitutE (2019)
ANR-10-IAHU-0006 - IHU-A-ICM - Institut de Neurosciences Translationnelles de Paris (2010)
Source:
Machine Learning for Brain Disorders, Olivier Colliot (Ed.), Springer, in press
Publication Year:
2023
Publisher:
HAL CCSD, 2023

Abstract

Deep learning methods have become very popular for the processing of natural images and have since been successfully adapted to the neuroimaging field. As these methods are not transparent, interpretability methods are needed to validate them and ensure their reliability. Indeed, it has been shown that deep learning models may achieve high performance even when relying on irrelevant features, by exploiting biases in the training set. Such undesirable situations can potentially be detected with interpretability methods. Many methods have recently been proposed to interpret neural networks, but the domain is not yet mature. Machine learning users face two major issues when seeking to interpret their models: which method to choose, and how to assess its reliability? Here, we aim to answer these questions by presenting the most common interpretability methods and the metrics developed to assess their reliability, as well as their applications and benchmarks in the neuroimaging context. Note that this is not an exhaustive survey: we focus on the studies that we found most representative and relevant.
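To make the idea concrete, below is a minimal sketch of one of the most common interpretability methods this chapter surveys: a gradient-based saliency map, which backpropagates a class score to the input voxels. The toy 3D CNN, the random input volume, and all names here are illustrative assumptions for the sketch, not taken from the chapter itself.

import torch
import torch.nn as nn

# Toy 3D CNN standing in for a neuroimaging classifier (hypothetical).
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # e.g. patient vs. control
)
model.eval()

# Random volume standing in for a preprocessed brain image.
image = torch.randn(1, 1, 32, 32, 32, requires_grad=True)

# Backpropagate the score of the predicted class to the input.
logits = model(image)
score = logits[0, logits[0].argmax()]
score.backward()

# The absolute input gradient is the saliency map: voxels whose
# perturbation would most change the class score.
saliency = image.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([32, 32, 32])

If a saliency map like this highlights, say, scanner artifacts or image borders rather than anatomically plausible regions, that is exactly the kind of dataset-bias exploitation the abstract warns about.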

Details

Language:
English
Database:
OpenAIRE
Journal:
Machine Learning for Brain Disorders, Olivier Colliot (Ed.), Springer, in press
Accession number:
edsair.doi.dedup.....b237c4b99308879ca58b45fc2c7b3e96