
An analysis of explainability methods for convolutional neural networks.

Authors:
Vonder Haar, Lynn
Elvira, Timothy
Ochoa, Omar
Source:
Engineering Applications of Artificial Intelligence, Vol. 117, Part A, Jan 2023.
Publication Year:
2023

Abstract

Deep learning models have gained a reputation for high accuracy in many domains. Convolutional Neural Networks (CNNs) are specialized for image recognition and achieve high accuracy in classifying objects within images. However, CNNs are an example of a black-box model: experts are unsure how they work internally to reach a classification decision. Without knowing the reasoning behind a decision, there is little assurance that a CNN will continue to decide accurately, so it is unsafe to use CNNs in high-risk or safety-critical fields without first developing methods to explain their decisions. This paper is a survey and analysis of the available explainability methods for showing the reasoning behind CNN decisions. [ABSTRACT FROM AUTHOR]
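
As a concrete illustration of the kind of method such a survey covers, below is a minimal sketch of a vanilla gradient saliency map, one of the simplest CNN explainability techniques: it attributes a prediction to input pixels by taking the gradient of the top class score with respect to the image. The tiny network, random input, and all names in the sketch are illustrative assumptions, not taken from the paper itself.

```python
# Illustrative sketch of a gradient saliency map for a CNN.
# The TinyCNN model and random input are stand-ins, not the paper's models.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

def saliency_map(model, image):
    """Return |d(top-class score)/d(pixel)|, taking the max over channels."""
    model.eval()
    image = image.clone().requires_grad_(True)  # track gradients w.r.t. pixels
    scores = model(image.unsqueeze(0))          # (1, num_classes)
    top_score = scores.max(dim=1).values        # score of the predicted class
    top_score.backward()                        # backprop to the input image
    return image.grad.abs().amax(dim=0)         # (H, W) pixel-importance map

model = TinyCNN()
image = torch.rand(3, 32, 32)                   # placeholder input image
sal = saliency_map(model, image)
print(sal.shape)                                # torch.Size([32, 32])
```

The resulting (H, W) map highlights which pixels most influence the predicted class score; the more elaborate methods analyzed in surveys like this one (e.g., CAM-style approaches) refine this basic gradient idea.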

Details

Language:
English
ISSN:
0952-1976
Volume:
117
Database:
Academic Search Index
Journal:
Engineering Applications of Artificial Intelligence
Publication Type:
Academic Journal
Accession Number:
160692567
Full Text:
https://doi.org/10.1016/j.engappai.2022.105606