Conceptual Explanations of Neural Network Prediction for Time Series
- Source :
- IJCNN
- Publication Year :
- 2020
- Publisher :
- IEEE, 2020.
-
Abstract
- Deep neural networks are black boxes by construction. Explanation and interpretation methods are therefore pivotal for trustworthy application. Existing methods are mostly based on heatmapping and focus on locally determining the relevant input parts that trigger the network prediction. However, these methods struggle to uncover global causes. While this is a rare case in the image or NLP modality, it is of high relevance in the time series domain. This paper presents a novel framework, Conceptual Explanation, designed to evaluate the effect of abstract (local or global) input features on the model behavior. The method is model-agnostic and allows the use of expert knowledge. On three time series datasets, Conceptual Explanation demonstrates its ability to pinpoint the causes inherent in the data that trigger the correct model prediction.
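- Note: the abstract gives no implementation details. As a rough, hypothetical illustration of what a model-agnostic concept-effect test on time series could look like, the sketch below suppresses a hand-defined "concept" (here, an assumed frequency band) in the input and measures how the classifier's output changes. All names (suppress_concept, concept_effect, predict_proba) and the choice of concept are assumptions for illustration, not the authors' method or API.

```python
# Hedged sketch: model-agnostic test of how an abstract "concept" in the input
# affects a time-series classifier. NOT the paper's code; the concept definition
# (a frequency band) and all function names are illustrative assumptions.
import numpy as np

def suppress_concept(x, low, high, fs):
    """Remove the frequency band [low, high] Hz (the 'concept') from a 1-D series."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum[(freqs >= low) & (freqs <= high)] = 0.0
    return np.fft.irfft(spectrum, n=len(x))

def concept_effect(predict_proba, X, low, high, fs):
    """Mean absolute change in predicted class probabilities when the concept is removed.

    predict_proba can be any callable mapping an (n, T) array to (n, classes)
    probabilities, which keeps the test model-agnostic.
    """
    X_ablated = np.stack([suppress_concept(x, low, high, fs) for x in X])
    return np.abs(predict_proba(X) - predict_proba(X_ablated)).mean(axis=0)

if __name__ == "__main__":
    # Usage with a dummy two-class model on synthetic data.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((16, 256))  # 16 series, 256 samples each
    dummy = lambda X: np.stack(
        [1.0 / (1.0 + np.exp(-X.mean(axis=1))),
         1.0 - 1.0 / (1.0 + np.exp(-X.mean(axis=1)))], axis=1)
    print(concept_effect(dummy, X, low=5.0, high=15.0, fs=100.0))
```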
- Subjects :
- Modality (human–computer interaction)
Interpretation (logic)
Artificial neural network
Computer science
Machine learning
Data modeling
Anomaly detection
Relevance (information retrieval)
Artificial intelligence
Time series
Focus (optics)
Details
- Database :
- OpenAIRE
- Journal :
- 2020 International Joint Conference on Neural Networks (IJCNN)
- Accession number :
- edsair.doi...........4acc418aa71e7551592136cd922fdc18
- Full Text :
- https://doi.org/10.1109/ijcnn48605.2020.9207341