
Explaining Recurrent Neural Network Predictions in Sentiment Analysis

Authors :
Leila Arras
Grégoire Montavon
Klaus-Robert Müller
Wojciech Samek
Source :
WASSA@EMNLP
Publication Year :
2017

Abstract

Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown to deliver insightful explanations in the form of input space relevances for understanding feed-forward neural network classification decisions. In the present work, we extend the usage of LRP to recurrent neural networks. We propose a specific propagation rule applicable to multiplicative connections as they arise in recurrent network architectures such as LSTMs and GRUs. We apply our technique to a word-based bi-directional LSTM model on a five-class sentiment prediction task, and evaluate the resulting LRP relevances both qualitatively and quantitatively, obtaining better results than a gradient-based related method which was used in previous work.

Comment: 9 pages, 4 figures, accepted for EMNLP'17 Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA)
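As a rough illustration of the propagation rule the abstract refers to, the sketch below shows how LRP relevance can be redistributed through the two kinds of connections found in an LSTM: for a weighted (linear) connection, relevance is split among inputs in proportion to their contributions, while for a multiplicative (gated) connection the gate is treated as a switch, so the signal neuron receives all of the incoming relevance and the gate receives none. This is a minimal NumPy sketch under those assumptions; the function names (lrp_linear, lrp_multiplicative) and the epsilon stabilizer value are illustrative, not the authors' released code.

import numpy as np

def lrp_linear(x, w, z_out, R_out, eps=0.001):
    # Epsilon-stabilized LRP rule for a linear connection z_out = w.T @ x:
    # each input x_i receives relevance in proportion to its contribution
    # x_i * w_ij to the output pre-activation z_j. The stabilizer eps keeps
    # the denominator away from zero (bias relevance is ignored here for
    # simplicity).
    denom = z_out + eps * np.where(z_out >= 0, 1.0, -1.0)  # shape (m,)
    return (x[:, None] * w) @ (R_out / denom)              # shape (d,)

def lrp_multiplicative(R_out):
    # Rule for a two-way multiplicative connection z = gate * signal,
    # as it arises in LSTM/GRU gating: the gate acts as a switch, so
    # the signal keeps all of the relevance and the gate gets zero.
    return R_out, np.zeros_like(R_out)

# Hypothetical usage at an LSTM cell update c_t = f * c_prev + i * g:
# first split R(c_t) between the two summands with the linear rule
# (weights of one on each product), then pass each summand's relevance
# entirely on to c_prev and g respectively, assigning zero relevance
# to the gates f and i.

In a full backward pass, this per-connection rule would be applied recursively from the output layer back through time to the word embeddings, yielding one relevance score per input word.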

Details

Language :
English
Database :
OpenAIRE
Journal :
WASSA@EMNLP
Accession number :
edsair.doi.dedup.....68fc4c351c476027231f6d08d3bf5193