Explaining Recurrent Neural Network Predictions in Sentiment Analysis
- Source :
- WASSA@EMNLP
- Publication Year :
- 2017
Abstract
- Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown to deliver insightful explanations in the form of input space relevances for understanding feed-forward neural network classification decisions. In the present work, we extend the usage of LRP to recurrent neural networks. We propose a specific propagation rule applicable to multiplicative connections as they arise in recurrent network architectures such as LSTMs and GRUs. We apply our technique to a word-based bi-directional LSTM model on a five-class sentiment prediction task, and evaluate the resulting LRP relevances both qualitatively and quantitatively, obtaining better results than a gradient-based related method which was used in previous work.
- Comments :
- 9 pages, 4 figures; accepted for the EMNLP 2017 Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA)
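The two propagation rules the abstract refers to can be sketched in a few lines of NumPy. The snippet below is a minimal illustration under our own naming and toy dimensions, not the paper's reference implementation: the function names (`lrp_linear`, `lrp_product`), the epsilon value, the toy readout layer `w_out`, and all array shapes are assumptions, and the paper's bias-relevance term is omitted. It shows the standard epsilon-LRP rule for weighted sums together with the proposed rule for multiplicative connections, under which the signal neuron receives all of the incoming relevance and the gate neuron receives none.

```python
import numpy as np

def lrp_linear(x, w, z, r_out, eps=1e-3):
    """Epsilon-LRP for a weighted sum z_j = sum_i x_i * w_ij (+ bias).

    Redistributes the output relevances r_out onto the inputs x in
    proportion to the contributions x_i * w_ij; eps stabilizes the
    denominator. Bias-relevance redistribution is omitted in this sketch.
    """
    denom = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilized activations
    contrib = x[:, None] * w                       # (n_in, n_out) contributions
    return contrib @ (r_out / denom)               # relevance per input, (n_in,)

def lrp_product(r_out):
    """Rule for a multiplicative connection p = gate * signal:
    the gate receives zero relevance, the signal receives all of it."""
    return np.zeros_like(r_out), r_out             # (r_gate, r_signal)

# Toy LSTM cell-state update, element-wise: c = f * c_prev + i * g
rng = np.random.default_rng(0)
f, i = rng.uniform(0.0, 1.0, size=(2, 5))          # sigmoid gates in (0, 1)
g, c_prev = rng.uniform(-1.0, 1.0, size=(2, 5))    # tanh signal, previous cell
c = f * c_prev + i * g

# Relevance enters at a hypothetical linear readout y = c @ w_out,
# starting from the predicted class (epsilon rule):
w_out = rng.normal(size=(5, 3))                    # 5 cell dims -> 3 classes
y = c @ w_out
r_y = np.zeros(3)
r_y[y.argmax()] = y.max()
r_c = lrp_linear(c, w_out, y, r_y)                 # relevance arriving at c

# The two-term sum acts as a linear layer with unit weights, so its
# relevance splits in proportion to each summand's contribution:
denom = c + 1e-3 * np.where(c >= 0, 1.0, -1.0)
r_forget_term = (f * c_prev) / denom * r_c
r_input_term = (i * g) / denom * r_c

# Each product then forwards its whole share to the signal neuron:
_, r_c_prev = lrp_product(r_forget_term)           # c_prev is the signal
_, r_g = lrp_product(r_input_term)                 # g is the signal
```

In a full backward pass over the bi-directional LSTM, these two rules would be applied recursively through the gates and across time steps until relevance accumulates at the word embedding inputs, yielding the per-word relevances the abstract evaluates.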
- Subjects :
- FOS: Computer and information sciences
Artificial Intelligence (cs.AI)
Computation and Language (cs.CL)
Neural and Evolutionary Computing (cs.NE)
Machine Learning (stat.ML)
Computer science
Artificial intelligence
Machine learning
Sentiment analysis
Recurrent neural network
Network architecture
Relevance (information retrieval)
Details
- Language :
- English
- Database :
- OpenAIRE
- Journal :
- WASSA@EMNLP
- Accession number :
- edsair.doi.dedup.....68fc4c351c476027231f6d08d3bf5193