
Interpreting Deep Learning Models in Natural Language Processing: A Review

Authors:
Sun, Xiaofei
Yang, Diyi
Li, Xiaoya
Zhang, Tianwei
Meng, Yuxian
Qiu, Han
Wang, Guoyin
Hovy, Eduard
Li, Jiwei
Publication Year:
2021

Abstract

Neural network models have achieved state-of-the-art performance on a wide range of natural language processing (NLP) tasks. However, a long-standing criticism of neural network models is their lack of interpretability, which not only reduces the reliability of neural NLP systems but also limits the scope of their application in areas where interpretability is essential (e.g., health care). In response, growing interest in interpreting neural NLP models has spurred a diverse array of interpretation methods in recent years. In this survey, we provide a comprehensive review of interpretation methods for neural models in NLP. We first lay out a high-level taxonomy of interpretation methods in NLP: training-based approaches, test-based approaches, and hybrid approaches. Next, we describe the sub-categories of each category in detail, e.g., influence-function-based methods, KNN-based methods, attention-based methods, saliency-based methods, and perturbation-based methods. We point out deficiencies of current methods and suggest some avenues for future research.
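To make the abstract's terminology concrete, here is a minimal sketch of a perturbation-based interpretation method, one of the families the survey covers: each token's importance is measured as the drop in model score when that token is occluded. The model, vocabulary, and weights below are invented for illustration (a toy bag-of-words sentiment scorer standing in for a neural model); they are not from the survey.

```python
import numpy as np

# Hypothetical toy model: a bag-of-words linear sentiment scorer.
# The vocabulary and weights are invented for this illustration.
VOCAB = {"great": 0, "movie": 1, "terrible": 2, "plot": 3}
WEIGHTS = np.array([2.0, 0.1, -2.5, -0.2])  # per-token sentiment weights

def score(tokens):
    """Model score: sum of the weights of known tokens
    (a stand-in for a neural network's output logit)."""
    return sum(WEIGHTS[VOCAB[t]] for t in tokens if t in VOCAB)

def occlusion_saliency(tokens):
    """Perturbation-based interpretation: a token's importance is the
    change in the model score when that token is removed (occluded)."""
    base = score(tokens)
    return {
        t: base - score([u for j, u in enumerate(tokens) if j != i])
        for i, t in enumerate(tokens)
    }

saliency = occlusion_saliency(["great", "movie", "terrible", "plot"])
print(saliency)  # "great" and "terrible" dominate, with opposite signs
```

For this linear toy model each token's occlusion saliency equals its weight exactly; for a real neural model the same procedure (re-scoring with one token removed or masked) yields importance scores that reflect the model's nonlinear behavior.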

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2110.10470
Document Type:
Working Paper