
Variational Language Concepts for Interpreting Foundation Language Models

Authors:
Wang, Hengyi
Tan, Shiwei
Hong, Zhiqing
Zhang, Desheng
Wang, Hao
Publication Year:
2024

Abstract

Foundation Language Models (FLMs) such as BERT and its variants have achieved remarkable success in natural language processing. To date, the interpretability of FLMs has relied primarily on the attention weights in their self-attention layers. However, these attention weights provide only word-level interpretations and fail to capture higher-level structures, making them difficult to read and unintuitive. To address this challenge, we first provide a formal definition of conceptual interpretation and then propose a variational Bayesian framework, dubbed VAriational Language Concept (VALC), that goes beyond word-level interpretations to provide concept-level interpretations. Our theoretical analysis shows that VALC finds the optimal language concepts to interpret FLM predictions. Empirical results on several real-world datasets show that our method can successfully provide conceptual interpretations for FLMs.

Comment: Accepted at EMNLP 2024 findings

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2410.03964
Document Type:
Working Paper