
Flexible Model Interpretability through Natural Language Model Editing

Authors:
D'Oosterlinck, Karel
Demeester, Thomas
Develder, Chris
Potts, Christopher
Publication Year:
2023

Abstract

Model interpretability and model editing are crucial goals in the age of large language models. Interestingly, there exists a link between these two goals: if a method is able to systematically edit model behavior with regard to a human concept of interest, this editor method can help make internal representations more interpretable by pointing towards relevant representations and systematically manipulating them.

Comment: Extended Abstract -- work in progress. BlackboxNLP 2023
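The following toy sketch illustrates the reasoning in the abstract; it is not the paper's method. The names `concept_direction`, `edit`, and `model_score` are hypothetical: the idea is only that if pushing a hidden state along some direction systematically changes the model's behavior for a concept, that direction becomes an interpretable handle on the model's internal representation.

```python
# Illustrative sketch only (assumptions, not the paper's method): a toy model
# whose behavior is a linear readout of a hidden state, plus an "edit" that
# moves the hidden state along a hypothetical concept direction. Observing
# that the edit strength systematically shifts the behavior is what would
# point to the direction as an interpretable internal representation.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 16

# Toy "model": behavior is a scalar readout (e.g. a logit) of the hidden state.
readout = rng.normal(size=HIDDEN)

# Hypothetical concept direction; in practice an editor method would have to
# recover such a direction from edit supervision.
concept_direction = rng.normal(size=HIDDEN)
concept_direction /= np.linalg.norm(concept_direction)


def model_score(hidden_state: np.ndarray) -> float:
    """Scalar behavior of the toy model for a given hidden state."""
    return float(readout @ hidden_state)


def edit(hidden_state: np.ndarray, strength: float) -> np.ndarray:
    """Edit the hidden state by moving it along the concept direction."""
    return hidden_state + strength * concept_direction


hidden = rng.normal(size=HIDDEN)
for strength in (-2.0, 0.0, 2.0):
    score = model_score(edit(hidden, strength))
    print(f"edit strength {strength:+.1f} -> behavior score {score:+.3f}")
```

Running the sketch shows the behavior score moving monotonically with edit strength, which is the systematic manipulation the abstract refers to.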

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2311.10905
Document Type:
Working Paper