
Separating Tongue from Thought: Activation Patching Reveals Language-Agnostic Concept Representations in Transformers

Authors:
Dumas, Clément
Wendler, Chris
Veselovsky, Veniamin
Monea, Giovanni
West, Robert

Publication Year: 2024

Abstract

A central question in multilingual language modeling is whether large language models (LLMs) develop a universal concept representation, disentangled from specific languages. In this paper, we address this question by analyzing latent representations (latents) during a word translation task in transformer-based LLMs. We strategically extract latents from a source translation prompt and insert them into the forward pass on a target translation prompt. By doing so, we find that the output language is encoded in the latent at an earlier layer than the concept to be translated. Building on this insight, we conduct two key experiments. First, we demonstrate that we can change the concept without changing the language and vice versa through activation patching alone. Second, we show that patching with the mean over latents across different languages does not impair and instead improves the models' performance in translating the concept. Our results provide evidence for the existence of language-agnostic concept representations within the investigated models.

Comment: 12 pages, 10 figures; a previous version was published under the title "How Do Llamas Process Multilingual Text? A Latent Exploration through Activation Patching" at the ICML 2024 mechanistic interpretability workshop, https://openreview.net/forum?id=0ku2hIm4BS
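The patching procedure described in the abstract can be illustrated with a short, hypothetical PyTorch sketch (not the authors' code): a forward hook records the residual-stream latent at one layer while the model reads a source translation prompt, and a second hook overwrites the corresponding latent during the forward pass on a target translation prompt. The model name, layer index, token position, and prompts below are assumptions for illustration only.

# Minimal activation-patching sketch, assuming a Llama-style decoder-only model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"   # assumed model; any decoder-only LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 15        # assumed layer at which the latent is extracted and patched
captured = {}

def capture_hook(module, inputs, output):
    # output[0] is the hidden state (batch, seq, dim); keep the last-token latent
    captured["latent"] = output[0][:, -1, :].detach().clone()

def patch_hook(module, inputs, output):
    hidden = output[0].clone()
    hidden[:, -1, :] = captured["latent"]   # overwrite the target latent with the source latent
    return (hidden,) + output[1:]

layer = model.model.layers[layer_idx]

# 1) Run the source translation prompt and record its latent.
src_prompt = 'Français: "fleur" - English: "'
handle = layer.register_forward_hook(capture_hook)
with torch.no_grad():
    model(**tok(src_prompt, return_tensors="pt"))
handle.remove()

# 2) Run the target translation prompt with the source latent patched in.
tgt_prompt = 'Deutsch: "Buch" - Italiano: "'
handle = layer.register_forward_hook(patch_hook)
with torch.no_grad():
    logits = model(**tok(tgt_prompt, return_tensors="pt")).logits
handle.remove()

# Inspect which word the patched run predicts next.
print(tok.decode(logits[0, -1].argmax().item()))

Depending on the layer chosen, such a patch would be expected to transplant either the concept (the word being translated) or the output language, which is the distinction the paper's experiments probe.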

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2411.08745
Document Type: Working Paper