
In-context Learning and Induction Heads

Authors:
Olsson, Catherine
Elhage, Nelson
Nanda, Neel
Joseph, Nicholas
DasSarma, Nova
Henighan, Tom
Mann, Ben
Askell, Amanda
Bai, Yuntao
Chen, Anna
Conerly, Tom
Drain, Dawn
Ganguli, Deep
Hatfield-Dodds, Zac
Hernandez, Danny
Johnston, Scott
Jones, Andy
Kernion, Jackson
Lovitt, Liane
Ndousse, Kamal
Amodei, Dario
Brown, Tom
Clark, Jack
Kaplan, Jared
McCandlish, Sam
Olah, Chris
Publication Year:
2022

Abstract

"Induction heads" are attention heads that implement a simple algorithm to complete token sequences like [A][B] ... [A] -> [B]. In this work, we present preliminary and indirect evidence for a hypothesis that induction heads might constitute the mechanism for the majority of all "in-context learning" in large transformer models (i.e. decreasing loss at increasing token indices). We find that induction heads develop at precisely the same point as a sudden sharp increase in in-context learning ability, visible as a bump in the training loss. We present six complementary lines of evidence, arguing that induction heads may be the mechanistic source of general in-context learning in transformer models of any size. For small attention-only models, we present strong, causal evidence; for larger models with MLPs, we present correlational evidence.

Details

Language:
English
Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....a97baf73e706080b2ed925606ebb1b2b