
Constraint satisfaction in large language models.

Authors :
Jacobs, Cassandra L.
MacDonald, Maryellen C.
Source :
Language, Cognition & Neuroscience, June 2024, pp. 1-18. 4 illustrations.
Publication Year :
2024

Abstract

Constraint satisfaction theories were prominent in the late 20th century and emphasized continuous, rich interaction between many sources of information in a linguistic signal unfolding over time. A major challenge was rigorously capturing these highly interactive comprehension processes and yielding explicit predictions, because the important constraints were numerous and changed in prominence from one context to the next. Connectionist models were conceptually well-suited to this, but researchers had insufficient computing power and lacked sufficiently large corpora to bring these models to bear. These limitations no longer hold, and large language models (LLMs) offer an opportunity to test constraint satisfaction ideas about human language comprehension. We consider how LLMs can be applied to study interactive processes with lexical ambiguity resolution as a test case. We argue that further study of LLMs can advance theories of constraint satisfaction, though gaps remain in our understanding of how people and LLMs combine linguistic information. [ABSTRACT FROM AUTHOR]
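This record does not describe the authors' procedure in any detail. Purely as a hypothetical sketch of how an LLM's contextual expectations for an ambiguous word might be probed (assuming the Hugging Face transformers library and GPT-2, neither of which is named in this record), one could compare the probability the model assigns to a sense-disambiguating continuation under contexts that bias different senses of the word:

    # Hypothetical illustration only, not the authors' method: probe how context
    # shifts an LLM's expectations for an ambiguous word ("bank"), in the spirit
    # of constraint-satisfaction accounts of lexical ambiguity resolution.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def continuation_logprob(context: str, continuation: str) -> float:
        """Sum of log-probabilities assigned to `continuation` given `context`.
        Assumes the tokenization of `context` is a prefix of the tokenization
        of `context + continuation` (true here because the continuation starts
        with a space)."""
        ctx_ids = tokenizer.encode(context, return_tensors="pt")
        full_ids = tokenizer.encode(context + continuation, return_tensors="pt")
        with torch.no_grad():
            logits = model(full_ids).logits
        log_probs = torch.log_softmax(logits, dim=-1)
        total = 0.0
        # Score only the continuation tokens; the prediction for the token at
        # position pos comes from the logits at position pos - 1.
        for pos in range(ctx_ids.shape[1], full_ids.shape[1]):
            token_id = full_ids[0, pos]
            total += log_probs[0, pos - 1, token_id].item()
        return total

    # Two contexts biasing different senses of the ambiguous word "bank".
    financial = "She deposited her paycheck at the bank"
    riverine = "The canoe drifted slowly toward the bank"
    for ctx in (financial, riverine):
        print(ctx, "->", continuation_logprob(ctx, " of the river"))

In such a sketch, a larger log-probability for the river-related continuation after the canoe context than after the paycheck context would indicate that preceding material is shifting the model's interpretation of the ambiguous word, which is the kind of graded, context-sensitive behaviour constraint satisfaction theories emphasize.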

Details

Language :
English
ISSN :
2327-3798
Database :
Academic Search Index
Journal :
Language, Cognition & Neuroscience
Publication Type :
Academic Journal
Accession number :
177922387
Full Text :
https://doi.org/10.1080/23273798.2024.2364339