Structured, flexible, and robust: benchmarking and improving large language models towards more human-like behavior in out-of-distribution reasoning tasks
- Authors
Collins, Katherine M., Wong, Catherine, Feng, Jiahai, Wei, Megan, and Tenenbaum, Joshua B.
- Subjects
Computer Science - Symbolic Computation, FOS: Computer and information sciences, Computer Science - Machine Learning, Symbolic computational modeling, Computer Science - Computation and Language, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Semantics of language, Reasoning, Symbolic Computation (cs.SC), Computation and Language (cs.CL), Natural Language Processing, Machine Learning (cs.LG)
- Abstract
Human language offers a powerful window into our thoughts -- we tell stories, give explanations, and express our beliefs and goals through words. Abundant evidence also suggests that language plays a developmental role in structuring our learning. Here, we ask: how much of human-like thinking can be captured by learning statistical patterns in language alone? We first contribute a new challenge benchmark for comparing humans and distributional large language models (LLMs). Our benchmark contains two problem-solving domains (planning and explanation generation) and is designed to require generalization to new, out-of-distribution problems expressed in language. We find that humans are far more robust than LLMs on this benchmark. Next, we propose a hybrid Parse-and-Solve model, which augments distributional LLMs with a structured symbolic reasoning module. We find that this model shows more robust adaptation to out-of-distribution planning problems, demonstrating the promise of hybrid AI models for more human-like reasoning.
- Comment
Originally accepted to the 2022 Cognitive Science (CogSci) conference
- Published
2022