
FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions

Authors :
Kim, Hyunwoo
Sclar, Melanie
Zhou, Xuhui
Le Bras, Ronan
Kim, Gunhee
Choi, Yejin
Sap, Maarten
Publication Year :
2023

Abstract

Theory of mind (ToM) evaluations currently focus on testing models using passive narratives that inherently lack interactivity. We introduce FANToM, a new benchmark designed to stress-test ToM within information-asymmetric conversational contexts via question answering. Our benchmark draws upon important theoretical requisites from psychology and necessary empirical considerations when evaluating large language models (LLMs). In particular, we formulate multiple types of questions that demand the same underlying reasoning to identify an illusory or false sense of ToM capabilities in LLMs. We show that FANToM is challenging for state-of-the-art LLMs, which perform significantly worse than humans even with chain-of-thought reasoning or fine-tuning.

Comment: EMNLP 2023. Code and dataset can be found here: https://hyunw.kim/fantom

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2310.15421
Document Type :
Working Paper