1. GPT-4o reads the mind in the eyes
- Authors
Strachan, James W. A., Pansardi, Oriana, Scaliti, Eugenio, Celotto, Marco, Saxena, Krati, Yi, Chunzhi, Manzi, Fabio, Rufo, Alessandro, Manzi, Guido, Graziano, Michael S. A., Panzeri, Stefano, and Becchio, Cristina
- Subjects
Computer Science - Human-Computer Interaction, Computer Science - Computers and Society
- Abstract
Large Language Models (LLMs) are capable of reproducing human-like inferences, including inferences about emotions and mental states, from text. Whether this capability extends beyond text to other modalities remains unclear. Humans possess a sophisticated ability to read the mind in the eyes of other people. Here we tested whether this ability is also present in GPT-4o, a multimodal LLM. Using two versions of a widely used theory of mind test, the Reading the Mind in the Eyes Test and the Multiracial Reading the Mind in the Eyes Test, we found that GPT-4o outperformed humans in interpreting mental states from upright faces but underperformed humans when faces were inverted. While humans in our sample showed no difference between White and Non-white faces, GPT-4o's accuracy was higher for White than for Non-white faces. GPT-4o's errors were not random but revealed a highly consistent, yet incorrect, processing of mental-state information across trials, with an orientation-dependent error structure that qualitatively differed from that of humans for inverted faces but not for upright faces. These findings highlight how advanced mental state inference abilities and human-like face processing signatures, such as inversion effects, coexist in GPT-4o alongside substantial differences in information processing compared to humans.
- Published
2024