
On Benchmarking Human-Like Intelligence in Machines

Authors:
Ying, Lance
Collins, Katherine M.
Wong, Lionel
Sucholutsky, Ilia
Liu, Ryan
Weller, Adrian
Shu, Tianmin
Griffiths, Thomas L.
Tenenbaum, Joshua B.
Publication Year:
2025

Abstract

Recent benchmark studies have claimed that AI has approached or even surpassed human-level performance on various cognitive tasks. However, this position paper argues that current AI evaluation paradigms are insufficient for assessing human-like cognitive capabilities. We identify a set of key shortcomings: a lack of human-validated labels, inadequate representation of human response variability and uncertainty, and reliance on simplified, ecologically invalid tasks. We support these claims with a human evaluation study of ten existing AI benchmarks, which reveals significant biases and flaws in task and label design. To address these limitations, we propose five concrete recommendations for developing future benchmarks that enable more rigorous and meaningful evaluation of human-like cognitive capacities in AI, with implications for a range of AI applications.

Comment: 18 pages, 5 figures

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2502.20502
Document Type:
Working Paper