
Large language models surpass human experts in predicting neuroscience results

Authors :
Luo, Xiaoliang
Rechardt, Akilles
Sun, Guangzhi
Nejad, Kevin K.
Yáñez, Felipe
Yilmaz, Bati
Lee, Kangjoo
Cohen, Alexandra O.
Borghesani, Valentina
Pashkov, Anton
Marinazzo, Daniele
Nicholas, Jonathan
Salatiello, Alessandro
Sucholutsky, Ilia
Minervini, Pasquale
Razavi, Sepehr
Rocca, Roberta
Yusifov, Elkhan
Okalova, Tereza
Gu, Nianlong
Ferianc, Martin
Khona, Mikail
Patil, Kaustubh R.
Lee, Pui-Shee
Mata, Rui
Myers, Nicholas E.
Bizley, Jennifer K.
Musslick, Sebastian
Bilgin, Isil Poyraz
Niso, Guiomar
Ales, Justin M.
Gaebler, Michael
Murty, N Apurva Ratan
Loued-Khenissi, Leyla
Behler, Anna
Hall, Chloe M.
Dafflon, Jessica
Bao, Sherry Dongqi
Love, Bradley C.
Publication Year :
2024

Abstract

Scientific discoveries often hinge on synthesizing decades of research, a task that potentially outstrips human information processing capacities. Large language models (LLMs) offer a solution. LLMs trained on the vast scientific literature could potentially integrate noisy yet interrelated findings to forecast novel results better than human experts. To evaluate this possibility, we created BrainBench, a forward-looking benchmark for predicting neuroscience results. We find that LLMs surpass experts in predicting experimental outcomes. BrainGPT, an LLM we tuned on the neuroscience literature, performed better yet. Like human experts, when LLMs were confident in their predictions, they were more likely to be correct, which presages a future where humans and LLMs team together to make discoveries. Our approach is not neuroscience-specific and is transferable to other knowledge-intensive endeavors.
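Illustrative sketch: the abstract does not spell out how the benchmark is scored, but one plausible way to evaluate an LLM on a two-alternative prediction task of this kind is to compare the model's perplexity on a passage containing the actual result versus an altered version, taking the lower-perplexity passage as the model's choice and the perplexity gap as a rough confidence signal. The model name and helper functions below are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of perplexity-based two-alternative scoring with a causal LM.
# "gpt2" is a placeholder model; any causal language model could stand in here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def perplexity(text: str) -> float:
    """Exponentiated mean token-level cross-entropy of the text under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return torch.exp(loss).item()

def choose(original: str, altered: str) -> tuple[str, float]:
    """Pick the passage the model finds more likely; the perplexity gap serves
    as a crude confidence estimate (larger gap = more confident choice)."""
    p_orig, p_alt = perplexity(original), perplexity(altered)
    pick = "original" if p_orig < p_alt else "altered"
    return pick, abs(p_orig - p_alt)

# Example usage with toy passages (not benchmark items):
# pick, confidence = choose("Stimulation increased firing rates.",
#                           "Stimulation decreased firing rates.")
```

Binning such confidence scores and checking accuracy within each bin is one way the calibration claim in the abstract (higher confidence tracking higher accuracy) could be examined; again, this is an illustration rather than the paper's reported procedure.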

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2403.03230
Document Type :
Working Paper