
The model student: GPT-4 performance on graduate biomedical science exams.

Authors:
Stribling, Daniel
Xia, Yuxing
Amer, Maha K.
Graim, Kiley S.
Mulligan, Connie J.
Renne, Rolf
Source:
Scientific Reports. 3/8/2024, Vol. 14, Issue 1, pp. 1-11.
Publication Year:
2024

Abstract

The GPT-4 large language model (LLM) and ChatGPT chatbot have emerged as accessible and capable tools for generating English-language text in a variety of formats. GPT-4 has previously performed well when applied to questions from multiple standardized examinations. However, further evaluation of the trustworthiness and accuracy of GPT-4 responses across various knowledge domains is essential before its use as a reference resource. Here, we assess GPT-4 performance on nine graduate-level examinations in the biomedical sciences (seven blinded), finding that GPT-4 scores exceed the student average in seven of nine cases and exceed all student scores on four exams. GPT-4 performed very well on fill-in-the-blank, short-answer, and essay questions, and correctly answered several questions on figures sourced from published manuscripts. Conversely, GPT-4 performed poorly on questions with figures containing simulated data and on those requiring a hand-drawn answer. Two GPT-4 answer sets were flagged as plagiarism based on answer similarity, and some model responses included detailed hallucinations. In addition to assessing GPT-4 performance, we discuss patterns and limitations in GPT-4 capabilities with the goal of informing the design of future academic examinations in the chatbot era.
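The study's core procedure is submitting exam questions to GPT-4 and scoring its answers alongside student responses. As a rough illustration only, and not the authors' protocol, the sketch below shows how a single question might be submitted to GPT-4 programmatically using the OpenAI Python SDK; the model name, the system-prompt wording, and the ask_exam_question helper are assumptions made for this example.

    # Hypothetical sketch: sending one exam question to GPT-4 via the
    # OpenAI Python SDK (openai >= 1.0). Not the authors' protocol; the
    # prompt text, model name, and helper function are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_exam_question(question: str) -> str:
        """Send a single exam question to GPT-4 and return the answer text."""
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "You are a graduate student taking a biomedical "
                            "science exam. Answer concisely and completely."},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(ask_exam_question(
            "Briefly explain the difference between a missense "
            "and a nonsense mutation."
        ))

In practice, a harness like this would iterate over all exam questions and record each response for blinded grading; image-based questions would require a multimodal model variant, which is consistent with the figure-related limitations the abstract reports.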

Details

Language:
English
ISSN:
2045-2322
Volume:
14
Issue:
1
Database:
Academic Search Index
Journal:
Scientific Reports
Publication Type:
Academic Journal
Accession Number:
175932926
Full Text:
https://doi.org/10.1038/s41598-024-55568-7