Comparing ChatGPT With Experts' Responses to Scenarios that Assess Psychological Literacy.
- Author
Machin, M. Anthony, Machin, Tanya M., and Gasson, Natalie
- Subjects
CHATGPT, GENERATIVE artificial intelligence
- Abstract
Progress in understanding students' development of psychological literacy is critical. However, generative AI represents an emerging threat to higher education that may dramatically affect student learning and how this learning transfers to their practice. This research investigated whether ChatGPT responded in ways that demonstrated psychological literacy and whether it matched the responses of subject matter experts (SMEs) on a measure of psychological literacy. We tasked ChatGPT with providing responses to 13 psychology research methods scenarios and with rating each of the five response options that the research team had already developed for each scenario. ChatGPT responded in ways that would typically be regarded as displaying a high level of psychological literacy. The ratings previously provided by two groups of SMEs for the response options were then compared with the ratings provided by ChatGPT. The Pearson correlations were very high (rs = .73 and .80, respectively), as were the Spearman rank correlations (rhos = .81 and .82, respectively). The Kendall's tau values were also quite high (taus = .67 and .68, respectively). We conclude that ChatGPT may generate responses that match SME psychological literacy in research methods, which could also generalise across multiple domains of psychological literacy. [ABSTRACT FROM AUTHOR]
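For readers who want to see how this kind of rating comparison is computed, the sketch below uses Python's scipy.stats to obtain the three statistics named in the abstract. The rating vectors are hypothetical placeholders for illustration only, not data from the study.

```python
# Minimal sketch (assumed, not the authors' code): correlating one group of
# SME ratings of the response options with ChatGPT's ratings of the same
# options, using the three statistics reported in the abstract.
from scipy.stats import pearsonr, spearmanr, kendalltau

# Hypothetical 1-5 ratings of response options; placeholders, not study data.
sme_ratings = [5, 3, 1, 4, 2, 5, 2, 4, 1, 3]
chatgpt_ratings = [5, 4, 1, 4, 2, 4, 2, 5, 1, 3]

r, _ = pearsonr(sme_ratings, chatgpt_ratings)      # linear association
rho, _ = spearmanr(sme_ratings, chatgpt_ratings)   # rank-order association
tau, _ = kendalltau(sme_ratings, chatgpt_ratings)  # concordance of rank pairs

print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}, Kendall tau = {tau:.2f}")
```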
- Published
2024