1. Synthetic Students: A Comparative Study of Bug Distribution Between Large Language Models and Computing Students
- Authors
MacNeil, Stephen; Rogalska, Magdalena; Leinonen, Juho; Denny, Paul; Hellas, Arto; and Crosland, Xandria
- Subjects
Computer Science - Computers and Society; Computer Science - Artificial Intelligence
- Abstract
Large language models (LLMs) present an exciting opportunity for generating synthetic classroom data. Such data could include code containing a typical distribution of errors, simulated student behaviour to address the cold start problem when developing education tools, and synthetic user data when access to authentic data is restricted for privacy reasons. In this research paper, we conduct a comparative study examining the distribution of bugs generated by LLMs in contrast to those produced by computing students. Leveraging data from two previous large-scale analyses of student-generated bugs, we investigate whether LLMs can be coaxed to exhibit bug patterns similar to authentic student bugs when prompted to inject errors into code. The results suggest that, left unguided, LLMs do not generate plausible error distributions, and many of the errors they generate are unlikely to be produced by real students. However, with guidance that includes descriptions of common errors and their typical frequencies, LLMs can be shepherded to generate realistic distributions of errors in synthetic code.
- Published
2024