1. Examination of Code generated by Large Language Models
- Authors
Beer, Robin; Feix, Alexander; Guttzeit, Tim; Muras, Tamara; Müller, Vincent; Rauscher, Maurice; Schäffler, Florian; and Löwe, Welf
- Subjects
Computer Science - Software Engineering; Computer Science - Artificial Intelligence; I.2.2
- Abstract
Large language models (LLMs), such as ChatGPT and Copilot, are transforming software development by automating code generation and, arguably, enabling rapid prototyping, supporting education, and boosting productivity. The correctness and quality of the generated code should therefore be on par with that of manually written code. To assess the current state of LLMs in generating correct, high-quality code, we conducted controlled experiments with ChatGPT and Copilot: we let the LLMs generate simple algorithms in Java and Python along with the corresponding unit tests, and we assessed the correctness and the quality (coverage) of the generated algorithm and test code. We observed significant differences between the LLMs, between the languages, between algorithm and test code, and over time. This paper reports these results together with the experimental methods, which allow repeated and comparable assessments of more algorithms, languages, and LLMs over time.
- Published
2024
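
To make the described setup concrete, here is a minimal sketch of the kind of artifact pair the abstract refers to: a simple LLM-generated algorithm plus its generated unit tests, with correctness checked via Python's unittest and coverage measurable with coverage.py. This is an illustrative assumption, not the authors' actual experimental harness; the algorithm choice, test cases, and tooling are hypothetical.

```python
import unittest


def binary_search(items, target):
    """Example of a 'simple algorithm' an LLM might be asked to generate:
    return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1


class TestBinarySearch(unittest.TestCase):
    """Example of a generated test suite whose coverage of the
    algorithm under test serves as the quality proxy."""

    def test_found(self):
        self.assertEqual(binary_search([1, 3, 5, 7], 5), 2)

    def test_missing(self):
        self.assertEqual(binary_search([1, 3, 5, 7], 4), -1)

    def test_empty(self):
        self.assertEqual(binary_search([], 1), -1)


if __name__ == "__main__":
    # Correctness: all tests pass. Quality: run under coverage.py, e.g.
    #   coverage run this_file.py && coverage report -m
    # to obtain the coverage figures used as the quality measure.
    unittest.main()
```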