1. Large Language Models in Computer Science Education: A Systematic Literature Review
- Author
Nishat Raihan, Mohammed Latif Siddiq, Joanna C. S. Santos, and Marcos Zampieri
- Subjects
Computer Science - Machine Learning, Computer Science - Human-Computer Interaction
- Abstract
Large language models (LLMs) are becoming increasingly capable at a wide range of Natural Language Processing (NLP) tasks, such as text generation and understanding. Recently, these models have extended their capabilities to coding tasks, bridging the gap between natural languages (NL) and programming languages (PL). Foundational models such as the Generative Pre-trained Transformer (GPT) and LLaMA series have set strong baseline performance on various NL and PL tasks. Additionally, several models have been fine-tuned specifically for code generation, showing significant improvements in code-related applications. Both foundational and fine-tuned models are increasingly used in education, helping students write, debug, and understand code. We present a comprehensive systematic literature review examining the impact of LLMs in computer science and computer engineering education. We analyze their effectiveness in enhancing the learning experience, supporting personalized education, and aiding educators in curriculum development. We address five research questions to uncover insights into how LLMs contribute to educational outcomes, identify challenges, and suggest directions for future research.
- Comment
Accepted at the 56th ACM Technical Symposium on Computer Science Education (SIGCSE TS 2025)
- Published
2024