
CSEPrompts: A Benchmark of Introductory Computer Science Prompts

Authors:
Raihan, Nishat
Goswami, Dhiman
Puspo, Sadiya Sayara Chowdhury
Newman, Christian
Ranasinghe, Tharindu
Zampieri, Marcos
Publication Year:
2024

Abstract

Recent advances in AI, machine learning, and NLP have led to the development of a new generation of Large Language Models (LLMs) that are trained on massive amounts of data and often have trillions of parameters. Commercial applications (e.g., ChatGPT) have made this technology available to the general public, making it possible to use LLMs to produce high-quality text for academic and professional purposes. Schools and universities are aware of the increasing use of AI-generated content by students, and they have been studying the impact of this new technology and its potential misuse. Educational programs in Computer Science (CS) and related fields are particularly affected because LLMs are also capable of generating programming code in various programming languages. To help understand the potential impact of publicly available LLMs on CS education, we introduce CSEPrompts, a framework with hundreds of programming exercise prompts and multiple-choice questions retrieved from introductory CS and programming courses. We also provide experimental results on CSEPrompts to evaluate the performance of several LLMs with respect to generating Python code and answering basic computer science and programming questions.
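To make the evaluation protocol described above concrete, here is a minimal, hypothetical sketch of how a model-generated Python solution could be scored against an exercise's test cases. This is not the authors' actual harness: the generate_code() stub stands in for a real LLM API call, and the add() exercise, its prompt, and its test cases are invented purely for illustration.

    # Hypothetical sketch of scoring one CSEPrompts-style coding exercise:
    # run the model-generated solution against the exercise's test cases
    # and report pass/fail. generate_code() is a stand-in for an LLM call.

    import textwrap

    def generate_code(prompt: str) -> str:
        """Hypothetical stand-in for an LLM API call; returns a hard-coded
        solution here so the sketch is self-contained and runnable."""
        return textwrap.dedent("""
            def add(a, b):
                return a + b
        """)

    def passes_tests(solution_src: str, tests: list) -> bool:
        """Execute the candidate solution and check it against I/O test cases."""
        namespace = {}
        try:
            exec(solution_src, namespace)   # run the generated code
            func = namespace["add"]         # entry point named by the exercise
            return all(func(*args) == expected for args, expected in tests)
        except Exception:
            return False                    # crashes or wrong names count as failures

    if __name__ == "__main__":
        prompt = "Write a function add(a, b) that returns the sum of two numbers."
        tests = [((1, 2), 3), ((-1, 1), 0)]
        solution = generate_code(prompt)
        print("pass" if passes_tests(solution, tests) else "fail")

Aggregating this pass/fail signal over many exercise prompts, and exact-match accuracy over the multiple-choice questions, yields per-model scores of the kind the paper reports.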

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.02540
Document Type:
Working Paper