Cracking the Code: Evaluating Zero-Shot Prompting Methods for Providing Programming Feedback
- Publication Year :
- 2024
Abstract
- Despite the growing use of large language models (LLMs) for providing feedback, little research has explored how to achieve high-quality feedback. This case study introduces an evaluation framework for assessing different zero-shot prompt engineering methods. We varied the prompts systematically and analyzed the feedback provided on programming errors in R. The results indicate that prompts prescribing a stepwise procedure increase precision, while omitting explicit specifications about which provided data to analyze improves error identification.
- Comment :
- 8 pages, 1 figure
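- The abstract does not reproduce the evaluated prompts. The sketch below only illustrates how such zero-shot prompt variations might be assembled; the `build_feedback_prompt` helper, all prompt wording, and the toy R error are assumptions for illustration, not the study's materials.

```python
# Illustrative sketch of two zero-shot prompt variations for R error feedback.
# All prompt wording and the build_feedback_prompt helper are hypothetical,
# not the prompts evaluated in the paper.

STEPWISE_INSTRUCTIONS = (
    "Proceed step by step: (1) read the code, (2) locate the error, "
    "(3) explain its cause, (4) suggest a fix."
)

def build_feedback_prompt(r_code: str, error_message: str,
                          stepwise: bool, specify_data: bool) -> str:
    """Assemble a zero-shot prompt asking for feedback on an R error."""
    parts = ["You are a tutor giving feedback on a student's R code."]
    if stepwise:
        # Variation associated with higher precision in the study.
        parts.append(STEPWISE_INSTRUCTIONS)
    if specify_data:
        # Explicitly restricting which data to analyze; omitting such a
        # restriction was associated with better error identification.
        parts.append("Base your feedback only on the error message below.")
    parts.append("Code:\n" + r_code)
    parts.append("Error message:\n" + error_message)
    return "\n\n".join(parts)

if __name__ == "__main__":
    code = 'total <- sum("1", "2")'  # toy R bug: sum() over character strings
    error = ("Error in sum(\"1\", \"2\") : "
             "invalid 'type' (character) of argument")
    print(build_feedback_prompt(code, error, stepwise=True, specify_data=False))
```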
- Subjects :
- Computer Science - Software Engineering
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2412.15702
- Document Type :
- Working Paper