Using ChatGPT to Score Essays and Short-Form Constructed Responses
- Publication Year :
- 2024
Abstract
- This study investigated whether ChatGPT's large language models could match the scoring accuracy of the human and machine scores from the ASAP competition. The investigation focused on several prediction models, including linear regression, random forest, gradient boost, and boost. ChatGPT's performance was evaluated against human raters using the quadratic weighted kappa (QWK) metric. Results indicated that while ChatGPT's gradient boost model achieved QWKs close to those of human raters on some data sets, its overall performance was inconsistent and often below that of human raters. The study highlighted the need for further refinement, particularly in handling biases and ensuring scoring fairness. Despite these challenges, ChatGPT demonstrated potential for scoring efficiency, especially with domain-specific fine-tuning. The study concludes that ChatGPT can complement human scoring but requires further development to be reliable for high-stakes assessments. Future research should improve model accuracy, address ethical considerations, and explore hybrid models that combine ChatGPT with empirical scoring methods.
- Comment: 35 pages, 8 tables, 2 figures, 27 references
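- For context, quadratic weighted kappa (QWK), the agreement metric cited above, penalizes disagreements between two raters by the square of their distance on the score scale, so large scoring gaps cost more than near-misses. Below is a minimal sketch of how such a comparison could be computed with scikit-learn; the score vectors are hypothetical illustrations, not data from the study.

```python
# Minimal sketch: comparing two raters' integer essay scores with
# quadratic weighted kappa (QWK). All scores here are illustrative only,
# not taken from the ASAP data sets used in the paper.
from sklearn.metrics import cohen_kappa_score

human_scores = [2, 3, 4, 1, 3, 2, 4, 0]   # hypothetical human-rater scores
model_scores = [2, 3, 3, 1, 4, 2, 4, 1]   # hypothetical model-assigned scores

# weights="quadratic" makes a disagreement between scores i and j
# cost proportionally to (i - j)^2, matching the QWK definition.
qwk = cohen_kappa_score(human_scores, model_scores, weights="quadratic")
print(f"QWK: {qwk:.3f}")  # 1.0 = perfect agreement, 0 = chance-level agreement
```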
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2408.09540
- Document Type :
- Working Paper