
Unveiling the Landscape: Studies on Automated Short Answer Evaluation

Authors:
Abdulkadir Kara
Eda Saka Simsek
Serkan Yildirim
Source:
Asian Journal of Distance Education, 2024, 19(1), 178-199.
Publication Year:
2024

Abstract

Evaluation is an essential component of the learning process for understanding learners' situations. Assessing natural language responses, such as short answers, takes time and effort. Advances in artificial intelligence and natural language processing have led to a growing number of studies on automatically grading short answers. In this review, we systematically analyze short-answer evaluation studies and trace the development of the field in terms of scientific production features, datasets, and automatic evaluation features. The field grew out of pioneering studies in the US, and researchers generally work with English datasets. Research has increased significantly in recent years with large language models that support many different languages, and applications built on these models achieve accuracy close to that of human evaluators. In addition, deep learning models do not require the detailed preprocessing and feature engineering that traditional approaches demand. Dataset sizes trend toward 1,000 or more responses. Metrics such as accuracy, precision, and F1 score are commonly used to determine performance. The majority of studies focus on scoring or rating; consequently, the literature on evaluation systems that provide descriptive and formative feedback remains sparse. Moreover, the assessment systems that have been developed need to be actively used in learning environments.

Details

Language:
English
ISSN:
1347-9008
Volume:
19
Issue:
1
Database:
ERIC
Journal:
Asian Journal of Distance Education
Publication Type:
Academic Journal
Accession Number:
EJ1424258
Document Type:
Journal Articles; Reports - Research