A comparative study of methods for a priori prediction of MCQ difficulty

Authors:
Bijan Parsia
Jared Leo
Gina Donato
Sophie Forge
Uli Sattler
Nicolas Matentzoglu
Ghader Kurdi
Will Dowling
Source:
Semantic Web. 12:449-465
Publication Year:
2021
Publisher:
IOS Press, 2021.

Abstract

Successful exams require a balance of easy, medium, and difficult questions. Question difficulty is generally either estimated by an expert or determined after an exam has been taken. The latter provides no utility for generating new questions, and the former is expensive in terms of both time and cost. Moreover, it is not known whether expert prediction is in fact a good proxy for question difficulty. In this paper, we analyse and compare two ontology-based measures for predicting the difficulty of multiple-choice questions, and we compare each measure, as well as expert predictions (by 15 experts), against the exam performance of 12 residents on a corpus of 231 medical case-based multiple-choice questions. We find that one ontology-based measure (relation strength indicativeness) performs comparably (accuracy = 47%) to expert prediction (average accuracy = 49%).
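The headline accuracy figures compare a method's predicted difficulty category for each question against the category observed from how the residents actually performed on it. The following is a minimal, hypothetical sketch of that kind of scoring, not code from the paper: the three-way thresholds on the proportion of correct answers, the data layout, and all function names are assumptions introduced purely for illustration.

    # Hypothetical sketch: scoring predicted difficulty categories against
    # difficulty observed from examinee performance. Thresholds are assumed.

    def observed_category(fraction_correct: float) -> str:
        """Bucket a question by the fraction of residents answering it correctly.
        The cut-offs below are illustrative assumptions, not taken from the paper."""
        if fraction_correct >= 0.8:
            return "easy"
        if fraction_correct >= 0.5:
            return "medium"
        return "difficult"

    def prediction_accuracy(predicted: list[str], fractions_correct: list[float]) -> float:
        """Fraction of questions whose predicted category matches the observed one."""
        observed = [observed_category(f) for f in fractions_correct]
        matches = sum(p == o for p, o in zip(predicted, observed))
        return matches / len(predicted)

    # Example: three questions with predicted categories and the proportion
    # of the 12 residents who answered each correctly.
    predicted = ["easy", "difficult", "medium"]
    fractions = [0.92, 0.25, 0.42]   # 11/12, 3/12, 5/12 correct
    print(prediction_accuracy(predicted, fractions))  # 0.666..., since 2 of 3 match

Under this (assumed) scheme, an expert panel or an ontology-based measure such as relation strength indicativeness would each produce a list of predicted categories, and the reported accuracies (47% and 49%) would be computed in this way over the full question corpus.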

Details

ISSN:
2210-4968 and 1570-0844
Volume:
12
Database:
OpenAIRE
Journal:
Semantic Web
Accession number:
edsair.doi...........b3fae5a79f1d1216756c4869592803e0
Full Text:
https://doi.org/10.3233/sw-200390