A comparative study of methods for a priori prediction of MCQ difficulty
- Author
- Bijan Parsia, Jared Leo, Gina Donato, Sophie Forge, Uli Sattler, Nicolas Matentzoglu, Ghader Kurdi, and Will Dowling
- Subjects
- Medical informatics, Computer Networks and Communications, Computer science, Education, Machine learning, Computer Science Applications, A priori and a posteriori, Artificial intelligence, Information Systems
- Abstract
Successful exams require a balance of easy, medium, and difficult questions. Question difficulty is generally either estimated by an expert or determined after an exam is taken. The latter provides no utility for the generation of new questions, and the former is expensive in terms of both time and cost. Additionally, it is not known whether expert prediction is indeed a good proxy for question difficulty. In this paper, we analyse and compare two ontology-based measures for predicting the difficulty of multiple choice questions, evaluating each measure and expert prediction (by 15 experts) against the exam performance of 12 residents on a corpus of 231 medical case-based multiple choice questions. We find one ontology-based measure (relation strength indicativeness) to be of comparable performance (accuracy = 47%) to expert prediction (average accuracy = 49%).
- Published
- 2021