1. Development and Pilot Testing of a Data-Rich Clinical Reasoning Training and Assessment Tool
- Author
Jason Waechter, Jon Allen, Chel Hee Lee, and Laura Zwaan
- Subjects
Students, Medical; Humans; Pilot Projects; General Medicine; Clinical Competence; Clinical Reasoning; Education; Education, Medical, Undergraduate
- Abstract
Problem: Clinical reasoning is a core competency for physicians and also a common source of errors, driving high rates of misdiagnosis and patient harm. Efforts to train and assess clinical reasoning skills have proven challenging because they are either labor- and resource-prohibitive or lack important data relevant to clinical reasoning. The authors report on the creation and use of online simulation cases to train and assess clinical reasoning skills among medical students.

Approach: Using an online library of simulation cases, the authors collected data relevant to creating the differential diagnosis, analyzing the history and physical exam, justifying the diagnosis, ordering tests, interpreting tests, and ranking the most probable diagnosis. These data were compared against an expert-created scorecard, and detailed quantitative and qualitative feedback was generated and provided to learners and instructors.

Outcomes: Following an initial pilot study to troubleshoot the software, the authors conducted a second pilot study in which 2 instructors developed and provided 6 cases to 75 second-year medical students. The students completed 376 cases (an average of 5.0 cases per student), generating more than 40,200 data points that the software analyzed to provide each learner with individualized formative feedback on clinical reasoning skills. The instructors reported that the workload was acceptable and sustainable.

Next Steps: The authors are actively expanding the library of clinical cases and providing more students and schools with formative feedback in clinical reasoning using their tool. They have also upgraded the software to identify and provide feedback on behaviors consistent with premature closure, anchoring, and confirmation bias. They are currently collecting and analyzing additional data with the same software to inform validation and psychometric outcomes for future publications.
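The scorecard-based comparison described in the Approach could, in principle, resemble the following minimal sketch: a learner's proposed differential diagnosis is matched against an expert-weighted scorecard to produce a score, itemized feedback, and a list of missed diagnoses. All function names, diagnoses, and point values here are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch (not the authors' software): score a learner's
# differential diagnosis against an expert-created scorecard.

def score_differential(learner_dx, expert_scorecard):
    """Compare a learner's differential against an expert scorecard.

    learner_dx: list of diagnosis strings the learner proposed.
    expert_scorecard: dict mapping lowercase diagnosis -> point value.
    Returns (points earned, per-item feedback, missed diagnoses).
    """
    earned = 0
    feedback = []
    for dx in learner_dx:
        points = expert_scorecard.get(dx.lower(), 0)  # 0 if not on scorecard
        earned += points
        feedback.append((dx, points))
    proposed = {dx.lower() for dx in learner_dx}
    missed = [d for d in expert_scorecard if d not in proposed]
    return earned, feedback, missed

# Illustrative case: expert weights for a chest-pain presentation.
scorecard = {"pulmonary embolism": 3, "pneumonia": 2, "pericarditis": 1}
earned, feedback, missed = score_differential(
    ["Pulmonary Embolism", "Pneumothorax"], scorecard)
# earned == 3; "pneumonia" and "pericarditis" are flagged as missed
```

A tool of this kind would repeat such comparisons across each stage of a case (history analysis, test ordering, test interpretation, final ranking), which is how a handful of cases can yield the tens of thousands of analyzable data points the authors report.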
- Published
2022