Development and validation of clinical performance assessment in simulated medical emergencies: an observational study
- Author
- Ronaldo Sevilla Berrios, John C. O'Horo, Xiaomei Chen, Aysen Erdogan, Lisbeth Garcia Arguello, Oguz Kilickaya, Brian W. Pickering, Christopher N. Schmickl, Rahul Kashyap, Yue Dong, and Ognjen Gajic
- Subjects
- Critical Care, Cohen's kappa, Humans, Prospective Studies, Program Development, Reliability, Face validity, Construct validity, Rubric, Checklist, Emergency medicine, Observational study, Clinical Competence, Research Article
- Abstract
Background: Critical illness is a time-sensitive process that requires practitioners to process vast quantities of data and make decisions rapidly. We have developed a tool, the Checklist for Early Recognition and Treatment of Acute Illness (CERTAIN), aimed at enhancing care delivery in such situations. To determine the efficacy of CERTAIN and similar cognitive aids, we developed a rubric for evaluating provider performance in a simulated medical resuscitation environment.

Methods: We recruited 18 clinicians with current valid ACLS certification for evaluation in three simulated medical scenarios designed to mimic typical medical decompensation events routinely experienced in clinical care. Subjects were stratified as experienced or novice based on prior critical care training. A checklist of critical actions was designed using face validity for each scenario to evaluate task completion and performance. Simulation sessions were video recorded and scored by two independent raters. Construct validity was assessed under the assumption that experienced clinicians should perform better than novice clinicians on each task. Reliability was assessed as percentage agreement, kappa statistics, and Bland-Altman plots, as appropriate.

Results: Eleven experts and seven novices completed evaluation. The overall agreement on common checklist item completion was 84.8%. The overall model achieved face validity and was consistent with our construct, with experienced clinicians trending towards better performance compared to novices for accuracy and speed of task completion.

Conclusions: A standardized video assessment tool has potential to provide a valid and reliable method to assess the performance of clinicians facing simulated medical emergencies.

Electronic supplementary material: The online version of this article (doi:10.1186/s12873-015-0066-x) contains supplementary material, which is available to authorized users.
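The reliability statistics mentioned in the abstract (percentage agreement and Cohen's kappa between two independent raters) can be illustrated with a minimal sketch. The rater data below are hypothetical, not the study's actual scores, and the functions are a plain stdlib implementation of the standard formulas:

```python
# Sketch: inter-rater reliability for binary checklist scoring.
# Computes raw percentage agreement and Cohen's kappa for two
# raters judging the same checklist items; the data are made up.

def percent_agreement(a, b):
    """Fraction of items on which the two raters gave the same score."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = len(a)
    po = percent_agreement(a, b)  # observed agreement
    # Expected chance agreement from each rater's marginal rates
    categories = set(a) | set(b)
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (po - pe) / (1 - pe)

# Hypothetical scores: 1 = checklist item completed, 0 = not completed
rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater2 = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]

print(f"agreement = {percent_agreement(rater1, rater2):.1%}")  # 80.0%
print(f"kappa     = {cohens_kappa(rater1, rater2):.3f}")       # 0.524
```

Kappa is lower than raw agreement here because both raters mark most items complete, so some agreement is expected by chance alone; this is why the study reports kappa alongside the 84.8% raw agreement.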
- Published
- 2016