
Performance evaluation of predictive AI models to support medical decisions: Overview and guidance

Authors :
Van Calster, Ben
Collins, Gary S.
Vickers, Andrew J.
Wynants, Laure
Kerr, Kathleen F.
Barreñada, Lasai
Varoquaux, Gael
Singh, Karandeep
Moons, Karel G. M.
Hernandez-Boussard, Tina
Timmerman, Dirk
McLernon, David J.
Van Smeden, Maarten
Steyerberg, Ewout W.
Publication Year :
2024

Abstract

A myriad of measures to illustrate the performance of predictive artificial intelligence (AI) models have been proposed in the literature. Selecting appropriate performance measures is essential for predictive AI models that are developed for use in medical practice, because poorly performing models may harm patients and lead to increased costs. We aim to assess the merits of classic and contemporary performance measures when validating predictive AI models for use in medical practice. We focus on models with a binary outcome. We discuss 32 performance measures covering five performance domains (discrimination, calibration, overall, classification, and clinical utility) along with accompanying graphical assessments. The first four domains cover statistical performance; the fifth covers decision-analytic performance. We explain why two key characteristics are important when selecting which performance measures to assess: (1) whether the measure's expected value is optimized when it is calculated using the correct probabilities (i.e., a "proper" measure), and (2) whether the measure reflects purely statistical performance or decision-analytic performance by properly considering misclassification costs. Seventeen measures exhibit both characteristics, fourteen exhibit one characteristic, and one measure (the F1 measure) exhibits neither. All classification measures (such as classification accuracy and F1) are improper for clinically relevant decision thresholds other than 0.5 or the prevalence. We recommend the following measures and plots as essential to report: the AUROC, a calibration plot, a clinical utility measure such as net benefit with decision curve analysis, and a plot with probability distributions per outcome category.

Comment: 60 pages, 8 tables, 11 figures, two supplementary appendices
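
The recommended essentials (AUROC, calibration plot, and net benefit with decision curve analysis) can be computed with standard tools. The sketch below is purely illustrative and not taken from the paper: it evaluates a hypothetical set of predicted risks on synthetic data using NumPy and scikit-learn, and implements the standard net benefit formula NB(t) = TP/n - FP/n * t/(1 - t). The variable names and simulated data are assumptions for illustration only.

    # Illustrative sketch (not from the paper): AUROC, a calibration summary,
    # and net benefit across clinically relevant thresholds, on synthetic data.
    import numpy as np
    from sklearn.calibration import calibration_curve
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 1000
    y = rng.binomial(1, 0.2, size=n)                      # synthetic binary outcome
    p = np.clip(rng.beta(2, 8, size=n) + 0.25 * y, 0, 1)  # synthetic predicted risks

    # Discrimination: area under the ROC curve.
    auroc = roc_auc_score(y, p)

    # Calibration: observed event rate vs. mean predicted risk per risk decile
    # (the data behind a calibration plot).
    obs, pred = calibration_curve(y, p, n_bins=10, strategy="quantile")

    # Clinical utility: net benefit NB(t) = TP/n - FP/n * t/(1 - t),
    # classifying everyone with predicted risk >= t as "treat".
    def net_benefit(y_true, risk, t):
        treat = risk >= t
        tp = np.sum(treat & (y_true == 1))
        fp = np.sum(treat & (y_true == 0))
        return tp / len(y_true) - fp / len(y_true) * t / (1 - t)

    thresholds = np.arange(0.05, 0.55, 0.05)
    nb_model = [net_benefit(y, p, t) for t in thresholds]         # model
    nb_all = [net_benefit(y, np.ones(n), t) for t in thresholds]  # treat-all reference

    print(f"AUROC: {auroc:.3f}")
    for o, pr in zip(obs, pred):
        print(f"mean predicted risk {pr:.2f} -> observed event rate {o:.2f}")
    for t, nb_m, nb_a in zip(thresholds, nb_model, nb_all):
        print(f"t={t:.2f}  NB(model)={nb_m:+.3f}  NB(treat all)={nb_a:+.3f}")

Plotting nb_model and nb_all against the thresholds gives a basic decision curve; plotting obs against pred gives a simple binned calibration plot.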

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2412.10288
Document Type :
Working Paper