1. Using explainable machine learning to characterise data drift and detect emergent health risks for emergency department admissions during COVID-19
- Authors
Neil M. White, F. P. Chmiel, Michael Boniface, Daniel Burns, Christopher Duckworth, T. Daniels, Michael Kiuber, and Zlatko Zlatev
- Subjects
Concept drift, Machine learning, Artificial intelligence, Computer science, Medicine, Health care, Emergency department, Hospitalization, COVID-19, Pandemics, Patient safety, Humans, Multidisciplinary
- Abstract
A key task of emergency departments is to promptly identify patients who require hospital admission. Early identification ensures patient safety and aids organisational planning. Supervised machine learning algorithms can use data describing historical episodes to make ahead-of-time predictions of clinical outcomes. Despite this, clinical settings are dynamic environments: the underlying data distributions characterising episodes can change with time (data drift), and so can the relationship between episode characteristics and associated clinical outcomes (concept drift). Practically, this means deployed algorithms must be monitored to ensure their safety. We demonstrate how explainable machine learning can be used to monitor data drift, using the COVID-19 pandemic as a severe example. We present a machine learning classifier trained on pre-COVID-19 data to identify patients at high risk of admission during an emergency department attendance. We then evaluate our model's performance on attendances occurring pre-pandemic (AUROC of 0.856 with 95% CI [0.852, 0.859]) and during the COVID-19 pandemic (AUROC of 0.826 with 95% CI [0.814, 0.837]). We demonstrate two benefits of explainable machine learning (SHAP) for models deployed in healthcare settings: (1) by tracking the variation in a feature's SHAP value relative to its global importance, a complementary measure of data drift is obtained which highlights the need to retrain a predictive model; (2) by observing relative changes in feature importance, emergent health risks can be identified.
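- Note
A minimal sketch of the SHAP-based drift monitoring idea described in the abstract, not the authors' code: a classifier is trained on a reference (pre-pandemic) window and per-feature global importance (mean |SHAP|) is compared against a later window, so that large relative changes flag drifting features. The feature names, synthetic data generator, and drift summary are illustrative placeholders.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
features = ["age", "triage_acuity", "arrival_by_ambulance", "num_prior_admissions"]

def synthetic_episodes(n, case_mix_shift=0.0):
    # Toy stand-in for ED attendance records; `case_mix_shift` mimics a
    # change in presenting case mix such as the one COVID-19 caused.
    X = pd.DataFrame({
        "age": rng.normal(55 + 5 * case_mix_shift, 20, n),
        "triage_acuity": rng.integers(1, 6, n),
        "arrival_by_ambulance": rng.binomial(1, 0.3 + 0.2 * case_mix_shift, n),
        "num_prior_admissions": rng.poisson(1.0, n),
    })
    logits = (0.03 * (X["age"] - 55)
              - 0.5 * (X["triage_acuity"] - 3)
              + X["arrival_by_ambulance"])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))
    return X, y

# Train on the "pre-pandemic" window, then score a later, drifted window.
X_ref, y_ref = synthetic_episodes(5000)
X_new, y_new = synthetic_episodes(2000, case_mix_shift=1.0)
model = GradientBoostingClassifier().fit(X_ref, y_ref)
print("AUROC (reference):", roc_auc_score(y_ref, model.predict_proba(X_ref)[:, 1]))
print("AUROC (new window):", roc_auc_score(y_new, model.predict_proba(X_new)[:, 1]))

# Global importance of a feature over a window = mean |SHAP| across episodes;
# large relative changes between windows flag data drift for that feature.
explainer = shap.TreeExplainer(model)
imp_ref = np.abs(explainer.shap_values(X_ref)).mean(axis=0)
imp_new = np.abs(explainer.shap_values(X_new)).mean(axis=0)
drift = pd.DataFrame({
    "feature": features,
    "mean_abs_shap_ref": imp_ref,
    "mean_abs_shap_new": imp_new,
    "relative_change": (imp_new - imp_ref) / imp_ref,
})
print(drift.sort_values("relative_change", key=np.abs, ascending=False))
```

In a deployment, the reference importances would be computed once at training time and the comparison repeated on rolling windows of incoming attendances, alongside standard performance monitoring such as AUROC.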
- Published
- 2021