1. A Multi-Center Study on the Adaptability of a Shared Foundation Model for Electronic Health Records
- Author
Guo, Lin Lawrence, Fries, Jason, Steinberg, Ethan, Fleming, Scott Lanyon, Morse, Keith, Aftandilian, Catherine, Posada, Jose, Shah, Nigam, and Sung, Lillian
- Abstract
Foundation models hold promise for transforming AI in healthcare by providing modular components that are easily adaptable to downstream healthcare tasks, making AI development more scalable and cost-effective. Structured EHR foundation models, trained on coded medical records from millions of patients, have demonstrated benefits including increased performance with fewer training labels and improved robustness to distribution shifts. However, questions remain about the feasibility of sharing these models across hospitals and about their performance on local task adaptation. This multi-center study examined the adaptability of a recently released structured EHR foundation model ($FM_{SM}$), trained on longitudinal medical record data from 2.57M Stanford Medicine patients. Experiments were conducted using EHR data from The Hospital for Sick Children and MIMIC-IV. We assessed both adaptability via continued pretraining on local data and task adaptability relative to baselines trained from scratch at each site, including a local foundation model. We evaluated the performance of these models on 8 clinical prediction tasks. In both datasets, adapting the off-the-shelf $FM_{SM}$ matched the performance of gradient boosting machine (GBM) models locally trained on all data while providing a 13% improvement in settings with few task-specific training labels. With continued pretraining on local data, label efficiency improved substantially, such that $FM_{SM}$ required fewer than 1% of training examples to match the fully trained GBM's performance. Continued pretraining was also 60 to 90% more sample-efficient than training local foundation models from scratch. Our findings show that adapting shared EHR foundation models across hospitals provides improved prediction performance at lower cost, underscoring the utility of base foundation models as modular components that streamline the development of healthcare AI.
- Comment
46 pages, 5 figures, 3 tables, 14 appendices
- Published
2023