Clinical element models in the SHARPn consortium
- Author
Hongfang Liu, Vinod C. Kaggal, Joseph F. Coyle, Ning Zhuo, Christopher G. Chute, Thomas A. Oniki, Stanley M. Huff, Calvin E. Beebe, Craig G. Parker, Kyle Marchant, and Harold R. Solbrig
- Subjects
Normalization (statistics), Vocabulary, Controlled vocabulary, Source data, Computer science, Information Storage and Retrieval, Health Informatics, Health Information Systems, Terminology, Electronic Health Records, Semantics, Scalability, Medical Record Linkage, Data science, Utah
- Abstract
Objective: The objective of the Strategic Health IT Advanced Research Project area four (SHARPn) was to develop open-source tools that could be used for the normalization of electronic health record (EHR) data for secondary use—specifically, for high throughput phenotyping. We describe the role of Intermountain Healthcare’s Clinical Element Models ([CEMs] Intermountain Healthcare Health Services, Inc, Salt Lake City, Utah) as normalization “targets” within the project.

Materials and Methods: Intermountain’s CEMs were either repurposed or created for the SHARPn project. A CEM describes “valid” structure and semantics for a particular kind of clinical data. CEMs are expressed in a computable syntax that can be compiled into implementation artifacts. The modeling team and SHARPn colleagues agilely gathered requirements and developed and refined models.

Results: Twenty-eight “statement” models (analogous to “classes”) and numerous “component” CEMs and their associated terminology were repurposed or developed to satisfy SHARPn high throughput phenotyping requirements. Model (structural) mappings and terminology (semantic) mappings were also created. Source data instances were normalized to CEM-conformant data and stored in CEM instance databases. A model browser and request site were built to facilitate the development.

Discussion: The modeling efforts demonstrated the need to address context differences and granularity choices and highlighted the inevitability of iso-semantic models. The need for content expertise and “intelligent” content tooling was also underscored. We discuss scalability and sustainability expectations for a CEM-based approach and describe the place of CEMs relative to other current efforts.

Conclusions: The SHARPn effort demonstrated the normalization and secondary use of EHR data. CEMs proved capable of capturing data originating from a variety of sources within the normalization pipeline and serving as suitable normalization targets.
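The abstract distinguishes model (structural) mappings from terminology (semantic) mappings when normalizing source data into CEM-conformant instances. A minimal sketch of that two-step idea is below; the model shape, field names, and local-to-LOINC code mapping are all invented for illustration and are not the actual SHARPn artifacts or CEM syntax.

```python
# Illustrative sketch only: the real SHARPn pipeline compiled CEMs into
# implementation artifacts; this toy class and mapping table are assumptions.
from dataclasses import dataclass

# Hypothetical terminology (semantic) map: local lab code -> standard code
TERMINOLOGY_MAP = {
    "GLU-SER": ("2345-7", "LOINC"),  # serum/plasma glucose (assumed mapping)
}

@dataclass
class MeasurementCem:
    """Toy stand-in for a CEM-conformant 'statement' model instance."""
    code: str
    code_system: str
    value: float
    units: str

def normalize(source: dict) -> MeasurementCem:
    """Normalize a raw EHR record: semantic then structural mapping."""
    code, system = TERMINOLOGY_MAP[source["local_code"]]  # semantic mapping
    return MeasurementCem(                                # structural mapping
        code=code,
        code_system=system,
        value=float(source["val"]),
        units=source["units"],
    )

raw = {"local_code": "GLU-SER", "val": "98", "units": "mg/dL"}
cem = normalize(raw)
print(cem.code, cem.code_system, cem.value, cem.units)
```

Once source instances are normalized this way, they share one target structure and one code system, which is what makes downstream storage in CEM instance databases and high throughput phenotyping queries tractable.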
- Published
- 2015