
Privacy-preserving large language models for structured medical information retrieval.

Authors :
Wiest, Isabella Catharina
Ferber, Dyke
Zhu, Jiefu
van Treeck, Marko
Meyer, Sonja K.
Juglan, Radhika
Carrero, Zunamys I.
Paech, Daniel
Kleesiek, Jens
Ebert, Matthias P.
Truhn, Daniel
Kather, Jakob Nikolas
Source :
NPJ Digital Medicine; 9/20/2024, Vol. 7 Issue 1, p1-9, 9p
Publication Year :
2024

Abstract

Most clinical information is encoded as free text and is therefore not accessible for quantitative analysis. This study presents an open-source pipeline that uses the locally deployed large language model (LLM) "Llama 2" to extract quantitative information from clinical text and evaluates its performance in identifying features of decompensated liver cirrhosis. The LLM identified five key clinical features in a zero- and one-shot manner from 500 patient medical histories in the MIMIC IV dataset. We compared LLMs of three sizes and various prompt engineering approaches, with predictions compared against ground truth established by three blinded medical experts. Our pipeline achieved high accuracy, detecting liver cirrhosis with 100% sensitivity and 96% specificity. The 70-billion-parameter model, which outperformed its smaller versions, also achieved high sensitivity and specificity for detecting ascites (95%, 95%), confusion (76%, 94%), abdominal pain (84%, 97%), and shortness of breath (87%, 97%). Our study demonstrates that locally deployed LLMs can extract clinical information from free text with low hardware requirements. [ABSTRACT FROM AUTHOR]
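The abstract reports model performance per clinical feature as a (sensitivity, specificity) pair, computed by comparing the LLM's binary predictions against expert-annotated ground truth. A minimal sketch of that evaluation step is shown below; the function name and the toy label lists are illustrative assumptions, not part of the authors' pipeline.

```python
def sensitivity_specificity(y_true, y_pred):
    """Compare binary predictions against ground truth.

    Sensitivity = TP / (TP + FN): fraction of true positives detected.
    Specificity = TN / (TN + FP): fraction of true negatives detected.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Hypothetical per-patient labels for one feature (e.g. ascites present/absent)
truth = [1, 1, 1, 1, 0, 0, 0, 0]
preds = [1, 1, 1, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(truth, preds)
# sens = 0.75, spec = 0.75
```

In the study, each of the five features (liver cirrhosis, ascites, confusion, abdominal pain, shortness of breath) would yield one such pair per model and prompting strategy.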

Details

Language :
English
ISSN :
23986352
Volume :
7
Issue :
1
Database :
Complementary Index
Journal :
NPJ Digital Medicine
Publication Type :
Academic Journal
Accession Number :
179772007
Full Text :
https://doi.org/10.1038/s41746-024-01233-2