
Potentials and pitfalls of ChatGPT and natural-language artificial intelligence models for the understanding of laboratory medicine test results. An assessment by the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Working Group on Artificial Intelligence (WG-AI)

Authors :
Cadamuro, Janne
Cabitza, Federico
Debeljak, Zeljko
De Bruyne, Sander
Frans, Glynis
Perez, Salomon Martin
Ozdemir, Habib
Tolios, Alexander
Carobene, Anna
Padoan, Andrea
Publication Year :
2023

Abstract

Objectives: ChatGPT, a tool based on natural language processing (NLP), is on everyone's mind, and several potential applications in healthcare have already been proposed. However, since the ability of this tool to interpret laboratory test results has not yet been tested, the EFLM Working Group on Artificial Intelligence (WG-AI) has set itself the task of closing this gap with a systematic approach.

Methods: WG-AI members generated 10 simulated laboratory reports of common parameters, which were then passed to ChatGPT for interpretation according to reference intervals (RI) and units, using an optimized prompt. The results were subsequently evaluated independently by all WG-AI members with respect to relevance, correctness, helpfulness, and safety.

Results: ChatGPT recognized all laboratory tests, detected whether they deviated from the RI, and gave a test-by-test as well as an overall interpretation. The interpretations were rather superficial, not always correct, and only in some cases judged to be coherent. The magnitude of the deviation from the RI seldom played a role in the interpretation of laboratory tests, and the artificial intelligence (AI) made no meaningful suggestions regarding follow-up diagnostics or further procedures in general.

Conclusions: ChatGPT in its current form, not being specifically trained on medical data or laboratory data in particular, may at best be considered a tool capable of interpreting a laboratory report on a test-by-test basis, but not of interpreting the overall diagnostic picture. Future generations of similar AIs trained on medical ground-truth data may well revolutionize current processes in healthcare, although such an implementation is not yet ready.

Details

Database :
OAIster
Notes :
Print, English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1456740972
Document Type :
Electronic Resource