Vignette-based comparative analysis of ChatGPT and specialist treatment decisions for rheumatic patients: results of the Rheum2Guide study.

Authors :
Labinsky, Hannah
Nagler, Lea-Kristin
Krusche, Martin
Griewing, Sebastian
Aries, Peer
Kroiß, Anja
Strunz, Patrick-Pascal
Kuhn, Sebastian
Schmalzing, Marc
Gernert, Michael
Knitza, Johannes
Source :
Rheumatology International. Oct2024, Vol. 44 Issue 10, p2043-2053. 11p.
Publication Year :
2024

Abstract

Background: The complex nature of rheumatic diseases poses considerable challenges for clinicians when developing individualized treatment plans. Large language models (LLMs) such as ChatGPT could enable treatment decision support.

Objective: To compare treatment plans generated by ChatGPT-3.5 and GPT-4 to those of a clinical rheumatology board (RB).

Design/methods: Fictional patient vignettes were created, and GPT-3.5, GPT-4, and the RB were queried to provide respective first- and second-line treatment plans with underlying justifications. Four rheumatologists from different centers, blinded to the origin of the treatment plans, selected the overall preferred treatment concept and rated each plan's safety, EULAR guideline adherence, medical adequacy, overall quality, justification, and completeness, as well as patient vignette difficulty, on a 5-point Likert scale.

Results: 20 fictional vignettes covering various rheumatic diseases and varying difficulty levels were assembled, and a total of 160 ratings were assessed. In 68.8% (110/160) of cases, raters preferred the RB's treatment plans over those generated by GPT-4 (16.3%; 26/160) and GPT-3.5 (15.0%; 24/160). GPT-4's plans were chosen more frequently for first-line treatments compared to GPT-3.5. No significant safety differences were observed between the RB's and GPT-4's first-line treatment plans. Rheumatologists' plans received significantly higher ratings in guideline adherence, medical appropriateness, completeness, and overall quality. Ratings did not correlate with vignette difficulty. LLM-generated plans were notably longer and more detailed.

Conclusion: GPT-4 and GPT-3.5 generated safe, high-quality treatment plans for rheumatic diseases, demonstrating promise in clinical decision support. Future research should investigate detailed standardized prompts and the impact of LLM usage on clinical decisions.

Details

Language :
English
ISSN :
0172-8172
Volume :
44
Issue :
10
Database :
Academic Search Index
Journal :
Rheumatology International
Publication Type :
Academic Journal
Accession number :
179605662
Full Text :
https://doi.org/10.1007/s00296-024-05675-5