The quality and readability of patient information provided by ChatGPT: can AI reliably explain common ENT operations?
- Source: European Archives of Oto-Rhino-Laryngology. Nov 2024, Vol. 281, Issue 11, p6147-6153. 7p.
- Publication Year: 2024
Abstract
- Purpose: Access to high-quality and comprehensible patient information is crucial. However, information provided by increasingly prevalent Artificial Intelligence tools has not been thoroughly investigated. This study assesses the quality and readability of information from ChatGPT regarding three index ENT operations: tonsillectomy, adenoidectomy, and grommets. Methods: We asked ChatGPT standard and simplified questions. Readability was calculated using Flesch-Kincaid Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI) and Simple Measure of Gobbledygook (SMOG) scores. We assessed quality using the DISCERN instrument and compared these scores with those of ENT UK patient leaflets. Results: ChatGPT readability was poor, with mean FRES of 38.9 and 55.1 pre- and post-simplification, respectively. Simplified information from ChatGPT was 43.6% more readable (FRES) but scored 11.6% lower for quality. ENT UK patient information readability and quality were consistently higher. Conclusions: ChatGPT can simplify information at the expense of quality, resulting in shorter answers with important omissions. Limitations in knowledge and insight curb its reliability for healthcare information. Patients should use reputable sources from professional organisations alongside clear communication with their clinicians for well-informed consent and decision-making. [ABSTRACT FROM AUTHOR]
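For context, the four readability indices named in the abstract are standard formulas computed from sentence, word, and syllable counts. The sketch below is not the authors' code; it simply illustrates the published formulas, using a crude vowel-group heuristic for syllable counting (real readability tools typically use dictionary-based syllabification, so exact scores will differ).

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels, at least one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_scores(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = [count_syllables(w) for w in words]
    n_sent, n_words = len(sentences), len(words)
    n_syll = sum(syllables)
    n_complex = sum(1 for s in syllables if s >= 3)  # words with 3+ syllables

    wps = n_words / n_sent   # average words per sentence
    spw = n_syll / n_words   # average syllables per word

    return {
        # Flesch Reading Ease Score: higher means easier to read
        "FRES": 206.835 - 1.015 * wps - 84.6 * spw,
        # Flesch-Kincaid Grade Level: approximate US school grade
        "FKGL": 0.39 * wps + 11.8 * spw - 15.59,
        # Gunning Fog Index (complex-word rules simplified here)
        "GFI": 0.4 * (wps + 100 * n_complex / n_words),
        # SMOG grade (formally defined for samples of 30+ sentences)
        "SMOG": 1.0430 * (n_complex * 30 / n_sent) ** 0.5 + 3.1291,
    }

print(readability_scores(
    "Tonsillectomy is an operation to remove the tonsils. "
    "It is usually done under general anaesthetic."
))
```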
Details
- Language: English
- ISSN: 09374477
- Volume: 281
- Issue: 11
- Database: Academic Search Index
- Journal: European Archives of Oto-Rhino-Laryngology
- Publication Type: Academic Journal
- Accession Number: 180499409
- Full Text: https://doi.org/10.1007/s00405-024-08598-w