Clinical efficacy of pre-trained large language models through the lens of aphasia
- Source: Scientific Reports, Vol 14, Iss 1, Pp 1-16 (2024)
- Publication Year: 2024
- Publisher: Nature Portfolio, 2024.
Abstract
- The rapid development of large language models (LLMs) motivates us to explore how such state-of-the-art natural language processing systems can inform aphasia research. What kind of language indices can we derive from a pre-trained LLM? How do they differ from or relate to existing language features in aphasia? To what extent can LLMs serve as an interpretable and effective diagnostic and measurement tool in a clinical context? To investigate these questions, we constructed predictive and correlational models that use mean surprisals from LLMs as predictor variables. Using AphasiaBank archived data, we validated our models’ efficacy in aphasia diagnosis, measurement, and prediction. Our findings are that LLM surprisals can effectively detect the presence of aphasia and distinguish different natures of the disorder, that LLMs in conjunction with existing language indices improve models’ efficacy in subtyping aphasia, and that LLM surprisals can capture common agrammatic deficits at both the word and sentence level. Overall, LLMs have the potential to advance automatic and precise aphasia prediction. A natural language processing pipeline can benefit greatly from integrating LLMs, enabling us to refine models of existing language disorders such as aphasia.
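As context for the surprisal-based features described in the abstract, the sketch below shows one plausible way to compute a mean surprisal score for a transcript with a pre-trained LLM. The model choice (GPT-2 via Hugging Face Transformers), the conversion to bits, and the whole-transcript averaging are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: mean surprisal of a transcript under a pre-trained LM.
# Assumptions (not from the paper): GPT-2 as the LLM, bits as the unit,
# and a single average over all predicted tokens in the text.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def mean_surprisal(text: str) -> float:
    """Average surprisal (negative log2 probability) per token under the LM."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy (in nats) over the predicted tokens.
        loss = model(ids, labels=ids).loss
    return loss.item() / math.log(2)  # convert nats to bits


# Example: a higher score suggests the text is less predictable to the LM.
print(mean_surprisal("The boy is kicking the ball."))
```

A feature like this could then serve as one predictor variable alongside existing language indices in a diagnostic or subtyping model.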
Details
- Language: English
- ISSN: 2045-2322
- Volume: 14
- Issue: 1
- Database: Directory of Open Access Journals
- Journal: Scientific Reports
- Publication Type: Academic Journal
- Accession number: edsdoj.15d78d84d5f14b17b96e6c63d28d1440
- Document Type: article
- Full Text: https://doi.org/10.1038/s41598-024-66576-y