
Machine learning to optimize literature screening in medical guideline development

Authors :
Wouter Harmsen
Janke de Groot
Albert Harkema
Ingeborg van Dusseldorp
Jonathan de Bruin
Sofie van den Brand
Rens van de Schoot
Source :
Systematic Reviews, Vol 13, Iss 1, Pp 1-10 (2024)
Publication Year :
2024
Publisher :
BMC, 2024.

Abstract

Objectives: In a time of exponential growth of new evidence supporting clinical decision-making, combined with a labor-intensive process of selecting this evidence, methods are needed to speed up current processes to keep medical guidelines up to date. This study evaluated the performance and feasibility of active learning to support the selection of relevant publications within medical guideline development and studied the role of noisy labels.

Design: We used a mixed-methods design. The manual literature selection process of two independent clinicians was evaluated for 14 searches. This was followed by a series of simulations comparing random reading with screening prioritization based on active learning. We identified hard-to-find papers and checked their labels in a reflective dialogue.

Main outcome measures: Inter-rater reliability was assessed using Cohen's kappa (κ). To evaluate the performance of active learning, we used the Work Saved over Sampling at 95% recall (WSS@95) and the percentage of Relevant Records Found after reading only 10% of the total number of records (RRF@10). We used the average time to discovery (ATD) to detect records with potentially noisy labels. Finally, the accuracy of labeling was discussed in a reflective dialogue with guideline developers.

Results: Mean κ for manual title-abstract selection by clinicians was 0.50 and varied between −0.01 and 0.87, based on 5021 abstracts. WSS@95 ranged from 50.15% (SD = 17.7) for the selection by clinicians, to 69.24% (SD = 11.5) for the selection by research methodologists, up to 75.76% (SD = 12.2) for the final full-text inclusion. A similar pattern was seen for RRF@10, ranging from 48.31% (SD = 23.3) to 62.8% (SD = 21.20) and 65.58% (SD = 23.25). The performance of active learning deteriorates with higher noise: compared with the final full-text selection, the selections made by clinicians or research methodologists lowered WSS@95 by 25.61% and 6.25%, respectively.

Conclusion: While active machine learning tools can accelerate literature screening within guideline development, they can only work as well as the input given by human raters. Noisy labels make noisy machine learning.
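The outcome measures named above have simple closed forms. As a reading aid only, the following Python sketch (not the authors' code; the function names and the toy ranking are illustrative) computes Cohen's κ, WSS@95, RRF@10, and a per-record average time to discovery, assuming each simulated screening run is summarized as an ordered list of binary relevance labels.

import math

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' binary include/exclude decisions."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n     # observed agreement
    p_a, p_b = sum(a) / n, sum(b) / n               # per-rater inclusion rates
    p_e = p_a * p_b + (1 - p_a) * (1 - p_b)         # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

def wss_at_recall(labels, recall=0.95):
    """Work Saved over Sampling: fraction of screening work saved, relative
    to random reading, when stopping once `recall` of the relevant records
    have been found (WSS@95 uses recall=0.95)."""
    n, n_relevant = len(labels), sum(labels)
    target = math.ceil(recall * n_relevant)
    found = 0
    for i, y in enumerate(labels, start=1):
        found += y
        if found >= target:
            return recall - i / n   # = (records skipped)/n - (1 - recall)
    return 0.0

def rrf_at(labels, fraction=0.10):
    """Relevant Records Found: share of all relevant records found after
    reading only the top `fraction` of the ranking (RRF@10 uses 0.10)."""
    n_read = math.ceil(fraction * len(labels))
    return sum(labels[:n_read]) / sum(labels)

def avg_time_to_discovery(rankings, record_id):
    """Average Time to Discovery for one record: its mean screening position,
    as a fraction of the dataset, across several simulated orderings. Records
    with a high ATD are hard to find and can be flagged for label checking."""
    n = len(rankings[0])
    return sum(r.index(record_id) + 1 for r in rankings) / (len(rankings) * n)

# Toy example: 4 relevant records out of 10, clustered near the top of the
# ranking, as active learning aims to achieve.
labels = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
print(round(wss_at_recall(labels), 2))   # 0.25 -> 25% of the work saved at 95% recall
print(round(rrf_at(labels), 2))          # 0.25 -> 1 of 4 relevant records found at 10%

Note that under these definitions a noisy gold standard directly depresses both scores: mislabeled relevant records sink down the ranking, pushing the 95%-recall stopping point later, which is the degradation the Results section quantifies.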

Details

Language :
English
ISSN :
2046-4053
Volume :
13
Issue :
1
Database :
Directory of Open Access Journals
Journal :
Systematic Reviews
Publication Type :
Academic Journal
Accession number :
edsdoj.2d2755623ae049e391b2fc4386db5ed6
Document Type :
article
Full Text :
https://doi.org/10.1186/s13643-024-02590-5