Prospective evaluation of an artificial intelligence-enabled algorithm for automated diabetic retinopathy screening of 30 000 patients
- Author
Christopher G. Owen, Laura Webster, Peter Heydon, Irene M Stratton, Catherine A Egan, Adnan Tufail, Louis Bolter, John Anderson, Alain Du Chemin, Alicja R. Rudnicka, Peter H Scanlon, Samantha Mann, S J Aldington, and Ryan Chambers
- Subjects
Male, Female, Humans, Child, Adolescent, Young Adult, Adult, Middle Aged, Aged, Aged, 80 and over, Epidemiology, Imaging, Image Processing, Computer-Assisted, Mass Screening, Prospective Studies, Retrospective Studies, Follow-Up Studies, Public health, Diabetic retinopathy, Diabetic retinopathy screening, Retinopathy, Maculopathy, Retina, Diabetes mellitus, Telemedicine, Artificial Intelligence, Algorithms, Triage, Reproducibility of Results, Diagnostic tests/Investigation, Ophthalmology, Clinical Trial, Clinical Science, Sensory Systems, Cellular and Molecular Neuroscience, Medical Education
- Abstract
Background/aims: Human grading of digital images from diabetic retinopathy (DR) screening programmes represents a significant challenge because of the increasing prevalence of diabetes. We evaluated the performance of an automated artificial intelligence (AI) algorithm in triaging retinal images from the English Diabetic Eye Screening Programme (DESP) into test-positive/technical failure versus test-negative, using human grading following a standard national protocol as the reference standard.

Methods: Retinal images from 30 405 consecutive screening episodes from three English DESPs were graded manually following a standard national protocol and by an automated process using machine learning-enabled software, EyeArt v2.1. Screening performance (sensitivity, specificity) and diagnostic accuracy (95% CIs) were determined using the human grades as the reference standard.

Results: The sensitivity (95% CI) of EyeArt was 95.7% (94.8% to 96.5%) for referable retinopathy (human graded ungradable, referable maculopathy, moderate-to-severe non-proliferative or proliferative). This comprises sensitivities of 98.3% (97.3% to 98.9%) for mild-to-moderate non-proliferative retinopathy with referable maculopathy, 100% (98.7% to 100%) for moderate-to-severe non-proliferative retinopathy and 100% (97.9% to 100%) for proliferative disease. EyeArt agreed with the human grade of no retinopathy (specificity) in 68% (67% to 69%) of cases, with a specificity of 54.0% (53.4% to 54.5%) when non-referable retinopathy was included.

Conclusion: The algorithm demonstrated safe levels of sensitivity for high-risk retinopathy in a real-world screening service, with a specificity that could halve the workload for human graders. AI machine learning and deep learning algorithms such as this can provide clinically equivalent, rapid detection of retinopathy, particularly in settings where a trained workforce is unavailable or where large-scale and rapid results are needed.
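The abstract reports sensitivity and specificity with 95% CIs but does not state which interval method was used. As an illustrative sketch only, assuming Wilson score intervals and using made-up counts (not the study data), the following Python shows how such metrics are derived from a 2x2 confusion matrix against a reference standard:

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half_width, centre + half_width)

def screening_performance(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Sensitivity and specificity (with 95% CIs) from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "sensitivity_95ci": wilson_ci(tp, tp + fn),
        "specificity": tn / (tn + fp),
        "specificity_95ci": wilson_ci(tn, tn + fp),
    }

# Hypothetical counts for illustration only; these are NOT the study data.
print(screening_performance(tp=2300, fn=100, tn=15000, fp=7000))
```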
- Published
- 2020