
Enhancing the ophthalmic AI assessment with a fundus image quality classifier using local and global attention mechanisms

Authors :
Shengzhan Wang
Wenyue Shen
Zhiyuan Gao
Xiaoyu Jiang
Yaqi Wang
Yunxiang Li
Xiaoyu Ma
Wenhao Wang
Shuanghua Xin
Weina Ren
Kai Jin
Juan Ye
Source :
Frontiers in Medicine, Vol 11 (2024)
Publication Year :
2024
Publisher :
Frontiers Media S.A., 2024.

Abstract

Background: The assessment of image quality (IQA) plays a pivotal role in image-based computer-aided diagnosis, and fundus imaging is the primary method for the screening and diagnosis of ophthalmic diseases. Conventional studies on fundus IQA tend to rely on simplistic datasets for evaluation and focus predominantly on either local or global information rather than a synthesis of both. Moreover, the interpretability of these studies often lacks compelling evidence. To address these issues, this study introduces the Local and Global Attention Aggregated Deep Neural Network (LGAANet), an approach that integrates both local and global information for enhanced analysis.

Methods: LGAANet was developed and validated using a Multi-Source Heterogeneous Fundus (MSHF) database encompassing a diverse collection of images. The dataset includes 802 color fundus photography (CFP) images (302 from portable cameras) and 500 ultrawide-field (UWF) images from 904 patients with diabetic retinopathy (DR) or glaucoma, as well as healthy individuals. Image quality was assessed by three ophthalmologists using the human visual system as a benchmark. The model further employs attention mechanisms and saliency maps to bolster its interpretability.

Results: On the CFP dataset, LGAANet achieved high accuracy in three critical dimensions of image quality (illumination, clarity, and contrast, chosen to reflect the characteristics of the human visual system and to indicate which aspects of image quality could be improved), recording scores of 0.947, 0.924, and 0.947, respectively. On the UWF dataset, the model achieved accuracies of 0.889, 0.913, and 0.923, respectively. These results underscore the efficacy of LGAANet in distinguishing between varying degrees of image quality with high precision.

Conclusion: To our knowledge, LGAANet is the first algorithm trained on an MSHF dataset specifically for fundus IQA, marking a significant milestone in the advancement of computer-aided diagnosis in ophthalmology. This research contributes a novel methodology for the assessment and interpretation of fundus images in the detection and diagnosis of ocular diseases.
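The record does not specify LGAANet's architecture beyond its combination of local and global attention. As an illustration only, the following toy sketch shows one generic way such local (within-patch) and global (across-patch) attention branches can be aggregated into a single feature vector; all function names, shapes, and the mean-pooling aggregation are assumptions, not the authors' design.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Scaled dot-product self-attention over a set of token vectors.
    # x: (n, d) -> (n, d) attended features.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)   # (n, n) pairwise similarities
    return softmax(scores) @ x

def local_global_features(patches):
    """Aggregate a local branch (attention inside each patch) and a
    global branch (attention across patch summaries). Hypothetical
    sketch, not the published LGAANet architecture.
    patches: (P, n, d) -> (2*d,) aggregated descriptor."""
    # Local branch: attend within each patch independently.
    local = np.stack([self_attention(p) for p in patches])  # (P, n, d)
    # Summarize each patch by mean-pooling its attended tokens.
    summaries = local.mean(axis=1)                          # (P, d)
    # Global branch: attend across the patch summaries.
    global_feat = self_attention(summaries)                 # (P, d)
    # Aggregate: concatenate pooled local and global descriptors.
    return np.concatenate([summaries.mean(axis=0),
                           global_feat.mean(axis=0)])       # (2*d,)

rng = np.random.default_rng(0)
patches = rng.standard_normal((4, 16, 8))  # 4 patches, 16 tokens, dim 8
feat = local_global_features(patches)
print(feat.shape)  # (16,)
```

In a real model the two branches would be learned convolutional or transformer blocks and the aggregation would feed a classification head for each quality dimension (illumination, clarity, contrast); the sketch only captures the local-then-global information flow.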

Details

Language :
English
ISSN :
2296-858X
Volume :
11
Database :
Directory of Open Access Journals
Journal :
Frontiers in Medicine
Publication Type :
Academic Journal
Accession number :
edsdoj.02b296853bcf431eab09e65dc2e19fcf
Document Type :
article
Full Text :
https://doi.org/10.3389/fmed.2024.1418048