En‐ConvNet: A novel approach for glaucoma detection from color fundus images using ensemble of deep convolutional neural networks.

Authors :
Elangovan, Poonguzhali
Nath, Malaya Kumar
Source :
International Journal of Imaging Systems & Technology. Nov2022, Vol. 32 Issue 6, p2034-2048. 15p.
Publication Year :
2022

Abstract

Glaucomatous optic neuropathy is the leading cause of incurable vision impairment and blindness worldwide. Manual interpretation of the pathological structures in fundus images is time-consuming and requires the expertise of a competent specialist. With the development of deep learning approaches, automated glaucoma diagnosis has become feasible and effective for large-scale screening. Convolutional neural networks in particular have emerged as a promising choice for glaucoma detection from fundus images, owing to their remarkable success in image classification. Transferring the optimized weights from a pre-trained model expedites and simplifies the training of a deep neural network. In this paper, a deep ensemble model based on the stacking ensemble learning technique is developed to attain optimum performance in classifying glaucomatous and normal images. Thirteen pre-trained models (Alexnet, Googlenet, VGG-16, VGG-19, Squeezenet, Resnet-18, Resnet-50, Resnet-101, Efficientnet-b0, Mobilenet-v2, Densenet-201, Inception-v3, and Xception) are implemented, and their performance is compared across 65 configurations comprising the 13 CNN architectures and five different classification approaches. A two-stage ensemble selection technique is proposed to select the optimal configurations, whose outputs are pooled using probability averaging. The final classification is performed by an SVM classifier. To validate the deep ensemble model, publicly available databases (DRISHTI-GS1-R, ORIGA-R, RIM-ONE2-R, LAG-R, and ACRIMA-R) are modified using an oversampling data-level technique. Ensembling the best configurations achieves overall classification accuracies of 93.4%, 79.6%, 91.3%, 99.5%, and 99.6% on the DRISHTI-GS1-R, ORIGA-R, RIM-ONE2-R, ACRIMA-R, and LAG-R databases, respectively. [ABSTRACT FROM AUTHOR]
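The pipeline the abstract describes (select the best base configurations, average their class probabilities, then classify with an SVM) can be sketched as follows. This is a minimal illustrative analogue, not the paper's implementation: the base "configurations" are stand-in classifiers on synthetic data rather than pre-trained CNNs on fundus images, and the single score-and-keep-top-k step is a simplified stand-in for the paper's two-stage ensemble selection.

```python
# Hedged sketch of stacking with probability averaging + SVM.
# All model names, data, and the selection rule here are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Stand-ins for the pre-trained CNN configurations: each classifier
# sees a different random subset of features, mimicking diverse models.
base_models = []
for seed in range(5):
    cols = rng.choice(X.shape[1], size=10, replace=False)
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
    base_models.append((cols, clf))

# Selection stage (simplified analogue of two-stage ensemble selection):
# keep the configurations that score best on held-out data.
scored = sorted(base_models,
                key=lambda m: m[1].score(X_te[:, m[0]], y_te),
                reverse=True)
selected = scored[:3]

def averaged_probs(X_):
    """Probability averaging over the selected configurations."""
    return np.mean([clf.predict_proba(X_[:, cols]) for cols, clf in selected],
                   axis=0)

# Final stage: an SVM classifier trained on the averaged probabilities.
svm = SVC().fit(averaged_probs(X_tr), y_tr)
acc = svm.score(averaged_probs(X_te), y_te)
print(round(acc, 2))
```

In a faithful reproduction, `averaged_probs` would take softmax outputs from the 13 fine-tuned CNNs, and selection would follow the paper's two-stage criterion; the structure of the stack is the same.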

Details

Language :
English
ISSN :
0899-9457
Volume :
32
Issue :
6
Database :
Academic Search Index
Journal :
International Journal of Imaging Systems & Technology
Publication Type :
Academic Journal
Accession number :
159980317
Full Text :
https://doi.org/10.1002/ima.22761