7 results for "Ruben Hemelings"
Search Results
2. Deep learning on fundus images detects glaucoma beyond the optic disc
- Author
-
Bart Elen, Ruben Hemelings, Patrick De Boever, Matthew B. Blaschko, João Barbosa-Breda, and Ingeborg Stalmans
- Subjects
Glaucoma, Fundus image, Optic disc, Optic nerve, Retina, Deep learning, Artificial intelligence, Computer vision, Image resolution, Area under curve, Regression analysis, Sensitivity and specificity, Computer-assisted diagnosis - Abstract
Although unprecedented sensitivity and specificity values are reported, recent deep learning models for glaucoma detection lack decision transparency. Here, we propose a methodology that advances explainable deep learning in the field of glaucoma detection and vertical cup-disc ratio (VCDR) estimation, an important risk factor. We trained and evaluated deep learning models using fundus images that underwent a systematic cropping policy. We defined the crop radius as a percentage of image size, centered on the optic nerve head (ONH), with an equidistantly spaced range from 10-60% (ONH crop policy). The inverse of the cropping mask was also applied (periphery crop policy). Models trained using original images resulted in an area under the curve (AUC) of 0.94 [95% CI 0.92-0.96] for glaucoma detection, and a coefficient of determination (R²) equal to 77% [95% CI 0.77-0.79] for VCDR estimation. Models trained on images with the ONH absent still obtained significant performance (0.88 [95% CI 0.85-0.90] AUC for glaucoma detection and 37% [95% CI 0.35-0.40] R² for VCDR estimation in the most extreme setup of 60% ONH crop). Our findings provide the first irrefutable evidence that deep learning can detect glaucoma from fundus image regions outside the ONH. Research Group Ophthalmology, KU Leuven; VITO NV; Flemish Government; European Commission
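The two crop policies described in the abstract amount to circular masking centred on the ONH. The sketch below is an illustrative reconstruction only, not the authors' code: the function name `crop_policy`, the `mode` argument, and the choice of the larger image dimension as the reference for the percentage radius are all assumptions.

```python
import numpy as np

def crop_policy(image, onh_center, crop_pct, mode="onh"):
    """Mask a circular region centred on the optic nerve head (ONH).

    crop_pct is the crop radius as a percentage of image size (the paper
    uses an equidistant 10-60% range). mode="onh" removes the ONH region
    (ONH crop policy); mode="periphery" keeps only the ONH region, i.e.
    applies the inverse mask (periphery crop policy).
    """
    h, w = image.shape[:2]
    radius = (crop_pct / 100.0) * max(h, w)  # assumption: % of larger side
    cy, cx = onh_center
    yy, xx = np.ogrid[:h, :w]
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    out = image.copy()
    if mode == "onh":
        out[inside] = 0      # blank out the ONH disc
    else:
        out[~inside] = 0     # blank out everything except the ONH disc
    return out
```

By construction the two policies are complementary: summing an ONH-cropped and a periphery-cropped copy of the same image reconstructs the original.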
- Published
- 2021
3. Pointwise Visual Field Estimation From Optical Coherence Tomography in Glaucoma Using Deep Learning
- Author
-
Ruben Hemelings, Bart Elen, João Barbosa-Breda, Erwin Bellon, Matthew B. Blaschko, Patrick De Boever, and Ingeborg Stalmans
- Subjects
Visual field, Optical coherence tomography, Convolutional neural network, Glaucoma, Structure-function, Ophthalmology, Deep learning, Vision disorders, Biomedical engineering, Retrospective studies - Abstract
Purpose: Standard automated perimetry is the gold standard to monitor visual field (VF) loss in glaucoma management, but it is prone to intrasubject variability. We trained and validated a customized deep learning (DL) regression model with an Xception backbone that estimates pointwise and overall VF sensitivity from unsegmented optical coherence tomography (OCT) scans. Methods: DL regression models were trained with four imaging modalities (circumpapillary OCT at 3.5 mm, 4.1 mm, and 4.7 mm diameter, and scanning laser ophthalmoscopy en face images) to estimate mean deviation (MD) and 52 threshold values. This retrospective study used data from patients who underwent a complete glaucoma examination, including a reliable Humphrey Field Analyzer (HFA) 24-2 SITA Standard (SS) VF exam and a SPECTRALIS OCT. Results: For MD estimation, weighted prediction averaging of all four individual models yielded a mean absolute error (MAE) of 2.89 dB (2.50-3.30) on 186 test images, reducing the baseline MAE by 54% (MAEdecr%). For estimation of the 52 VF threshold values, the weighted ensemble model resulted in an MAE of 4.82 dB (4.45-5.22), representing an MAEdecr% of 38% from the baseline of predicting the pointwise mean value. DL managed to explain 75% and 58% of the variance (R²) in MD and pointwise sensitivity estimation, respectively. Conclusions: Deep learning can estimate global and pointwise VF sensitivities that fall almost entirely within the 90% test-retest confidence intervals of the 24-2 SS test. Translational Relevance: Fast and consistent VF prediction from unsegmented OCT scans could become a solution for visual function estimation in patients unable to perform reliable VF exams. Supported by the Research Group Ophthalmology, KU Leuven and VITO NV (to RH) and the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" program.
No outside entities have been involved in the study design; in the collection, analysis, and interpretation of data; in the writing of the manuscript; or in the decision to submit the manuscript for publication. This article was presented at the Association for Research in Vision and Ophthalmology Annual Meeting (ARVO2021) virtual conference, May 1st to May 7th, 2021.
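The MAE and MAEdecr% figures above follow directly from their definitions; a minimal sketch (the function names are ours, and the naive baseline is only an example, as the paper's exact baseline choice is not spelled out in this abstract):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error (in dB) between measured and estimated VF sensitivities."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))

def mae_decr_pct(mae_model, mae_baseline):
    """MAEdecr%: relative MAE reduction of a model versus a naive baseline,
    e.g. a predictor that always outputs the training-set mean sensitivity."""
    return 100.0 * (mae_baseline - mae_model) / mae_baseline
```

With these definitions, a model MAE of half the baseline MAE corresponds to an MAEdecr% of 50%.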
- Published
- 2022
4. Pathological myopia classification with simultaneous lesion segmentation using deep learning
- Author
-
Ruben Hemelings, Bart Elen, Ingeborg Stalmans, Matthew B. Blaschko, Julie Jacob, and Patrick De Boever
- Subjects
Pathological myopia, Glaucoma, Fundus image, Optic disc, Convolutional neural network, Deep learning, Segmentation, Peripapillary atrophy, Retinal atrophy, Retinal detachment, Blindness, Medical imaging, Pattern recognition - Abstract
This investigation reports on the results of convolutional neural networks developed for the recently introduced PathologicAL Myopia (PALM) dataset, which consists of 1200 fundus images. We propose a new Optic Nerve Head (ONH)-based prediction enhancement for the segmentation of atrophy and fovea. Models trained with the 400 available training images achieved an AUC of 0.9867 for pathological myopia classification, and a Euclidean distance of 58.27 pixels on the fovea localization task, evaluated on a test set of 400 images. Dice and F1 metrics for semantic segmentation of lesions scored 0.9303 and 0.9869 on optic disc, 0.8001 and 0.9135 on retinal atrophy, and 0.8073 and 0.7059 on retinal detachment, respectively. Our work was acknowledged with an award in the context of the "PathologicAL Myopia detection from retinal images" challenge held during the IEEE International Symposium on Biomedical Imaging (April 2019). Considering that (pathological) myopia cases are often identified as false positives and negatives in classification systems for glaucoma, we envision that the current work could aid future research to discriminate between glaucomatous and highly myopic eyes, complemented by the localization and segmentation of landmarks such as the fovea, optic disc and atrophy.
- Published
- 2021
5. Accurate prediction of glaucoma from color fundus images with a convolutional neural network that relies on active and transfer learning
- Author
-
Ruben Hemelings, Bart Elen, João Barbosa-Breda, Sophie Lemmens, Maarten Meire, Sayeh Pourjavan, Evelien Vandewalle, Sara Van de Veire, Matthew B. Blaschko, Patrick De Boever, and Ingeborg Stalmans
- Subjects
Artificial intelligence, Deep learning, Fundus image, Glaucoma detection, Optic disc, Convolutional neural network, Patient referral, Receiver operating characteristic, Neuroretinal rim, Transfer learning, Retrospective studies, Follow-up studies - Abstract
Purpose: To assess the use of deep learning (DL) for computer-assisted glaucoma identification, and the impact of training using images selected by an active learning strategy, which minimizes labelling cost. Additionally, this study focuses on the explainability of the glaucoma classifier. Methods: This original investigation pooled 8433 retrospectively collected and anonymized colour optic disc-centred fundus images in order to develop a deep learning-based classifier for glaucoma diagnosis. The labels of the various deep learning models were compared with the clinical assessment by glaucoma experts. Data were analysed between March and October 2018. Outcome measures were sensitivity, specificity, area under the receiver operating characteristic curve (AUC), and the amount of data used for discriminating between glaucomatous and non-glaucomatous fundus images, at both image and patient level. Results: Trained using 2072 colour fundus images, representing 42% of the original training data, the DL model achieved an AUC of 0.995, with sensitivity and specificity of, respectively, 98.0% (CI 95.5%-99.4%) and 91% (CI 84.0%-96.0%), for glaucoma versus non-glaucoma patient referral. Conclusions: These results demonstrate the benefits of deep learning for automated glaucoma detection based on optic disc-centred fundus images. The combined use of transfer and active learning in the medical community can optimize performance of DL models while minimizing the labelling effort of domain experts. Glaucoma experts are able to use heat maps generated by the deep learning classifier to assess its decisions, which appear to be related to the inferior and superior neuroretinal rim (within the ONH), and the RNFL in the superotemporal and inferotemporal zones (outside the ONH). The first author is jointly supported by the Research Group Ophthalmology, KU Leuven and VITO NV.
No outside entities have been involved in the study design, in the collection, analysis and interpretation of data, in the writing of the manuscript, nor in the decision to submit the manuscript for publication. Thus, the authors declare that there are no conflicts of interest in this work.
- Published
- 2020
6. Artificial intelligence in glaucoma: Assisted diagnosis and risk assessment
- Author
-
Ruben Hemelings
- Subjects
Ophthalmology, Glaucoma, General medicine, Risk assessment - Published
- 2019
7. Artery-vein segmentation in fundus images using a fully convolutional network
- Author
-
Patrick De Boever, Karel Van Keer, Matthew B. Blaschko, Ruben Hemelings, Ingeborg Stalmans, and Bart Elen
- Subjects
Artery-vein segmentation, Fundus image, Fully convolutional network, Deep learning, Segmentation, Ground truth, Retinal vessels, Retinal artery, Retinal vein, Benchmarking, Computer vision, Health informatics - Abstract
Epidemiological studies demonstrate that dimensions of retinal vessels change with ocular diseases, coronary heart disease and stroke. Different metrics have been described to quantify these changes in fundus images, with arteriolar and venular calibers among the most widely used. The analysis often includes a manual procedure during which a trained grader differentiates between arterioles and venules. This step can be time-consuming and can introduce variability, especially when large volumes of images need to be analyzed. In light of the recent successes of fully convolutional networks (FCNs) applied to biomedical image segmentation, we assess their potential in the context of retinal artery-vein (A/V) discrimination. To the best of our knowledge, a deep learning (DL) architecture for simultaneous vessel extraction and A/V discrimination has not been previously employed. With the aim of improving the automation of vessel analysis, a novel application of the U-Net semantic segmentation architecture (based on FCNs) to the discrimination of arteries and veins in fundus images is presented. By utilizing DL, results are obtained that exceed accuracies reported in the literature. Our model was trained and tested on the public DRIVE and HRF datasets. For DRIVE, measuring performance on vessels wider than two pixels, the FCN achieved accuracies of 94.42% and 94.11% on arteries and veins, respectively. This represents a decrease in error of 25% over the previous state of the art reported by Xu et al. (2017). Additionally, we introduce the HRF A/V ground truth, on which our model achieves 96.98% accuracy on all discovered centerline pixels. The HRF A/V ground truth validated by an ophthalmologist, predicted A/V annotations and evaluation code are available at https://github.com/rubenhx/av-segmentation. (C) 2019 The Authors. Published by Elsevier Ltd. The first author is jointly supported by the Research Group Ophthalmology, KU Leuven and VITO NV.
The authors would like to thank Bashir Al-Diri for providing artery-vein ground truth for the DRIVE data set.
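The per-class accuracies quoted above (on artery pixels, vein pixels, or centerline pixels) amount to scoring predicted labels only where the ground truth carries the class in question. A hedged sketch of that evaluation; the label encoding and the function name are our assumptions, not taken from the released evaluation code:

```python
import numpy as np

ARTERY, VEIN = 1, 2  # assumed label encoding; background = 0

def class_accuracy(pred, truth, label):
    """Fraction of ground-truth pixels of `label` that the model labelled
    correctly, i.e. accuracy restricted to one vessel class (artery or vein)."""
    mask = truth == label
    if not mask.any():
        return float("nan")  # class absent from this ground-truth image
    return float(np.mean(pred[mask] == label))
```

Restricting the evaluation mask further (e.g. to ground-truth vessels wider than two pixels, or to skeletonized centerlines) reproduces the other reporting conventions mentioned in the abstract.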
- Published
- 2018