4,716 results for "RETINAL imaging"
Search Results
2. Association and multimodal model of retinal and blood-based biomarkers for detection of preclinical Alzheimer's disease.
- Author
- Ravichandran, Swetha, Snyder, Peter J., Alber, Jessica, Murchison, Charles F., Chaby, Lauren E., Jeromin, Andreas, and Arthur, Edmund
- Subjects
- ALZHEIMER'S disease, POSITRON emission tomography, OPTICAL coherence tomography, RECEIVER operating characteristic curves, RETINAL imaging
- Abstract
Background: The potential diagnostic value of plasma amyloidogenic beta residue 42/40 ratio (Aβ42/Aβ40 ratio), neurofilament light (NfL), tau phosphorylated at threonine-181 (p-tau181), and threonine-217 (p-tau217) has been extensively discussed in the literature. We have also previously described the association between retinal biomarkers and preclinical Alzheimer's disease (AD). The goal of this study was to evaluate the association, and a multimodal model of, retinal and plasma biomarkers for detection of preclinical AD. Methods: We included 82 cognitively unimpaired (CU) participants (141 eyes; mean age: 67 years; range: 56–80) from the Atlas of Retinal Imaging in Alzheimer's Study (ARIAS). Blood samples were assessed for concentrations of Aβ42/Aβ40 ratio, NfL, p-tau181, and p-tau217 (ALZpath, Inc.) using Single molecule array (SIMOA) technology. The Spectralis II system (Heidelberg Engineering) was used to acquire macular centered Spectral Domain Optical Coherence Tomography (SD-OCT) images for evaluation of putative retinal gliosis surface area and macular retinal nerve fiber layer (mRNFL) thickness. For all participants, correlations (adjusted for age and correlation between eyes) were assessed between retinal and blood-based biomarkers. A subgroup cohort of 57 eyes from 32 participants with recent Aβ positron emission tomography (PET) results, comprising 18 preclinical patients (Aβ PET + ve, 32 eyes) and 14 controls (Aβ PET -ve, 25 eyes) with a mean age of 69 vs. 66, p = 0.06, was included for the assessment of a multimodal model to distinguish between the two groups. For this subgroup cohort, receiver operating characteristic (ROC) analysis was performed to compare the multimodal model of retinal and plasma biomarkers vs. each biomarker alone to distinguish between the two groups. Results: Significant correlation was found between putative retinal gliosis and p-tau217 in the univariate mixed model (β = 0.48, p = 0.007) but not for the other plasma biomarkers (p > 0.05). This positive correlation was also retained in the multivariate mixed model (β = 0.43, p = 0.022). The multimodal ROC model based on retinal (gliosis area, inner inferior RNFL thickness, inner superior RNFL thickness, and inner nasal RNFL thickness) and plasma biomarkers (p-tau217 and Aβ42/Aβ40 ratio) had an excellent AUC of 0.97 (95% CI = 0.93–1.01; p < 0.001) compared to unimodal models of retinal and plasma biomarkers. Conclusions: Our analyses show the potential of integrating retinal and blood-based biomarkers for improved detection and screening of preclinical AD. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
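The multimodal analysis in entry 2 combines retinal measures (putative gliosis area, inner RNFL thicknesses) with plasma markers (p-tau217, Aβ42/Aβ40 ratio) in a single ROC model and compares it against unimodal models. A minimal sketch of that kind of comparison is shown below, using synthetic data and scikit-learn logistic regression as a stand-in for whatever model the authors fit (the abstract does not specify), and ignoring the inter-eye correlation adjustment used in the study; feature names and sizes are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 57  # eyes in the amyloid-PET subgroup of the study; data here are synthetic
y = rng.integers(0, 2, size=n)  # 1 = amyloid-PET positive, 0 = control

# Placeholder features: retinal (gliosis area, inner RNFL thicknesses) and plasma markers.
retinal = rng.normal(size=(n, 4)) + 0.8 * y[:, None]
plasma = rng.normal(size=(n, 2)) + 0.8 * y[:, None]
multimodal = np.hstack([retinal, plasma])

def auc_of(features):
    """Fit a simple logistic model and report the apparent (in-sample) AUC."""
    model = LogisticRegression(max_iter=1000).fit(features, y)
    return roc_auc_score(y, model.predict_proba(features)[:, 1])

print("retinal-only AUC:", round(auc_of(retinal), 3))
print("plasma-only AUC: ", round(auc_of(plasma), 3))
print("multimodal AUC:  ", round(auc_of(multimodal), 3))
```

In-sample AUCs like these are optimistic; the study's reported AUC of 0.97 comes from its own ROC analysis, not from this toy setup.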
3. Multi-step framework for glaucoma diagnosis in retinal fundus images using deep learning.
- Author
- Yi, Sanli and Zhou, Lingxiang
- Subjects
- OPTIC disc, MEDICAL sciences, FEATURE extraction, RETINAL imaging, DIAGNOSIS, DEEP learning
- Abstract
Glaucoma is one of the most common causes of blindness in the world. Screening for glaucoma from retinal fundus images with deep learning is now a common approach. In deep learning-based glaucoma diagnosis, the blood vessels within the optic disc interfere with the diagnosis, while fundus images also contain pathological information outside the optic disc. Integrating the original fundus image with a vessel-removed optic disc image can therefore improve diagnostic efficiency. In this paper, we propose a novel multi-step framework named MSGC-CNN for better glaucoma diagnosis. In the framework, (1) we combine glaucoma pathological knowledge with a deep learning model, fuse the features of the original fundus image and the optic disc region, from which blood-vessel interference is removed by U-Net, and make the glaucoma diagnosis based on the fused features; and (2) to address the characteristics of glaucoma fundus images, such as the small amount of data, high resolution, and rich feature information, we design a new feature extraction network, RA-ResNet, and combine it with transfer learning. To verify our method, we conduct binary classification experiments on three public datasets, Drishti-GS, RIM-ONE-R3, and ACRIMA, achieving accuracies of 92.01%, 93.75%, and 97.87%, respectively. The results demonstrate a significant improvement over earlier results. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
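Entry 3 fuses features from the original fundus image with features from a vessel-removed optic-disc crop before classification. The abstract does not give the fusion details, so the following is only an illustrative two-branch sketch in PyTorch, with generic ResNet-18 backbones standing in for the paper's RA-ResNet; the layer sizes and concatenation-based fusion are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoBranchFusion(nn.Module):
    """Toy two-branch classifier: one branch sees the full fundus image, the
    other a vessel-removed optic-disc crop; pooled features are concatenated
    before the glaucoma / non-glaucoma head."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.full_branch = models.resnet18(weights=None)
        self.disc_branch = models.resnet18(weights=None)
        feat_dim = self.full_branch.fc.in_features          # 512 for ResNet-18
        self.full_branch.fc = nn.Identity()
        self.disc_branch.fc = nn.Identity()
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, fundus, disc):
        f1 = self.full_branch(fundus)    # (N, 512)
        f2 = self.disc_branch(disc)      # (N, 512)
        return self.head(torch.cat([f1, f2], dim=1))

model = TwoBranchFusion()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```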
4. Special Commentary: Balancing Benefits and Risks: The Case for Retinal Images to Be Considered as Nonprotected Health Information for Research Purposes.
- Subjects
- RETINAL imaging
- Published
- 2025
- Full Text
- View/download PDF
5. Unveiling the intricacies of chronic kidney disease: From ocular manifestations to therapeutic frontiers.
- Author
- Kanbay, Mehmet, Guldan, Mustafa, Ozbek, Lasin, Copur, Sidar, Mallamaci, Francesca, and Zoccali, Carmine
- Subjects
- CHRONIC kidney failure, MACULAR degeneration, OCULAR manifestations of general diseases, KIDNEY diseases, RETINAL imaging
- Abstract
Background: Shared anatomical, histological and physiological pathways between the kidney and the eye are well documented, demonstrating that ocular manifestations serve as valuable prognostic indicators in chronic kidney disease (CKD), providing insights into disease severity and progression. Through non-invasive imaging modalities such as retinal fundus photography, early retinal microvascular alterations indicative of CKD progression can be detected, enabling timely intervention and risk stratification. However, the conclusions drawn from this review primarily demonstrate a strong or independent association between glaucoma or retinopathy and CKD. Results and Conclusion: Multiple shared pathophysiological events, including the renin-angiotensin-aldosterone system, have been implicated in the pathogenesis of the alterations in the eye and kidney. Patients with CKD are more likely to experience glaucoma, age-related macular degeneration, cataracts, uremic optic neuropathy and retinopathy. To establish the role of ocular manifestations in predicting CKD progression, it is crucial to address the limitations of correlational evidence and to explore the underlying causality through further research on common disease pathogenesis. Additionally, specific methods for risk stratification based on retinal changes, the effectiveness of timely interventions, and the development of predictive tools combining ocular and renal data are research topics of utmost importance for elucidating the bidirectional causality. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
6. Diabetic Retinopathy Detection Using Morlet Wavelet Transform Based Residual Network.
- Author
- Revathi, Garidepalli and Chandre, Shanker
- Subjects
- DIABETIC retinopathy, WAVELET transforms, RETINAL imaging, VISION disorders, DATA augmentation
- Abstract
Early detection of Diabetic Retinopathy (DR) is crucial to protect patients from the risk of blindness or vision loss caused by retinal damage due to long-term diabetes mellitus. However, existing detection models have several drawbacks, such as subtle differences between severity levels and poor image quality, which make the detection process ineffective. To overcome these limitations, a Morlet Wavelet Transform-based Residual Network (MWT-ResNet) is proposed to detect DR accurately for early diagnosis. The MWT enables multiscale analysis, which helps to analyze retinal images at different frequencies and scales and enhances the correct detection of lesions. The retinal images are acquired from two benchmark datasets and preprocessed to improve image contrast for precise lesion detection. The preprocessed retinal photos are then augmented by a data augmentation method to balance the data across severity classes and segmented by a marker-controlled watershed segmentation method. Finally, the proposed MWT-based ResNet model detects DR accurately by learning relevant information from the extracted multi-scale features. In experiments, the proposed MWT-ResNet method achieved accuracies of 98.36% and 0.983 on the IDRiD and Messidor datasets, respectively, which is higher than existing methods such as Gradient Boosting-ResNet (GB-ResNet). [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
7. DAUCD: Deep Attention U-Net for Cataract Detection Leveraging CNN Frameworks.
- Author
- Vadduri, Maneesha and P., Kuppusamy
- Subjects
- DEEP learning, RETINAL imaging, IMAGE analysis, CATARACT, BLOOD vessels, RETINAL blood vessels
- Abstract
Cataract remains the leading cause of blindness globally, accounting for nearly half of all cases. This study presents the Deep Attention U-Net for Cataract Diagnosis (DAUCD) model, leveraging advanced deep learning techniques to improve both the segmentation and classification of cataract in retinal images. The proposed DAUCD method integrates Attention U-Net architectures with pre-trained backbones (ResNet50, Inception-v3, and VGG19) for precise blood vessel segmentation, followed by the classification of segmented outputs using VGG16. The model achieved a classification accuracy of 98.24%, with a sensitivity of 99.77%, a specificity of 97.83%, and an AUC of 99.24%, particularly excelling with the ResNet50-based backbone. The dataset, curated from multiple sources including the cataract dataset, ODIR5K, eye-diseases-classification, and cataract eyes datasets, comprises a total of 10,444 fundus images. It was designed to support both segmentation and classification tasks, with images evenly distributed across cataract and non-cataract classes. This comprehensive dataset provided a strong foundation for validating the effectiveness and generalizability of the proposed DAUCD model. The findings of this research underscore the robustness and efficiency of the DAUCD model in medical image analysis, offering promising advancements in early detection and treatment outcomes for cataract patients. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
8. Diabetic retinopathy detection and severity classification using optimized deep learning with explainable AI technique.
- Author
- Lalithadevi, B. and Krishnaveni, S.
- Subjects
- CONVOLUTIONAL neural networks, MEDICAL sciences, DIABETIC retinopathy, PUBLIC hospitals, RETINAL imaging, DEEP learning
- Abstract
Diabetic Retinopathy (DR) is a serious consequence of prolonged diabetes that causes vision-threatening complications and irreversible blindness. In the early stage of DR, affected people do not notice any specific symptoms. Manual testing is a highly complicated task due to the variation and formation of tiny lesions in the retina. Since the lesion features are difficult to detect, classifying the different stages of DR has been a significant challenge. It is essential to have an effective method of detecting retinal pathologies during the screening process. Deep learning methods have a significant impact on identifying and diagnosing diseases at an early stage. In this paper, we propose an innovative OptiDex model that utilizes deep learning to achieve precise detection and classification of diabetic retinopathy severity levels. OptiDex uses an Explainable AI (XAI) framework that provides transparent decision-making and insights into the model's decision-making process. Additionally, we use an enhanced cat swarm optimization (ECSO) algorithm to optimize model performance. The performance of the OptiDex model is evaluated using both clinical and public datasets, showcasing its accuracy in detecting diabetic retinopathy and classifying severity levels with impressive results. Various morphological operations, hysteresis thresholding, and top-hat and black-hat transformations are applied to segment lesion abnormalities. Then, a combined Nas-Mob architecture is used to extract the most relevant features from the segmented retinal images, uncovering tiny lesions and supporting better classification. The DCNN parameters are optimized through a nature-inspired metaheuristic, the ECSO algorithm, which reduces the bias values, updates the network weights, and yields better accuracy. Moreover, heatmap images are generated using gradient-weighted class activation maps. Finally, extensive experiments were conducted on clinical and public datasets. We compute the performance metrics of the developed OptiDex model on clinical datasets obtained from SRM Hospital and on public benchmark datasets, namely APTOS, IDRiD, and Messidor, for training, validation and testing. According to the experimental outcomes, our proposed OptiDex model (DCNN + ECSO) obtained accuracy, sensitivity and specificity of 97.65%, 96.46%, and 93.45%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Deep learning based MA detection with modified ResNet-50.
- Author
- PS, Bindhya, R, Chitra, and VS, Bibin Raj
- Subjects
- HYPERGLYCEMIA, MACHINE learning, DEEP learning, RETINAL imaging, SUPPORT vector machines
- Abstract
A deep understanding of retinal images is used to identify vascular diseases such as Diabetic Retinopathy (DR) in individuals who experience high blood sugar levels and high blood pressure. DR is a progressive disease that starts from minute red saccular outpouchings on blood vessels known as Micro-Aneurysms (MAs). DR can be cured by eradicating MAs on the retina. Detecting microaneurysms (MAs) in retinal digital images is a challenging task due to various factors, including the diverse sizes, shapes, noise levels, and contrasts exhibited by the images in the publicly available Diabetic Retinopathy (DR) datasets. Moreover, the limited number of labelled examples in these datasets and the inherent difficulty deep learning algorithms face in accurately identifying small objects in retinal digital images further contribute to the complexity of MA detection. Here we propose deep learning-based MA detection using a modified ResNet-50 with a Support Vector Machine. The suggested approach was trained, tuned, and evaluated, both qualitatively and quantitatively, using publicly available datasets such as E-ophtha MA and DIARETDB1. The suggested approach demonstrates improved outcomes in terms of time efficiency and resource utilisation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
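Entry 9 pairs a modified ResNet-50 with an SVM. As a rough, hedged illustration of the generic pattern (a CNN used as a frozen feature extractor, an SVM as the classifier) rather than the authors' exact modification, one could do something like the sketch below; the data loading and the ResNet-50 "modification" are omitted, and the tensors are synthetic stand-ins for MA / non-MA patches.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Frozen ResNet-50 backbone used purely as a feature extractor (2048-D output).
backbone = models.resnet50(weights=None)   # weights="IMAGENET1K_V2" to use pretraining
backbone.fc = nn.Identity()
backbone.eval()

def extract_features(images):
    """images: float tensor (N, 3, 224, 224) of normalized fundus patches."""
    with torch.no_grad():
        return backbone(images).numpy()

# Synthetic stand-ins; replace with real MA / non-MA patches from E-ophtha MA or DIARETDB1.
X_train = extract_features(torch.randn(32, 3, 224, 224))
y_train = torch.randint(0, 2, (32,)).numpy()
X_test = extract_features(torch.randn(8, 3, 224, 224))
y_test = torch.randint(0, 2, (8,)).numpy()

clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print("toy accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```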
10. Advances in retinal imaging biomarkers for the diagnosis of cerebrovascular disease.
- Author
- Zhang, Yier, Zhao, Ting, Ye, Ling, Yan, Sicheng, Shentu, Wuyue, Lai, Qilun, and Qiao, Song
- Subjects
- CEREBRAL small vessel diseases, CEREBROVASCULAR disease, OPTICAL coherence tomography, RETINAL imaging, STROKE
- Abstract
The increasing incidence and mortality rates of cerebrovascular disease impose a heavy burden on both patients and society. Retinal imaging techniques, such as fundus photography, optical coherence tomography, and optical coherence tomography angiography, can be used for rapid, non-invasive evaluation of cerebral microcirculation and brain function since the retina and the central nervous system share similar embryonic origin characteristics and physiological features. This article aimed to review retinal imaging biomarkers related to cerebrovascular diseases and their applications in cerebrovascular diseases (stroke, cerebral small vessel disease [CSVD], and vascular cognitive impairment [VCI]), thus providing reference for early diagnosis and prevention of cerebrovascular diseases. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. A novel chaotic weighted EHO-based methodology for retinal vessel segmentation.
- Author
- Ashanand and Kaur, Manpreet
- Subjects
- RETINAL blood vessels, THRESHOLDING algorithms, IMAGE segmentation, RETINAL imaging, STATISTICAL correlation, MATHEMATICAL morphology
- Abstract
The retinal image segmentation process deals with problems such as spurious vascularisation and thin-vessel detection. In this paper, a three-step methodology is proposed for retinal vessel segmentation. In the first step, RGB to YIQ conversion is performed. In the second step, the Y component is enhanced. A novel Chaotic Weighted Elephant Herding Optimization (CWEHO) is proposed to optimize the clip-limit and block-size values of Contrast Limited Adaptive Histogram Equalization (CLAHE). CWEHO-based CLAHE, along with morphological operations, a non-local means filter, and a median filter, is applied to enhance the retinal images. In the third step, thin- and thick-vessel segmentation is performed. Top-hat transformation, the Otsu thresholding algorithm, and vessel point selection are applied for thick-vessel extraction. The first-order Gaussian derivative in conjunction with a matched filter is used to extract thin vessels. The DRIVE and HRF datasets are used to assess the effectiveness of the proposed methodology. The average values of segmentation accuracy, specificity, sensitivity, and Matthews Correlation Coefficient (MCC) are 0.9650, 0.9895, 0.7007, and 0.7650, respectively, for observer 1 and 0.9696, 0.9912, 0.7390, and 0.7901 for observer 2 on the DRIVE dataset; the corresponding values for the HRF dataset are 0.9592, 0.9839, 0.6850, and 0.7116. Compared to state-of-the-art methods, the proposed segmentation methodology provides better results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
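Entry 11 tunes CLAHE's clip limit and tile size with a chaotic elephant-herding optimizer and then extracts thick vessels with a top-hat transformation plus Otsu thresholding. A simplified OpenCV sketch of the enhancement and thick-vessel stage is given below; the clip limit and tile size are fixed placeholder values rather than CWEHO-optimized ones, the YCrCb luma channel stands in for the paper's YIQ Y component (OpenCV has no YIQ conversion), and the matched-filter thin-vessel stage is omitted.

```python
import cv2
import numpy as np

def enhance_and_extract_thick_vessels(rgb, clip_limit=2.0, tile=(8, 8)):
    """Rough stand-in for the paper's pipeline: luma extraction, CLAHE
    enhancement, top-hat transform, and Otsu thresholding."""
    # Luma channel (YCrCb here; the paper enhances the Y channel of YIQ).
    y = cv2.cvtColor(rgb, cv2.COLOR_RGB2YCrCb)[:, :, 0]

    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    enhanced = clahe.apply(y)

    # Top-hat emphasises thin bright structures after inverting the dark vessels.
    inverted = cv2.bitwise_not(enhanced)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(inverted, cv2.MORPH_TOPHAT, kernel)

    _, vessels = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return vessels

# Toy input; in practice load a fundus image, e.g. cv2.imread("retina.png")[:, :, ::-1]
dummy = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)
mask = enhance_and_extract_thick_vessels(dummy)
print(mask.shape, mask.dtype)
```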
12. Hybrid generative model for grading the severity of diabetic retinopathy images.
- Author
- Bhuvaneswari, R., Diviya, M., Subramanian, M., Maranan, Ramya, and Josphineleela, R.
- Subjects
- DIABETIC retinopathy, CONVOLUTIONAL neural networks, GAUSSIAN mixture models, IMAGE recognition (Computer vision), RETINAL imaging
- Abstract
One of the common eye conditions affecting patients with diabetes is diabetic retinopathy (DR). It is characterised by progressive impairment of the blood vessels as blood glucose levels increase. Grading remains challenging because of intra-class variations and imbalanced data distributions in the retinal images. Traditional machine learning techniques utilise hand-engineered features for classifying the affected retinal images. As convolutional neural networks produce better image classification accuracy on many medical images, this work utilises a CNN-based feature extraction method. These features are used to build a Gaussian mixture model (GMM) for each class, mapping the CNN features to log-likelihood vector spaces. Since the Gaussian mixture model can be realised as a blend of parametric and nonparametric density models, and offers flexibility in capturing different data distributions, probabilistic outputs, interpretability, efficient parameter estimation, and robustness to outliers, the proposed model aims to provide a smooth approximation of the underlying feature distribution for training. These vectors are then used to train an SVM classifier. Experimental results illustrate the efficacy of the proposed model, with accuracies of 86.3% and 89.1%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
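The hybrid model in entry 12 fits one GMM per DR grade on CNN features and feeds the resulting per-class log-likelihood vector to an SVM. A compact scikit-learn sketch of that mapping is shown below, with random vectors standing in for the CNN features and the number of mixture components chosen arbitrarily.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_classes, feat_dim = 5, 64          # e.g. 5 DR grades, 64-D CNN features (placeholders)

# Synthetic "CNN features" with class-dependent means.
X_train = np.vstack([rng.normal(loc=c, size=(100, feat_dim)) for c in range(n_classes)])
y_train = np.repeat(np.arange(n_classes), 100)
X_test = np.vstack([rng.normal(loc=c, size=(20, feat_dim)) for c in range(n_classes)])
y_test = np.repeat(np.arange(n_classes), 20)

# One GMM per class, fitted on that class's training features.
gmms = [GaussianMixture(n_components=3, random_state=0).fit(X_train[y_train == c])
        for c in range(n_classes)]

def to_loglik_vector(X):
    """Map each feature vector to its per-class log-likelihoods (one column per GMM)."""
    return np.column_stack([g.score_samples(X) for g in gmms])

clf = SVC(kernel="rbf").fit(to_loglik_vector(X_train), y_train)
print("toy accuracy:", accuracy_score(y_test, clf.predict(to_loglik_vector(X_test))))
```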
13. In vivo scanning laser fundus and high-resolution OCT imaging of retinal ganglion cell injury in a non-human primate model with an activatable fluorescent-labeled TAT peptide probe.
- Author
- Qiu, Xudong, Gammon, Seth T., Rasmussen, Carol, Pisaneschi, Federica, Kim, Charlene B. Y., Ver Hoeve, James, Millward, Steven W., Barnett, Edward M., Nork, T. Michael, Kaufman, Paul L., and Piwnica-Worms, David
- Subjects
- AXONAL transport, OPTICAL coherence tomography, INTRAVITREAL injections, INTRAOCULAR pressure, NERVE fibers, RETINAL ganglion cells, RETINAL imaging
- Abstract
The optical imaging agent TcapQ488 has enabled imaging of retinal ganglion cell (RGC) injury in vivo in rodents and has potential as an effective diagnostic probe for early detection and intervention monitoring in glaucoma patients. In the present study, we investigated TcapQ488 in non-human primates (NHPs) to identify labeling efficacy and early signals of injured RGC, to determine species-dependent changes in RGC probe uptake and clearance, and to determine dose-limiting toxicities. Doses of 3, 6, and 12 nmol of TcapQ488 were delivered intravitreally to normal healthy NHP eyes and eyes that had undergone hemiretinal endodiathermy axotomy (HEA) in the inferior retina. Post-injection fundus fluorescence imaging using a Spectralis imaging platform (Heidelberg Engineering) documented TcapQ488 activation in RGC cell bodies. Optical coherence tomography (OCT), slit-lamp examinations, intraocular pressure measurements, and visual electrophysiology testing were performed to monitor probe tolerability. For comparison, a negative control, non-cleavable, non-quenched probe (dTcap488, 6 nmol), was delivered intravitreally to a normal healthy eye. In normal healthy eyes, intravitreal injection of 3 nmol of TcapQ488 was well-tolerated, while 12 nmol of TcapQ488 to the healthy eye caused extensive probe activation in the ganglion cell layer (GCL) and eventual retinal nerve fiber layer thinning. In HEA eyes, the HEA procedure followed by intravitreal TcapQ488 (3 nmol) injection resulted in probe activation within cell bodies in the GCL, confined to the HEA-treated inferior retina, indicating cell injury and slow axonal transport in the GCL. However, in contrast to rodents, a vitreal haze that lasted 2–12 weeks obscured rapid high-resolution imaging of the fundus. By contrast, intravitreal TcapQ488 injection prior to the HEA procedure led to minimal probe labeling in the GCL. The results of the dTcap488 control experiments indicated that fast axonal transport carried the probe out of the retina after cell body uptake. No evidence of pan-retinal toxicity or loss of retino-cortical function was detected in any of the three NHPs tested. Overall, these data provide evidence of TcapQ488 activation, without toxicity, in NHP HEA eyes that had been intravitreally injected with 3 nmol of the probe. Compared to rodents, unexpectedly rapid axonal transport in the NHPs reduced the capacity to visualize RGC cell bodies and axons through the backdrop of an intravitreal haze. Nonetheless, although intravitreal clearance rates did not scale to NHPs, HEA-induced reductions in axonal transport enhanced probe visualization in the cell body. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. The influence of rotational error and axial shift of toric intraocular lenses on residual astigmatism.
- Author
- Gargallo, Diana, Remón, Laura, Ares, Jorge, and Castro-Alonso, Francisco J.
- Subjects
- REFRACTIVE errors, RAY tracing, CATARACT surgery, RETINAL imaging, ASTIGMATISM
- Abstract
Purpose: Accurate alignment of toric intraocular lenses (T-IOLs) in cataract surgery is crucial for good visual outcomes. The purpose of this study was to evaluate the influence of rotation, axial shift, and their combined effects on the refractive error and image quality of a wide range of T-IOL powers (from +1.50 D to +6.00 D cylinder) and two pupil diameters (3.34 and 4.44 mm). Methods: Numerical ray tracing was used to quantify the residual error. Simulated retinal images and Visual Strehl (VS) ratios were calculated to evaluate image quality. Results: First, T-IOL rotation showed better agreement with Holladay's formula than with the 3.33% rule. Second, axial displacement resulted in acceptable residual cylinder (<0.50 D) across all examined cylinder powers. Third, concerning the combined effects, the influence of axial shift on residual cylinder becomes negligible when rotation errors exceed 2.5°. Fourth, a pupil-dependent nonlinear relationship was noted for image quality caused by both types of misalignment. Conclusions: The 3.33% rule was confirmed as a reasonable approximation for the residual astigmatism caused by rotation of T-IOLs. The influence of axial shift on residual astigmatism becomes insignificant when rotation is also present. The image quality analysis confirms that 30° of rotation is enough to invalidate the compensation benefit of a T-IOL in comparison with a spherical intraocular lens. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Evaluating deep learning models for classifying OCT images with limited data and noisy labels.
- Author
- Miladinović, Aleksandar, Biscontin, Alessandro, Ajčević, Miloš, Kresevic, Simone, Accardo, Agostino, Marangoni, Dario, Tognetto, Daniele, and Inferrera, Leandro
- Subjects
- IMAGE recognition (Computer vision), OPTICAL coherence tomography, RETINAL imaging, RETINAL diseases, DISEASE management
- Abstract
The use of deep learning for OCT image classification could enhance the diagnosis and monitoring of retinal diseases. However, challenges like variability in retinal abnormalities, noise, and artifacts in OCT images limit its clinical use. Our study aimed to evaluate the performance of various deep learning (DL) architectures in classifying retinal pathologies versus healthy cases based on OCT images, under data scarcity and label noise. We examined five DL architectures: ResNet18, ResNet34, ResNet50, VGG16, and InceptionV3. Fine-tuning of the pre-trained models was conducted on 5526 OCT images and reduced subsets down to 21 images to evaluate performance under data scarcity. The performance of models fine-tuned on subsets with label noise levels of 10%, 15%, and 20% was evaluated. All DL architectures achieved high classification accuracy (> 90%) with training sets of 345 or more images. InceptionV3 achieved the highest classification accuracy (99%) when trained on the entire training set. However, classification accuracy decreased and variability increased as sample size decreased. Label noise significantly affected model accuracy. Compensating for labeling errors of 10%, 15%, and 20% requires approximately 4, 9, and 14 times more images in the training set to reach the performance of 345 correctly labeled images. The results showed that DL models fine-tuned on sets of 345 or more OCT images can accurately classify retinal pathologies versus healthy controls. Our findings highlight that while mislabeling errors significantly impact classification performance in OCT analysis, this can be effectively mitigated by increasing the training sample size. By addressing data scarcity and labeling errors, our research aims to improve the real-world application and accuracy of retinal disease management. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
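The study in entry 15 measures how training-set size and label noise interact. The experiment is easy to mimic in miniature: inject symmetric label noise at a given rate into a training subset and watch test accuracy. The sketch below uses a plain scikit-learn classifier on synthetic data rather than the fine-tuned CNNs of the paper, purely to show the bookkeeping; the sizes and noise rates echo, but are not taken from, the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=3000, n_features=30, n_informative=10, random_state=1)
X_train, y_train, X_test, y_test = X[:2000], y[:2000], X[2000:], y[2000:]

def noisy(labels, rate):
    """Flip a fraction `rate` of binary labels (symmetric label noise)."""
    flipped = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    flipped[idx] = 1 - flipped[idx]
    return flipped

for n_train in (50, 345, 2000):          # training-set sizes; 345 echoes the paper
    for rate in (0.0, 0.10, 0.20):       # label-noise levels
        yn = noisy(y_train[:n_train], rate)
        acc = accuracy_score(
            y_test,
            LogisticRegression(max_iter=2000).fit(X_train[:n_train], yn).predict(X_test),
        )
        print(f"n={n_train:4d}  noise={rate:.0%}  test acc={acc:.3f}")
```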
16. EfficientNet in Diabetic Retinopathy Detection: Impact of Pre-Training Scale.
- Author
- Che Azemin, Mohd Zulfaezal, Mohd Tamrin, Mohd Izzuddin, Salam, Adzura, and Yusof, Firdaus
- Subjects
- DIABETIC retinopathy, DEEP learning, VISION disorders, RETINAL imaging, PEOPLE with diabetes
- Abstract
Introduction: Diabetic retinopathy (DR) is a leading cause of vision loss in diabetic patients worldwide. Timely and accurate detection of referable diabetic retinopathy (RDR) is essential for preventing blindness. In this context, deep learning technologies offer promising advancements in the automated detection of DR. This study explores the effect of pre-training dataset sizes on the diagnostic accuracy of the EfficientNet architecture for RDR identification. Methods: We utilised the EyePACS dataset containing 23,252 retinal images, including 4,343 RDR instances, for training our models. These models were then tested on the APTOS dataset, comprising 3,662 images with 1,487 cases of RDR. Two variants of EfficientNet, one pre-trained on an ImageNet-1K dataset and the other on an ImageNet-21K dataset, were compared based on their sensitivity, specificity, and AUC. Results: The EfficientNet variant pretrained on the ImageNet-1K dataset achieved a sensitivity of 97.71%, specificity of 83.13%, and an AUC of 0.901. In comparison, the variant pre-trained on the ImageNet-21K dataset demonstrated a slightly improved sensitivity of 98.79%, but a reduced specificity of 80.83%, with a comparable AUC of 0.898. Conclusion: Our study demonstrates that deep learning is a valuable tool for RDR detection, with pre-training on larger datasets resulting in modest improvements in sensitivity. However, the difference in pre-training dataset size did not substantially alter the AUC, indicating that additional factors may contribute to the overall effectiveness of the models. These results emphasize the potential of deep learning in enhancing DR screening and diagnosis, with implications for reducing the burden of diabetes-related vision loss. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Evaluation of vessel diameters in processed medical retinal images.
- Author
- Obreja, Cristian-Dragoș
- Subjects
- IMAGE segmentation, IMAGE analysis, IMAGE processing, RETINAL imaging, DIABETIC retinopathy
- Abstract
This study focuses on evaluating image processing techniques for measuring retinal vessel diameters, a critical aspect of medical image analysis for diagnosing vascular abnormalities such as diabetic retinopathy. Four algorithms, Canny Edge Detection, Marr-Hildreth Filter, Watershed Segmentation, and Chan-Vese Algorithm, were assessed for their segmentation performance and measurement accuracy. A dataset of 70 retinal images from the DRIVE database, comprising both healthy and diabetic retinopathy cases, was used. Each algorithm was implemented in MATLAB and tailored to address challenges like noise, intensity variations, and weak boundaries. Vessel diameters were calculated using a custom MATLAB algorithm based on the full width at half maximum (FWHM) of intensity profiles, with linear interpolation refining the measurements. This work highlights the potential and limitations of these algorithms in achieving accurate and reliable vessel segmentation for medical imaging applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
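Entry 17 measures vessel diameters from the full width at half maximum (FWHM) of intensity profiles, refined by linear interpolation, using a custom MATLAB routine. A small NumPy re-implementation of that idea (not the author's code) is sketched below; it assumes a bright-vessel profile, so dark vessels on a bright fundus background would be inverted first, and the pixel size is a placeholder.

```python
import numpy as np

def fwhm_width(profile, pixel_size_um=1.0):
    """Estimate vessel diameter from a 1-D intensity profile using the full
    width at half maximum, refining the half-level crossings by linear
    interpolation between neighbouring samples."""
    profile = np.asarray(profile, dtype=float)
    baseline = profile.min()
    peak_idx = int(np.argmax(profile))
    half = baseline + (profile[peak_idx] - baseline) / 2.0

    # Walk left from the peak to the first sample below the half level.
    left = peak_idx
    while left > 0 and profile[left] > half:
        left -= 1
    x_left = left + (half - profile[left]) / (profile[left + 1] - profile[left])

    # Walk right from the peak.
    right = peak_idx
    while right < len(profile) - 1 and profile[right] > half:
        right += 1
    x_right = (right - 1) + (half - profile[right - 1]) / (profile[right] - profile[right - 1])

    return (x_right - x_left) * pixel_size_um

# Synthetic Gaussian-like vessel profile (bright vessel on dark background).
x = np.arange(50)
profile = np.exp(-((x - 25) ** 2) / (2 * 4.0 ** 2))
print(round(fwhm_width(profile), 2))  # ~9.4 samples, i.e. 2.355 * sigma for a Gaussian
```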
18. Velocity coding in the central brain of bumblebees.
- Author
- Jaske, Bianca, Tschirner, Katja, Strube-Bloss, Martin Fritz, and Pfeiffer, Keram
- Subjects
- OPTICAL flow, FREQUENCY tuning, ANGULAR velocity, FLOW velocity, RETINAL imaging
- Abstract
Moving animals experience wide-field optic flow due to the displacement of the retinal image during motion. These cues provide information about self-motion and are important for flight control and stabilization, and for more complex tasks like path integration. Although in honeybees and bumblebees the use of wide-field optic flow in behavioral tasks is well investigated, little is known about the underlying neuronal processing of these cues. Furthermore, there is a discrepancy between the temporal frequency tuning observed in most motion-sensitive neurons described so far from the optic lobe of insects and the velocity tuning that has been shown for many behaviors. Here, we investigated response properties of motion-sensitive neurons in the central brain of bumblebees. Extracellular recordings allowed us to present a large number of stimuli to probe the spatiotemporal tuning of these neurons. We presented moving gratings that simulated either front-to-back or back-to-front optic flow and found three response types. Direction-selective responses of one of the groups matched those of TN-neurons, which provide optic flow information to the central complex, whereas the other groups contained neurons with purely excitatory responses that were either selective or nonselective for stimulus direction. Most recorded units showed velocity-coding properties at lower angular velocities, but showed spatial frequency-dependent responses at higher velocities. Based on behavioral data, neuronal modeling work has previously predicted the existence of nondirection-selective neurons with such properties. Our data now provide physiological evidence for these neurons and show that neurons with TN-like properties exhibit a similar velocity-dependent coding. NEW & NOTEWORTHY: Using extracellular recordings, we show that neurons in the central brain and central complex of bumblebees show velocity coding properties. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. VDMNet: A Deep Learning Framework with Vessel Dynamic Convolution and Multi-Scale Fusion for Retinal Vessel Segmentation.
- Author
- Xu, Guiwen, Hu, Tao, and Zhang, Qinghua
- Subjects
- OPTICAL coherence tomography, RETINAL imaging, DEEP learning, RETINAL blood vessels, ANGIOGRAPHY
- Abstract
Retinal vessel segmentation is crucial for diagnosing and monitoring ophthalmic and systemic diseases. Optical Coherence Tomography Angiography (OCTA) enables detailed imaging of the retinal microvasculature, but existing methods for OCTA segmentation face significant limitations, such as susceptibility to noise, difficulty in handling class imbalance, and challenges in accurately segmenting complex vascular morphologies. In this study, we propose VDMNet, a novel segmentation network designed to overcome these challenges by integrating several advanced components. Firstly, we introduce the Fast Multi-Head Self-Attention (FastMHSA) module to effectively capture both global and local features, enhancing the network's robustness against complex backgrounds and pathological interference. Secondly, the Vessel Dynamic Convolution (VDConv) module is designed to dynamically adapt to curved and crossing vessels, thereby improving the segmentation of complex morphologies. Furthermore, we employ the Multi-Scale Fusion (MSF) mechanism to aggregate features across multiple scales, enhancing the detection of fine vessels while maintaining vascular continuity. Finally, we propose Weighted Asymmetric Focal Tversky Loss (WAFT Loss) to address class imbalance issues, focusing on the accurate segmentation of small and difficult-to-detect vessels. The proposed framework was evaluated on the publicly available ROSE-1 and OCTA-3M datasets. Experimental results demonstrated that our model effectively preserved the edge information of tiny vessels and achieved state-of-the-art performance in retinal vessel segmentation across several evaluation metrics. These improvements highlight VDMNet's superior ability to capture both fine vascular details and overall vessel connectivity, making it a robust solution for retinal vessel segmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
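Among VDMNet's components in entry 19, the loss is the easiest to illustrate: the Weighted Asymmetric Focal Tversky Loss builds on the focal Tversky loss, which penalises false negatives more than false positives and down-weights easy examples. The abstract does not give the exact weighting, so the PyTorch sketch below is the plain focal Tversky form with assumed α, β, γ values, not the paper's WAFT variant.

```python
import torch

def focal_tversky_loss(probs, target, alpha=0.3, beta=0.7, gamma=0.75, eps=1e-6):
    """Focal Tversky loss for binary vessel segmentation.
    probs, target: tensors of shape (N, 1, H, W); probs in [0, 1]."""
    probs = probs.reshape(probs.size(0), -1)
    target = target.reshape(target.size(0), -1).float()
    tp = (probs * target).sum(dim=1)
    fp = (probs * (1 - target)).sum(dim=1)
    fn = ((1 - probs) * target).sum(dim=1)
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return ((1 - tversky) ** gamma).mean()

# Toy usage: sparse vessel mask, random logits.
logits = torch.randn(2, 1, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.9).float()
print(focal_tversky_loss(torch.sigmoid(logits), mask))
```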
20. Detection of Disease Features on Retinal OCT Scans Using RETFound.
- Author
- Du, Katherine, Nair, Atharv Ramesh, Shah, Stavan, Gadari, Adarsh, Vupparaboina, Sharat Chandra, Bollepalli, Sandeep Chandra, Sutharahan, Shan, Sahel, José-Alain, Jana, Soumya, Chhablani, Jay, and Vupparaboina, Kiran Kumar
- Subjects
- MACULAR degeneration, MACHINE learning, OPTICAL coherence tomography, NOSOLOGY, RETINAL diseases
- Abstract
Eye diseases such as age-related macular degeneration (AMD) are major causes of irreversible vision loss. Early and accurate detection of these diseases is essential for effective management. Optical coherence tomography (OCT) imaging provides clinicians with in vivo, cross-sectional views of the retina, enabling the identification of key pathological features. However, manual interpretation of OCT scans is labor-intensive and prone to variability, often leading to diagnostic inconsistencies. To address this, we leveraged the RETFound model, a foundation model pretrained on 1.6 million unlabeled retinal OCT images, to automate the classification of key disease signatures on OCT. We finetuned RETFound and compared its performance with the widely used ResNet-50 model, using single-task and multitask modes. The dataset included 1770 labeled B-scans with various disease features, including subretinal fluid (SRF), intraretinal fluid (IRF), drusen, and pigment epithelial detachment (PED). The performance was evaluated using accuracy and AUC-ROC values, which ranged across models from 0.75 to 0.77 and 0.75 to 0.80, respectively. RETFound models display comparable specificity and sensitivity to ResNet-50 models overall, making it also a promising tool for retinal disease diagnosis. These findings suggest that RETFound may offer improved diagnostic accuracy and interpretability for specific tasks, potentially aiding clinicians in more efficient and reliable OCT image analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. On the Elusive but Vital Difference Between Privileged and Optimal Viewpoints.
- Author
- Dolev, Yuval
- Subjects
- PHILOSOPHICAL literature, FORM perception, SCIENTIFIC literature, RETINAL imaging, PHENOMENOLOGY
- Abstract
I argue that two theses, which get conflated tacitly but frequently in both the philosophical and the scientific literature on perception, must be distinguished. The first is that there are optimal viewpoints, viewpoints from which an object's shape is more readily discernable than from others. The second is that there are privileged viewpoints, viewpoints that alone secure the veridicality of perception. I claim that phenomenology establishes the ubiquitousness of optimal viewpoints, but that the notion of privileged viewpoints is indefensible. It emerges when the empirical investigation of the mechanism of perception, and specifically of the role of retinal images, becomes the basis for the phenomenology of perception. Both the notion of a privileged viewpoint and the models it serves, such as the two-step model, are, I argue, untenable. To emphasize: the claims are phenomenological, not empirical, and so cannot be confirmed or refuted by empirical evidence. Optimal viewpoints are further explored by critically examining Husserl's notion of a "sum of optima" and assessing it in the context of his claim that normal viewpoints are optimal. The paper ends with some thoughts on what the relationship between the science and the phenomenology of vision ought to be. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Application of optical coherence tomography in neurodegenerative diseases: a focus on Parkinson's, Alzheimer's, and Schizophrenia in Bulgarian patients.
- Author
- Levi, Alina Y., Cherninkova, Sylvia, Milanova, Vihra, Haykin, Vasil M., and Oscar, Alexander H.
- Subjects
- ALZHEIMER'S disease, PARKINSON'S disease, ALZHEIMER'S patients, DISEASE duration, RETINAL imaging, OPTICAL coherence tomography
- Abstract
This study investigates retinal alterations in patients with Alzheimer's disease (AD), Parkinson's disease (PD), and schizophrenia (SZ) using Optical Coherence Tomography (OCT). The primary objective was to determine whether these neurodegenerative diseases manifest in measurable retinal changes and to assess the impact of disease duration, medication use, and cognitive decline on these alterations. A cross-sectional observational design was employed, including 132 patients and age- and gender-matched controls. OCT imaging was performed using the Topcon 3D OCT-1 Maestro 2, focusing on the retinal nerve fibre layer (RNFL), ganglion cell complex (GCC), and macular thickness. Significant retinal thinning was observed in the AD and PD groups, correlating with disease severity and cognitive decline, and was more pronounced with longer disease duration. In contrast, no significant retinal changes were identified in the SZ group. The study also explored the effects of different drug classes, revealing correlations between specific medications and retinal parameters, particularly in the AD and PD cohorts. To our knowledge, this is the first study of its kind conducted with Bulgarian patients. These findings suggest that OCT may serve as a non-invasive biomarker for neurodegeneration in AD and PD, though its utility in SZ remains limited. Future research should aim to standardize OCT protocols, further investigate the potential of retinal imaging in tracking neurodegenerative disease progression, and explore its integration with other neuroimaging techniques for a more comprehensive diagnostic and monitoring approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. The "triple-layer sign": an optical coherence tomography signature for the detection of non-exudative macular neovascularization.
- Author
- Capuano, Vittorio, Sacconi, Riccardo, Miere, Alexandra, Borrelli, Enrico, Amoroso, Francesca, Costanzo, Eliana, Parravano, Mariacristina, Fragiotta, Serena, Bandello, Francesco, Souied, Eric H., and Querques, Giuseppe
- Subjects
- MACULAR degeneration, OPTICAL coherence tomography, RHODOPSIN, RETINAL imaging, SENSITIVITY & specificity (Statistics)
- Abstract
Purpose: To assess the sensitivity and specificity of the "triple layer sign" (TLS) (retinal pigment epithelium (RPE), neovascular tissue, and Bruch's membrane) on structural optical coherence tomography (OCT) images for the diagnosis of treatment-naïve non-exudative type-1 macular neovascularization (NE-MNV) in age-related macular degeneration (AMD). Design: Cross-sectional study. Methods: Two masked retinal experts evaluated the presence of the TLS in eyes with NE-MNV and in controls with an RPE elevation without exudation due to causes other than NE-MNV in AMD [e.g., medium-large drusen, cuticular drusen, basal laminar deposits (BlamD)]. Results: 130 eyes of 98 consecutive patients met the study criteria; 40 eyes of 40 patients satisfied the criteria for inclusion in the NE-MNV secondary to AMD group (27 females, 13 males, with a mean age of 73.8 ± 8.0 years), and 90 eyes of 58 patients met the criteria for inclusion in the control group (31 eyes in the medium-to-large drusen sub-group, 32 eyes in the cuticular drusen sub-group, and 27 eyes in the BlamD sub-group). The TLS was observed in 39/40 patients with NE-MNV and 8/90 controls. The sensitivity and specificity of the TLS for the diagnosis of NE-MNV were 97% and 91%, respectively. Conclusions: The TLS on OCT demonstrated high sensitivity and specificity values in detecting treatment-naive type 1 NE-MNV. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
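The sensitivity and specificity quoted in entry 23 follow directly from the reported counts (TLS seen in 39 of 40 NE-MNV eyes and in 8 of 90 controls), as this quick check shows.

```python
# Counts reported in the abstract of entry 23.
tls_positive_in_disease, disease_eyes = 39, 40
tls_positive_in_controls, control_eyes = 8, 90

sensitivity = tls_positive_in_disease / disease_eyes                     # true-positive rate
specificity = (control_eyes - tls_positive_in_controls) / control_eyes   # true-negative rate

print(f"sensitivity = {sensitivity:.1%}")  # 97.5%, reported as 97%
print(f"specificity = {specificity:.1%}")  # 91.1%, reported as 91%
```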
24. Impact of preocular and ocular circulatory dynamics on the vascular density of retinal capillary plexuses and choriocapillaris.
- Author
- Vallée, Rodolphe, Körpe, Dilsah, Vallée, Jean-Noël, Tsiropoulos, Georgios N., Gallo Castro, Daniela, Mantel, Irmela, Pournaras, Constantin J., and Ambresin, Aude
- Subjects
- FLUORESCENCE angiography, LAMINAR flow, TYPE 1 diabetes, TYPE 2 diabetes, RETINAL imaging
- Abstract
Purpose: To highlight the influence of preocular and ocular vascular circulatory dynamics on the vascular density (VD) of the retinal capillary plexuses (RCPs) and choriocapillaris (CC) in patients with and without cardiovascular risk (CVR) factors. Methods: A retrospective observational study in patients with and without CVR factors (type 1 and 2 diabetes, arterial hypertension, and hypercholesterolemia). The fluorescein angiography (FA) and indocyanine green angiography (ICGA) circulatory times were the arterial time (FAAT), the start (FAstartLF) and end (FAendLF) of laminar flow, and the ICGA arterial time (ICGAAT), respectively. The OCT angiography VDs were those of the superficial (VDSCP) and deep (VDDCP) RCPs and of the CC (VDCC). Correlation and regression analyses were performed after adjusting for confounding factors. Results: 177 eyes of 177 patients (mean age: 65.2 ± 15.9 years; n = 92 with and 85 without CVR) were included. VDSCP and VDDCP were significantly inversely correlated with FAAT, FAstartLF and FAendLF, as was VDCC with ICGAAT. Correlations were stronger in patients without CVR than in those with CVR. CVR, FAAT, FAstartLF and FAendLF were more strongly correlated with VDDCP than with VDSCP. FAAT, FAstartLF and FAendLF significantly impacted VDSCP and VDDCP, and ICGAAT likewise impacted VDDCP. VDDCP was most strongly impacted by FAAT and FAstartLF. Conclusion: Ocular and preocular circulatory dynamics significantly impacted RCP and CC VDs, especially the deep RCP. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Abetalipoproteinemia with angioid streaks, choroidal neovascularization, atrophy, and extracellular deposits revealed by multimodal retinal imaging.
- Author
- Bijon, Jacques, Hussain, M. Mahmood, Bredefeld, Cindy L., Boesze-Battaglia, Kathleen, Freund, K. Bailey, and Curcio, Christine A.
- Subjects
- RETINAL ganglion cells, MACULAR degeneration, OPTICAL coherence tomography, RHODOPSIN, RETINAL imaging
- Abstract
Purpose: Abetalipoproteinemia (ABL, MIM 200,100) is a rare autosomal recessive disorder caused by nonfunctional microsomal triglyceride transfer protein, leading to the absence of apolipoprotein B-containing lipoproteins in plasma and a retinitis pigmentosa-like fundus. The MTTP gene is expressed in the retinal pigment epithelium (RPE) and ganglion cells of the human retina. Understanding ABL pathophysiology would benefit from new cellular-level clinical imaging of affected retinas. Methods: We report multimodal retinal imaging in two patients with ABL. Case 1 (67-year-old woman) exhibited a bilateral decline of vision due to choroidal neovascularization (CNV) associated with angioid streaks and a calcified Bruch membrane. Optical coherence tomography findings were consistent with basal laminar deposits and subretinal drusenoid deposits (SDD). Results: Case 2 (46-year-old woman) exhibited unusual hyperpigmentation at the right fovea with count-fingers vision and a relatively unremarkable left fundus with 20/30 vision. The left eye exhibited the presence of nodular drusen and SDD and the absence of macular xanthophyll pigments. Conclusion: We propose that mutated MTTP within the retina may contribute to ABL retinopathy in addition to systemic deficiencies of fat-soluble vitamins. This concept is supported by a new mouse model with RPE-specific MTTP deficiency and a retinal degeneration phenotype. The observed range of human pathology, including angioid streaks, underscores the need for continued monitoring in adulthood, especially for CNV, a treatable condition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. A Case Report and Literature Review of Vogt-Koyanagi-Harada-Like Uveitis Secondary to Dabrafenib and Trametinib: 4-Year Follow-Up Using Retinal Multimodal Imaging.
- Author
- Ucan Gunduz, Gamze, Gullulu, Zeynep Zahide, Nizam Tekcan, Sema, Yalcinbayir, Ozgur, and Cubukcu, Erdem
- Subjects
- CUTANEOUS malignant melanoma, OPTICAL coherence tomography, RHODOPSIN, RETINAL imaging, MELANOMA, IRIDOCYCLITIS
- Abstract
Purpose: To report a case of Vogt-Koyanagi-Harada (VKH)-like uveitis under treatment with Dabrafenib and Trametinib for metastatic malignant melanoma, representing the longest follow-up (49 months) with retinal multimodal imaging. Methods: Retrospective case report. Results: A 49-year-old female with metastatic relapsing cutaneous malignant melanoma presented with blurry vision in both eyes for 1 week. She had been treated with Dabrafenib and Trametinib for 2 months. Fundus examination detected serous retinal detachments (SRDs) and hyperemic optic disks. Spectral-domain optical coherence tomography (SD-OCT) revealed SRDs, retinal pigment epithelium undulations, choroidal thickening, and loss of normal choroidal vascular architecture. The patient was diagnosed with VKH-like uveitis secondary to targeted agents since systemic investigations were unremarkable. Dabrafenib and Trametinib were discontinued, and pulse steroid treatment was started. Following the improvement of retinal and choroidal signs, the same targeted agents were restarted 6 weeks later. No recurrence of uveitis occurred during 49 months of follow-up; however, the convalescent phase findings of VKH were observed in the fundus examination. The systemic status of the patient, who is still using Dabrafenib and Trametinib, is stable. Conclusion: Although the mechanism is still unknown, the development of VKH-like uveitis secondary to targeted therapy may indicate successful tumor control in patients with metastatic melanoma. Providing effective immunosuppression with corticosteroids and making necessary dose modifications with a multidisciplinary approach may extend the survival of patients. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. ConKeD: multiview contrastive descriptor learning for keypoint-based retinal image registration.
- Author
- Rivas-Villar, David, Hervella, Álvaro S., Rouco, José, and Novo, Jorge
- Subjects
- ARTIFICIAL neural networks, IMAGE registration, RETINAL imaging, LEARNING strategies, MEDICAL practice, DEEP learning
- Abstract
Retinal image registration is of utmost importance due to its wide applications in medical practice. In this context, we propose ConKeD, a novel deep learning approach to learn descriptors for retinal image registration. In contrast to current registration methods, our approach employs a novel multi-positive multi-negative contrastive learning strategy that enables the utilization of additional information from the available training samples. This makes it possible to learn high-quality descriptors from limited training data. To train and evaluate ConKeD, we combine these descriptors with domain-specific keypoints, particularly blood vessel bifurcations and crossovers, that are detected using a deep neural network. Our experimental results demonstrate the benefits of the novel multi-positive multi-negative strategy, as it outperforms the widely used triplet loss technique (single-positive and single-negative) as well as the single-positive multi-negative alternative. Additionally, the combination of ConKeD with the domain-specific keypoints produces comparable results to the state-of-the-art methods for retinal image registration, while offering important advantages such as avoiding pre-processing, utilizing fewer training samples, and requiring fewer detected keypoints, among others. Therefore, ConKeD shows a promising potential towards facilitating the development and application of deep learning-based methods for retinal image registration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
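ConKeD's key ingredient in entry 27 is a multi-positive multi-negative contrastive objective for keypoint descriptors. The abstract does not spell out the loss, so the sketch below uses the closely related supervised-contrastive formulation (multiple positives per anchor, all non-matching descriptors as negatives) as a stand-in; here, labels mark which descriptors belong to the same keypoint across images.

```python
import torch
import torch.nn.functional as F

def multi_pos_contrastive_loss(features, labels, temperature=0.1):
    """Supervised-contrastive-style loss: for each anchor descriptor, all other
    descriptors of the same keypoint are positives, everything else is negative.
    features: (N, D) embeddings; labels: (N,) keypoint identities."""
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, -1e9)                      # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)  # log-softmax over others

    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                                      # anchors with >= 1 positive
    mean_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)[valid] / pos_counts[valid]
    return -mean_pos.mean()

# Toy batch: 8 descriptors of 4 keypoints (two views each).
feats = torch.randn(8, 128)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(multi_pos_contrastive_loss(feats, labels))
```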
28. The Role of Size Contrast and Empty Space in the Explanation of the Moon Illusion.
- Author
- Nemati, Farshad
- Subjects
- RETINAL imaging, OPEN spaces, RETINA, MOTIVATION (Psychology), EXPLANATION
- Abstract
The much larger appearance of the moon near the horizon than the perceived size of the moon at zenith has motivated many scientists to develop theories that aim at explaining this puzzling phenomenon. Considering that the sizes of the retinal images of the moon in these positions are very similar, the explanation of the difference in their apparent sizes has relied on perceptual cues of distance embedded in the retinal image of their respective contexts. Although this account of the moon illusion is quite popular, it does not explain all aspects of this phenomenon. Later theoretical formulations of the moon illusion based on other factors, such as size contrast, may have had some advantages but have also created some new problems. Although the moon is perceived in a three-dimensional (3D) environment, the present analysis proposes that an explanation of the moon illusion based on two-dimensional (2D) cues can remove some of the unnecessary problems. The empty space and size contrast that have already been considered in explaining classic geometric-optical illusions play a parallel role in explaining the moon illusion. In other words, the role of open space in interaction with the image of the moon and different objects near the horizon, all reflected on the retina, is considered the main explanatory factor. The advantages of this approach will be discussed and some of the facts pertaining to the moon illusion will be explained within this theoretical framework. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Assessing the Correlation Between Retinal Arteriolar Bifurcation Parameters and Coronary Atherosclerosis.
- Author
- Dai, Guangzheng, Wang, Geng, Yu, Sile, Fu, Weinan, Hu, Shenming, Huang, Yue, Luan, Xinze, Cao, Xue, Wang, Xiaoting, Yan, Hairu, Liu, Xinying, and He, Xingru
- Subjects
- CORONARY artery stenosis, CORONARY artery disease, OPTIC disc, DEEP learning, RETINAL imaging
- Abstract
Introduction: The aim of this study was to examine the relationship between the morphological parameters of retinal arteriolar bifurcations and coronary artery disease (CAD). Methods: In this cross-sectional observational study, fundus photography was conducted on 444 participants to capture retinal arteriolar bifurcations. A total of 731 fundus photographs yielded 9625 measurable bifurcations. Analyzed bifurcation parameters included the diameters of the parent vessel (d0), the larger branch (d1), and the smaller branch (d2), as well as the angles (θ1) and (θ2) representing the orientation of each branch in relation to the parent vessel, respectively. Additionally, theoretical optimal angles (θ1′) and (θ2′), calculated from the measured parameters, provided a benchmark for ideal bifurcation geometry. The study assessed the variation in these parameters across different levels of coronary atherosclerosis severity. Results: After adjusting for anatomical characteristics including the asymmetry ratio, area ratio, and distance to the optic disc, we observed that patients with severe coronary artery stenosis had significant deviations from the theoretical optimal bifurcation angles, with a decrease in (θ1′) and an increase in (θ2′) compared to those with moderate stenosis. Conclusion: The findings suggest a clear alteration in retinal arteriolar bifurcation morphology among patients with severe CAD, which could potentially serve as an indicator of disease severity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
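Entry 29 compares measured branch angles against "theoretical optimal angles calculated from the measured parameters". The abstract does not state which optimality criterion is used; one classical choice is Zamir's minimum-lumen-volume principle, which gives closed-form optimal angles from the three diameters, as sketched below. Treat the formula as an assumption for illustration, not the paper's definition.

```python
import numpy as np

def optimal_bifurcation_angles(d0, d1, d2):
    """Theoretical optimal branch angles (degrees) at an arteriolar bifurcation
    under Zamir's minimum-lumen-volume principle, given the parent (d0) and
    branch (d1 >= d2) diameters:
        cos(theta1) = (d0^4 + d1^4 - d2^4) / (2 * d0^2 * d1^2)
        cos(theta2) = (d0^4 + d2^4 - d1^4) / (2 * d0^2 * d2^2)"""
    c1 = (d0**4 + d1**4 - d2**4) / (2 * d0**2 * d1**2)
    c2 = (d0**4 + d2**4 - d1**4) / (2 * d0**2 * d2**2)
    c1, c2 = np.clip(c1, -1, 1), np.clip(c2, -1, 1)
    return np.degrees(np.arccos(c1)), np.degrees(np.arccos(c2))

# Example diameters (arbitrary units) roughly consistent with Murray's law d0^3 = d1^3 + d2^3.
theta1_opt, theta2_opt = optimal_bifurcation_angles(100.0, 85.0, 75.0)
print(round(theta1_opt, 1), round(theta2_opt, 1))  # ~33.5 and ~45.1 degrees
```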
30. Automatic detection of microaneurysms using DeTraC deep convolutional neural network classifier with woodpecker mating algorithm.
- Author
- Sherine, A. P. and Wilfred Franklin, S.
- Subjects
- CONVOLUTIONAL neural networks, DIABETIC retinopathy, DIABETES complications, RETINAL imaging, ANEURYSMS
- Abstract
Diabetic Retinopathy (DR) is a microvascular complication of diabetes that leads to blindness. Early identification of DR can prevent the loss of sight. The first visible sign of DR is the appearance of microaneurysms (MAs), which are seen as small red circular spots on the retinal surface. The very small size of microaneurysms makes their proper detection challenging. In this research, a DeTraC Deep Convolutional Neural Network classifier with the Woodpecker Mating Algorithm is proposed for the detection of MAs. Using this technique, every pixel is classified as either MA or non-MA to automatically detect microaneurysms in retinal images. Experimental results are evaluated on the diabetic-retinopathy-detection (DRD) dataset from the Kaggle website. Extensive simulations on this dataset show improved performance over existing methods, with a mean sensitivity of 0.98, mean specificity of 0.97, and mean accuracy of 0.98 in detecting MAs irrespective of their intrinsic properties. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Retinal Image Augmentation using Composed GANs.
- Author
- Alghamdi, Manal and Abdel-Mottaleb, Mohamed
- Subjects
- GENERATIVE adversarial networks, DATA augmentation, RETINAL imaging, COMPUTER-assisted image analysis (Medicine), DEEP learning
- Abstract
Medical image analysis faces a significant challenge in the scarcity of annotated data, which is crucial for developing generalizable Deep Learning (DL) models that require extensive training data. Consequently, the field of medical image generation has garnered substantial interest and potential for further exploration. Besides widely employed data augmentation techniques, such as rotation, reflection, and scaling, Generative Adversarial Networks (GANs) have demonstrated the ability to effectively leverage additional information from datasets by generating synthetic samples from real images. In the context of retinal image synthesis, an image-to-image translation approach is frequently adopted to generate retinal images from available vessel maps, which can be scarce and resource-intensive to obtain. Deviating from prior work reliant on pre-existing vessel maps, this study proposes a learning-based model that is independent of vessel maps, utilizing Progressive Growing GAN (PGGAN) to generate vascular networks from random noise. The visual and quantitative evaluations conducted suggest that the majority of the images generated by the proposed model are substantially distinct from the training set while maintaining a high proportion of true image quality, underscoring the model's potential as a powerful tool for data augmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. An Automated Wavelet Scattering Network Classification Using Three Stages of Cataract Disease.
- Author
-
Al-Saadi, Enas Hamood, Khdiar, Ahmed Nidhal, and Al-Saadi, Lamis Hamood
- Subjects
CATARACT ,EYE diseases ,DEEP learning ,RETINAL imaging ,HISTOGRAMS ,BLINDNESS - Abstract
Copyright of Baghdad Science Journal is the property of Republic of Iraq Ministry of Higher Education & Scientific Research (MOHESR) and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
33. Microvascular Metrics on Diabetic Retinopathy Severity: Analysis of Diabetic Eye Images from Real-World Data.
- Author
-
Cuscó, Cristina, Esteve-Bricullé, Pau, Almazán-Moga, Ana, Fernández-Carneado, Jimena, and Ponsati, Berta
- Subjects
DIABETIC retinopathy ,RETINAL imaging ,NOSOLOGY ,PEOPLE with diabetes ,IMAGE analysis - Abstract
Objective: To quantify microvascular lesions in a large real-world data (RWD) set, based on single central retinal fundus images of diabetic eyes from different origins, with the aim of validating its use as a precision tool for classifying diabetic retinopathy (DR) severity. Design: Retrospective meta-analysis across multiple fundus image datasets. Sample size: The study analyzed 2445 retinal fundus images from diabetic patients across four diverse RWD international datasets, including populations from Spain, India, China and the US. Intervention: The quantification of specific microvascular lesions: microaneurysms (MAs), hemorrhages (Hmas) and hard exudates (HEs) using advanced automated image analysis techniques on central retinal images to validate reliable metrics for DR severity assessment. The images were pre-classified into the DR severity levels defined by the International Clinical Diabetic Retinopathy (ICDR) scale. Main Outcome Measures: The primary variables measured were the numbers of MAs, Hmas, red lesions (RLs) and HEs. These counts were related to DR severity levels using statistical methods to validate the relationship between lesion counts and disease severity. Results: The analysis revealed a robust and statistically significant increase (p < 0.001) in the number of microvascular lesions with DR severity across all datasets. Tight data distributions were reported for MAs, Hmas and RLs, supporting the reliability of lesion quantification for accurately assessing DR severity. HEs followed a similar pattern, but with a broader dispersion of data. The data used in this study are consistent with the definition of the DR severity levels established by the ICDR guidelines. Conclusions: The statistically significant increase in the number of microvascular lesions across DR severity levels validates the use of lesion quantification in a single central retinal field as a key biomarker for disease classification and assessment. This quantification method demonstrates an improvement over traditional assessment scales, providing a quantitative microvascular metric that enhances the precision of disease classification and patient monitoring. The inclusion of a numerical component allows for the detection of subtle variations within the same severity level, offering a deeper understanding of disease progression. The consistency of results across diverse datasets not only confirms the method's reliability but also its applicability in a global healthcare setting. [ABSTRACT FROM AUTHOR]
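The abstract reports a statistically significant increase in lesion counts across ICDR severity levels. A simple way to test such a monotonic trend on per-image counts, shown here with entirely hypothetical numbers, is a Spearman correlation plus a Kruskal-Wallis test across grades; this is an illustration, not the authors' exact statistical workflow.

```python
import numpy as np
from scipy import stats

# Hypothetical per-image data: ICDR grade (0-4) and microaneurysm count.
icdr_grade = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])
ma_count   = np.array([0, 1, 2, 3, 5, 7, 9, 12, 15, 20])

# Monotonic association between lesion burden and severity grade.
rho, p_trend = stats.spearmanr(icdr_grade, ma_count)

# Difference in count distributions across severity groups.
groups = [ma_count[icdr_grade == g] for g in np.unique(icdr_grade)]
h_stat, p_kw = stats.kruskal(*groups)

print(f"Spearman rho={rho:.2f} (p={p_trend:.3g}), Kruskal-Wallis p={p_kw:.3g}")
```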
- Published
- 2024
- Full Text
- View/download PDF
34. RETINAL IMAGING FOR DIABETIC RETINOPATHY DETECTION THROUGH DEEP LEARNING.
- Author
-
PEDAPUDI, RAMYA, CHOWDARY, KADAMBALA PRANITA, GREESHMA, MASANAM, KUMAR, KARNATA JASWANTH, and AMULYA, KURAPATI
- Subjects
RETINAL imaging ,DIABETIC retinopathy ,DEEP learning ,MACHINE learning ,DIABETES - Abstract
The prevalence of diabetes is increasing globally, necessitating efficient methods for timely identification and treatment of diabetes, with a focus on early detection and effective management of its complications. This study presents an integrated solution comprising two modules: diabetic detection and diabetic retinopathy detection. The diabetic detection module employs machine learning techniques such as decision trees, random forests, and KNN to forecast the presence of diabetes from patient data. The diabetic retinopathy detection module utilizes deep learning, specifically the ResNet50 model architecture, to analyze retinal images and identify signs of diabetic retinopathy. Both modules are implemented comprehensively, including data preprocessing, model training, and evaluation, using Python libraries such as TensorFlow, Keras, and scikit-learn. The trained models are then integrated into a web application that allows users to input their medical data and retinal images and receive real-time predictions regarding their diabetic status and risk of diabetic retinopathy. The integration of these modules into a web application provides an intuitive interface tailored for both healthcare professionals and patients to assess diabetic risks conveniently. Furthermore, it facilitates early intervention and management of diabetic complications, ultimately improving patient outcomes and reducing healthcare burdens. [ABSTRACT FROM AUTHOR]
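The abstract names TensorFlow/Keras and a ResNet50 backbone for the retinopathy module. A minimal transfer-learning setup consistent with that description might look like the following; the number of classes, image size, and frozen-backbone choice are assumptions rather than the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

def build_dr_classifier(num_classes=5, image_size=224):
    """ResNet50 backbone (ImageNet weights) with a small classification head
    for DR grading; num_classes and image_size are illustrative choices."""
    base = ResNet50(include_top=False, weights="imagenet",
                    input_shape=(image_size, image_size, 3), pooling="avg")
    base.trainable = False  # fine-tune later if needed
    model = models.Sequential([
        base,
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_dr_classifier()
model.summary()
```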
- Published
- 2024
- Full Text
- View/download PDF
35. Imaging the eye as a window to brain health: frontier approaches and future directions.
- Author
-
Banna, Hasan U., Slayo, Mary, Armitage, James A., del Rosal, Blanca, Vocale, Loretta, and Spencer, Sarah J.
- Subjects
- *
ALZHEIMER'S disease , *RETINAL diseases , *RETINAL imaging , *TECHNOLOGICAL innovations , *BRAIN diseases - Abstract
Recent years have seen significant advances in diagnostic testing of central nervous system (CNS) function and disease. However, there remain challenges in developing a comprehensive suite of non- or minimally invasive assays of neural health and disease progression. Due to the direct connection with the CNS, structural changes in the neural retina, retinal vasculature and morphological changes in retinal immune cells can occur in parallel with disease conditions in the brain. The retina can also, uniquely, be assessed directly and non-invasively. For these reasons, the retina may prove to be an important "window" for revealing and understanding brain disease. In this review, we discuss the gross anatomy of the eye, focusing on the sensory and non-sensory cells of the retina, especially microglia, that lend themselves to diagnosing brain disease by imaging the retina. We include a history of ocular imaging to describe the different imaging approaches undertaken in the past and outline current and emerging technologies including retinal autofluorescence imaging, Raman spectroscopy, and artificial intelligence image analysis. These new technologies show promising potential for retinal imaging to be used as a tool for diagnosing brain disorders such as Alzheimer's disease and for assessing treatment success. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. A paradoxical misperception of relative motion.
- Author
-
D’Angelo, Josephine C., Tiruveedhula, Pavan, Weber, Raymond J., Arathorn, David W., and Roorda, Austin
- Subjects
- *
RELATIVE motion , *RETINAL imaging , *ADAPTIVE optics , *EYE movements , *COMPUTER systems - Abstract
Detecting the motion of an object relative to a world-fixed frame of reference is an exquisite human capability [G. E. Legge, F. Campbell, Vis. Res. 21, 205–213 (1981)]. However, there is a special condition where humans are unable to accurately detect relative motion: Images moving in a direction consistent with retinal slip where the motion is unnaturally amplified can, under some conditions, appear stable [D. W. Arathorn, S. B. Stevenson, Q. Yang, P. Tiruveedhula, A. Roorda, J. Vis. 13, 22 (2013)]. We asked: Is world-fixed retinal image background content necessary for the visual system to compute the direction of eye motion, and consequently generate stable percepts of images moving with amplified slip? Or, are nonvisual cues sufficient? Subjects adjusted the parameters of a stimulus moving in a random trajectory to match the perceived motion of images moving contingent to the retina. Experiments were done with and without retinal image background content. The perceived motion of stimuli moving with amplified retinal slip was suppressed in the presence of a visible background; however, higher magnitudes of motion were perceived under conditions when there was none. Our results demonstrate that the presence of retinal image background content is essential for the visual system to compute its direction of motion. The visual content that might be thought to provide a strong frame of reference to detect amplified retinal slips, instead paradoxically drives the misperception of relative motion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Distinguishing ABCA4 from PRPH2-related disease: qualitative analysis of examination and imaging features.
- Author
-
Fan, Kenneth C., Wong, Calvin W., Nichols, Braden A., Sadat, Roa, Becker, Troy C., Brown, David M., and Wykoff, Charles C.
- Subjects
- *
STARGARDT disease , *FISHER exact test , *RETINAL diseases , *IMAGE analysis , *RETINAL imaging - Abstract
Introduction: ABCA4- and PRPH2-related diseases are both phenotypically heterogeneous and clinically difficult to differentiate. There may be examination and imaging features that can aid in establishing a clinical diagnosis. Methods: A single-center, retrospective, consecutive case series of patients with molecular confirmation of pathologic variants in either ABCA4 or PRPH2 was included. Chi-square analysis, Fisher exact test, and Student's t-test compared the prevalence of specific examination and imaging features between the ABCA4 and PRPH2 groups. Results: Of the 127 eyes from 64 patients included, the ABCA4 group was more significantly associated with peripapillary sparing on both fundus imaging (73% vs. 40%; p = 0.006) and FAF (71% vs. 44%; p = 0.025), as well as macular (64% vs. 12%; p < 0.001) and peripheral pisciform flecks (22% vs. 3.6%; p = 0.025). The PRPH2 group was more highly associated with macular chorioretinal atrophy (86% vs. 55%; p = 0.003). Conclusions: Peripapillary sparing and pisciform flecks are more highly associated with ABCA4-related disease, while macular chorioretinal atrophy is more highly associated with PRPH2-related disease. Logistic regression demonstrates that bull's eye maculopathy and macular flecks are predictive of the ABCA4 genotype. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Leveraging Artificial Intelligence for Diabetic Retinopathy Screening and Management: History and Current Advances.
- Author
-
Rajalakshmi, Ramachandran, PramodKumar, Thyparambil Aravindakshan, Naziyagulnaaz, Abdul Subhan, Anjana, Ranjit Mohan, Raman, Rajiv, Manikandan, Suchetha, and Mohan, Viswanathan
- Subjects
- *
OPTICAL coherence tomography , *ARTIFICIAL intelligence , *PEOPLE with diabetes , *DIABETIC retinopathy , *RETINAL imaging - Abstract
Aim: Regular screening of a large number of people with diabetes for diabetic retinopathy (DR) with the support of available human resources alone is a global challenge. Digital health innovation is a boon in screening for DR. Multiple artificial intelligence (AI)-based deep learning (DL) algorithms have shown promise for accurate diagnosis of referable DR (RDR). The aim of this review is to evaluate the use of AI for DR screening and the various currently available automated DR detection algorithms. Methods: We reviewed articles published up to May 15th 2024 on the use of AI for DR by searching PubMed, Medline, Embase, Scopus, and Google Scholar using keywords like diabetic retinopathy, retinal imaging, teleophthalmology, automated detection, artificial intelligence, deep learning and fundus photography. Results: This narrative review traces the advent of AI and its use in digital health, the key concepts in AI and DL algorithm development for diagnosis of DR, some crucial AI algorithms that have been validated for detection of DR, and the benefits and challenges of the use of AI in detection and management of DR. While there are many approved AI algorithms in use globally for DR detection, IDx-DR, EyeArt, and AEYE Diagnostic Screening (AEYE-DS) are the algorithms that have so far been approved by the USFDA for automated DR screening. Conclusion: AI has revolutionized screening of DR by enabling early automated detection. Continuous advances in AI technology, combined with high-quality retinal imaging, can lead to early diagnosis of sight-threatening DR, appropriate referrals, and better outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Progressive changes in binocular perception from stereopsis to rivalry.
- Author
-
Hasegawa, Yohske and Kondo, Hirohito M.
- Subjects
STEREOSCOPIC views ,BINOCULAR rivalry ,DISTRIBUTION (Probability theory) ,VISUAL perception ,RETINAL imaging ,BINOCULAR vision ,DEPTH perception - Abstract
Introduction: The binocular system provides a stereoscopic view from slightly different retinal images and produces perceptual alternations, namely, binocular rivalry, from significantly different retinal images. When we observe a stereogram in which the stimulus configurations induce stereopsis and rivalry simultaneously, the binocular system prefers stereopsis to rivalry. However, changes in visual perception are yet to be investigated by parametrically manipulating the components of a stereogram. Methods: We examined stereopsis preferences in stereograms with various horizontal disparities. The stereograms of our paradigms included horizontal and vertical bars in one eye and a vertical bar alone in the other. Under experimental conditions, the vertical bar superimposed on the horizontal bar varied its position relative to the opposite vertical bar: range of horizontal disparity, 0.0′ to 42.3′. The superimposed vertical bar was absent under the control condition. Observers were instructed to indicate the disappearance of monocular horizontal bars, that is, targets, from their perception during the 30-s trials. Results: The total disappearance duration decreased under experimental conditions compared with that under control conditions, and it gradually increased with an increase in the disparity of the stereoscopic vertical bars. Discussion: These results indicate that the disparity in the stereoscopic components biases binocular perception away from the rivalry between the vertical and horizontal bars toward the stereopsis of the vertical bars. Furthermore, the disappearance duration showed a unimodal and asymmetric distribution across all disparity conditions. This suggests that rivalry processing occurs in parallel when stereopsis is dominant. We found that stereopsis preference is an outcome of binocular perception selection biased by disparity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. No reference retinal image quality assessment using support vector machine classifier in wavelet domain.
- Author
-
Sahu, Sima, Singh, Amit Kumar, and Priyadarshini, Nishita
- Subjects
IMAGE recognition (Computer vision) ,RETINAL imaging ,SUPPORT vector machines ,COMPUTER-aided diagnosis ,MEDICAL personnel - Abstract
The automatic retinal screening system (ARSS) is a valuable computer-aided diagnosis tool for healthcare providers and public health initiatives. The ARSS facilitates mass retinal screenings that analyse retinal images and detect early signs of vision-threatening retinal diseases. Degradation in a retinal image's naturalness leads to imprecise diagnosis. This paper proposes a quality assessment method that is suitable for ARSS and is important for closing care gaps and reducing healthcare costs. A no-reference (NR) quality assessment method utilizing natural scene statistics (NSS) and a multi-resolution approach is developed to assess retinal image quality. Image quality classification is performed by combining NSS features and statistical features of the retinal image. A support vector machine classifier is used to map the retinal image features and determine image quality. The proposed method is compared with existing NR image quality assessment methods. The results show that the proposed method improves accuracy, recall, precision and F-measure by 3.42%, 3.66%, 1.63% and 2.66%, respectively, over the competing methods, demonstrating its suitability for ARSS. [ABSTRACT FROM AUTHOR]
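The paper's exact NSS descriptors are not reproduced here. As a rough sketch of the general recipe, wavelet decomposition, simple per-subband statistics, and an SVM classifier, the following uses PyWavelets and scikit-learn with hypothetical data; the chosen statistics and wavelet are assumptions.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_nss_features(gray_image, wavelet="db2", levels=2):
    """Simple NSS-style statistics (mean abs, std, kurtosis proxy) of wavelet
    detail subbands; an illustrative stand-in for the paper's feature set."""
    coeffs = pywt.wavedec2(gray_image, wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:                  # skip the approximation band
        for band in detail:                    # (cH, cV, cD)
            band = band.ravel()
            feats += [np.mean(np.abs(band)), np.std(band),
                      np.mean(band**4) / (np.var(band)**2 + 1e-12)]
    return np.array(feats)

# Hypothetical training data: grayscale images with binary quality labels.
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(20)]
labels = rng.integers(0, 2, size=20)

X = np.stack([wavelet_nss_features(img) for img in images])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```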
- Published
- 2024
- Full Text
- View/download PDF
41. Open ultrawidefield fundus image dataset with disease diagnosis and clinical image quality assessment.
- Author
-
He, Shucheng, Ye, Xin, Xie, Wenbin, Shen, Yingjiao, Yang, Shangchao, Zhong, Xiaxing, Guan, Hanyi, Zhou, Xiangpeng, Wu, Jiang, and Shen, Lijun
- Subjects
IMAGE quality in imaging systems ,MEDICAL screening ,DIABETIC retinopathy ,DIAGNOSTIC imaging ,DIAGNOSIS ,RETINAL imaging - Abstract
Ultrawidefield fundus (UWF) images have a wide imaging range (200° of the retinal region), which offers the opportunity to show more information for ophthalmic diseases. Image quality assessment (IQA) is a prerequisite for applying UWF imaging and is crucial for developing artificial intelligence-driven diagnosis and screening systems. Most image quality systems have been applied to the assessment of natural images, but whether these systems are suitable for evaluating UWF image quality remains debatable. Additionally, existing IQA datasets only provide photographs of diabetic retinopathy (DR) patients and quality evaluation results applicable to natural images, neglecting patients' clinical information. To address these issues, we established a real-world clinical practice UWF image dataset, with 700 high-resolution UWF images and corresponding clinical information from patients with six common fundus diseases and from healthy volunteers. The image quality is annotated by three ophthalmologists based on field of view, illumination, artifact, contrast, and overall quality. This dataset illustrates the distribution of UWF image quality across diseases in clinical practice, offering a foundation for developing effective IQA systems. [ABSTRACT FROM AUTHOR]
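For readers who want to work with such a dataset programmatically, one image record might be represented as below; the field names simply mirror the annotation dimensions described in the abstract and are hypothetical, not the released file schema.

```python
from dataclasses import dataclass

@dataclass
class UWFImageRecord:
    """Hypothetical record for one ultrawidefield fundus image; field names
    mirror the annotation dimensions described above, not the released files."""
    image_path: str
    diagnosis: str          # e.g. "diabetic retinopathy" or "healthy"
    field_of_view: int      # quality grade per dimension, e.g. 1 (poor) to 3 (good)
    illumination: int
    artifact: int
    contrast: int
    overall_quality: int

record = UWFImageRecord("uwf_0001.jpg", "diabetic retinopathy", 3, 2, 3, 2, 2)
print(record)
```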
- Published
- 2024
- Full Text
- View/download PDF
42. Brain state and cortical layer-specific mechanisms underlying perception at threshold.
- Author
-
Morton, Mitchell P, Denagamage, Sachira, Blume, Isabel J., Reynolds, John H., Jadi, Monika P., and Nandy, Anirvan S.
- Subjects
- *
THRESHOLD (Perception) , *RETINAL imaging , *SENSORIMOTOR integration , *SYNCHRONIC order , *MONKEYS - Abstract
Identical stimuli can be perceived or go unnoticed across successive presentations, producing divergent behavioral outcomes despite similarities in sensory input. We sought to understand how fluctuations in behavioral state and cortical layer and cell class-specific neural activity underlie this perceptual variability. We analyzed physiological measurements of state and laminar electrophysiological activity in visual area V4 while monkeys were rewarded for correctly reporting a stimulus change at perceptual threshold. Hit trials were characterized by a behavioral state with heightened arousal, greater eye position stability, and enhanced decoding performance of stimulus identity from neural activity. Target stimuli evoked stronger responses in V4 in hit trials, and excitatory neurons in the superficial layers, the primary feed-forward output of the cortical column, exhibited lower variability. Feed-forward interlaminar population correlations were stronger on hits. Hit trials were further characterized by greater synchrony between the output layers of the cortex during spontaneous activity, while the stimulus-evoked period showed elevated synchrony in the feed-forward pathway. Taken together, these results suggest that a state of elevated arousal and stable retinal images allow enhanced processing of sensory stimuli, which contributes to hits at perceptual threshold. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Utilising Retinal Phenotypes to Predict Cerebrovascular Disease and Detect Related Risk Factors in Multi-Ethnic Populations: A Narrative Review.
- Author
-
Injety, Ranjit J., Shenoy, Riddhi, Free, Robert C., Minhas, Jatinder S., and Thomas, Mervyn G.
- Subjects
- *
CEREBROVASCULAR disease , *RETINAL imaging , *RACIAL differences , *PHENOTYPES , *ETHNICITY - Abstract
Background: Cerebrovascular diseases (CBVDs) are a major cause of mortality and disability, with significant ethnic variations suggesting specific risk factors. Early detection of these risk factors is critical, and retinal imaging offers a non-invasive method to achieve this. Summary: Retinal phenotypes can serve as early markers for CBVDs. Racial differences in retinal and vascular morphometric characteristics have been described. Examining these characteristics in the context of racial differences could improve early detection and targeted interventions for CBVDs. This review discusses the role of retinal imaging in predicting CBVDs and highlights the importance of ethnicity-specific approaches. Key Messages: Understanding ethnic variations in retinal features can enhance the precision of CBVD prediction and enable personalised treatment strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Optic Neuropathy AFG3L2 Related in a Patient Affected by Congenital Stationary Night Blindness.
- Author
-
Cammarata, Gabriella, Mihalich, Alessandra, Manfredini, Emanuela, Lamperti, Costanza, Bianchi Marzoli, Stefania, Di Blasio, Anna Maria, and Schwartz, Stephen G.
- Subjects
- *
GENETIC variation , *GENETIC testing , *RETINAL imaging , *RETINAL degeneration , *OPTIC disc - Abstract
Objective: We describe a patient affected by congenital stationary night blindness (CSNB) secondary to CACNA1F and optic neuropathy associated with an AFG3L2 variant. Methods: We performed comprehensive neuro-ophthalmologic examinations, retinal imaging, complete ocular electrophysiology, and brain and optic nerve MRI. Genomic DNA was extracted from peripheral blood. The patient's DNA was then investigated by next-generation sequencing (NGS) with a panel including 32 genes associated with retinal dystrophy and subsequently with a panel including seven genes associated with genetic forms of optic atrophy. Results: The genetic analysis identified a pathogenetic CACNA1F variant causing CSNB and a heterozygous variant in AFG3L2 that alters OPA1 processing and is known to be associated with OPA1-like optic neuropathy. Conclusion: Optic disc atrophy has previously been described as an atypical feature in the phenotype of CACNA1F-related CSNB. In this patient, we found a variant of the AFG3L2 gene that presumably explains the presence of optic atrophy in a subject affected by CSNB. Clinical Relevance: The clinical evidence of optic atrophy, which is atypical in CSNB, should raise the suspicion of concomitant hereditary optic neuropathy and emphasize the importance of broad genetic diagnostic testing to better define the genotype–phenotype correlation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Automatic screening of retinal lesions for detecting diabetic retinopathy using adaptive multiscale MobileNet with abnormality segmentation from public dataset.
- Author
-
Selvaganapathy, Nandhini, Siddhan, Saravanan, Sundararajan, Parthasarathy, and Balasundaram, Sathiyaprasad
- Subjects
- *
DIABETIC retinopathy , *RETINAL imaging , *MEDICAL screening , *EARLY diagnosis , *OPHTHALMOLOGISTS - Abstract
Owing to the epidemic growth of diabetes, ophthalmologists must examine huge numbers of fundus images to diagnose Diabetic Retinopathy (DR). Without proper knowledge, people are slow to seek detection of DR, so an early diagnosis system is essential for treating the ailment. A novel deep model-based DR detection framework is therefore proposed to address these difficulties, with the detection process performed adaptively. Images are gathered from benchmark sources and passed to an abnormality segmentation phase, in which a Residual TransUNet with an enhanced loss function is employed; the loss function helps to reduce segmentation error. The segmented images are then passed to the final phase of retinopathy detection, carried out by the Adaptive Multiscale MobileNet (AMMNet), whose variables are optimized by Adaptive Puzzle Optimization to obtain better detection performance. Finally, the effectiveness of the proposed approach is confirmed through experiments on various performance indices. [ABSTRACT FROM AUTHOR]
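The Residual TransUNet, enhanced loss, and Adaptive Puzzle Optimization are not publicly specified, so the sketch below only illustrates the two-stage idea: a stage-1 abnormality mask suppresses background before a stock MobileNet head grades the image. All names, sizes, and the masking strategy are assumptions, not the authors' pipeline.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet

def classify_with_lesion_mask(fundus_rgb, lesion_mask, classifier):
    """Two-stage idea: suppress non-lesion background using the stage-1
    abnormality mask, then grade the masked image with a MobileNet classifier.
    The paper's Residual TransUNet and AMMNet details are not reproduced."""
    masked = fundus_rgb * lesion_mask[..., None]          # keep lesion regions
    batch = tf.image.resize(masked[None, ...], (224, 224))
    return classifier.predict(batch, verbose=0)

classifier = models.Sequential([
    MobileNet(include_top=False, weights="imagenet",
              input_shape=(224, 224, 3), pooling="avg"),
    layers.Dense(2, activation="softmax"),                # DR vs no DR (assumed)
])

fundus = np.random.rand(512, 512, 3).astype("float32")    # placeholder image
mask = (np.random.rand(512, 512) > 0.95).astype("float32")
print(classify_with_lesion_mask(fundus, mask, classifier))
```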
- Published
- 2024
- Full Text
- View/download PDF
46. A Method for Retina Segmentation by Means of U-Net Network.
- Author
-
Santone, Antonella, De Vivo, Rosamaria, Recchia, Laura, Cesarelli, Mario, and Mercaldo, Francesco
- Subjects
CONVOLUTIONAL neural networks ,MACULAR degeneration ,IMAGE segmentation ,RETINAL imaging ,OPTIC disc ,RETINAL blood vessels ,DIABETIC retinopathy - Abstract
Retinal image segmentation plays a critical role in diagnosing and monitoring ophthalmic diseases such as diabetic retinopathy and age-related macular degeneration. We propose a deep learning-based approach utilizing the U-Net network for the accurate and efficient segmentation of retinal images. U-Net, a convolutional neural network widely used for its performance in medical image segmentation, is employed to segment key retinal structures, including the optic disc and blood vessels. We evaluate the proposed model on a publicly available retinal image dataset, where its performance in automatic retina segmentation demonstrates the effectiveness of the proposed method. Our proposal provides a promising method for automated retinal image analysis, aiding in early disease detection and personalized treatment planning. [ABSTRACT FROM AUTHOR]
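For readers unfamiliar with the architecture, a compact U-Net with skip connections can be built in Keras as follows; the depth, filter counts, and input size are illustrative choices rather than the configuration used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(256, 256, 3)):
    """Compact U-Net: two down-sampling stages, a bottleneck, and two
    up-sampling stages with skip connections; an illustrative configuration."""
    inputs = layers.Input(shape=input_shape)
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, 128)                                  # bottleneck
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)      # skip connection
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)      # skip connection
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)  # binary mask
    return models.Model(inputs, outputs)

unet = build_unet()
unet.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
unet.summary()
```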
- Published
- 2024
- Full Text
- View/download PDF
47. Multi-modal representation learning in retinal imaging using self-supervised learning for enhanced clinical predictions.
- Author
-
Sükei, Emese, Rumetshofer, Elisabeth, Schmidinger, Niklas, Mayr, Andreas, Schmidt-Erfurth, Ursula, Klambauer, Günter, and Bogunović, Hrvoje
- Subjects
- *
MEDICAL imaging systems , *RETINAL imaging , *OPTICAL coherence tomography , *ARTIFICIAL intelligence , *OPTICAL images - Abstract
Self-supervised learning has become the cornerstone of building generalizable and transferable artificial intelligence systems in medical imaging. In particular, contrastive representation learning techniques trained on large multi-modal datasets have demonstrated impressive capabilities of producing highly transferable representations for different downstream tasks. In ophthalmology, large multi-modal datasets are abundantly available and conveniently accessible as modern retinal imaging scanners acquire both 2D fundus images and 3D optical coherence tomography (OCT) scans to assess the eye. In this context, we introduce a novel multi-modal contrastive learning-based pipeline to facilitate learning joint representations for the two retinal imaging modalities. After self-supervised pre-training on 153,306 scan pairs, we show that such a pre-training framework can provide both a retrieval system and encoders that produce comprehensive OCT and fundus image representations that generalize well for various downstream tasks on three independent external datasets, explicitly focusing on clinically pertinent prediction tasks. In addition, we show that interchanging OCT with lower-cost fundus imaging can preserve the predictive power of the trained models. [ABSTRACT FROM AUTHOR]
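A common way to realise such multi-modal contrastive pre-training is a symmetric InfoNCE (CLIP-style) objective over paired embeddings. The sketch below assumes pre-computed fundus and OCT encoder outputs and is a generic formulation, not necessarily the paper's exact loss.

```python
import tensorflow as tf

def symmetric_info_nce(fundus_emb, oct_emb, temperature=0.07):
    """CLIP-style contrastive loss for paired fundus/OCT embeddings: matching
    pairs on the diagonal are positives, all other pairs in the batch are
    negatives. A common formulation, assumed here for illustration."""
    f = tf.math.l2_normalize(fundus_emb, axis=1)
    o = tf.math.l2_normalize(oct_emb, axis=1)
    logits = tf.matmul(f, o, transpose_b=True) / temperature   # (B, B) similarities
    labels = tf.range(tf.shape(logits)[0])
    loss_f = tf.keras.losses.sparse_categorical_crossentropy(
        labels, logits, from_logits=True)
    loss_o = tf.keras.losses.sparse_categorical_crossentropy(
        labels, tf.transpose(logits), from_logits=True)
    return tf.reduce_mean(loss_f + loss_o) / 2.0

# Toy batch of 8 paired embeddings from hypothetical fundus and OCT encoders.
fundus_emb = tf.random.normal((8, 256))
oct_emb = tf.random.normal((8, 256))
print(symmetric_info_nce(fundus_emb, oct_emb).numpy())
```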
- Published
- 2024
- Full Text
- View/download PDF
48. Design of Multidomain Feature Analysis Model for Estimation of Anemic Conditions via Deep Transfer Learning.
- Author
-
Kharkar, Vinit P. and Thakare, Ajay P.
- Subjects
- *
GABOR transforms , *DEEP learning , *WAVELET transforms , *RETINAL imaging , *ENTROPY - Abstract
The detection of anemia from anemic retinopathy involves identifying and analysing the specific retinal changes that occur as a result of severe anemia, which requires analysis of multimodal sources. Existing models for identifying these conditions either have high complexity or show lower efficiency when evaluated on clinical scans. Moreover, most of these models cannot be scaled to multi-disease images, which limits their applicability in real-time use cases. To overcome these issues, this research article proposes an efficient and novel multidomain feature analysis model for the estimation of anemic conditions with the help of a deep transfer learning process. The proposed model first represents retinal image scans in multiple domains via estimation of Frequency, Gabor, Convolution, Wavelet and Entropy features. These features are selected via the Bacterial Foraging Optimizer (BFO), which assists in retaining high-variance feature sets; this reduces redundancies and improves the speed and accuracy of classification. To perform this task, the model uses a set of customized 1-D Cascaded Binary CNNs that categorize input images into four classes: "Anemic due to Glaucoma", "Glaucoma", "Anemic", and "Normal", with 98.3% accuracy, which is 4.5% higher than existing methods. The model also improves precision by 2.9% and recall by 3.5%, while reducing the delay needed for classification by 5.9% compared with existing techniques, making it highly useful for clinical deployments. [ABSTRACT FROM AUTHOR]
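As a simplified illustration of multidomain feature extraction (frequency, Gabor, wavelet, and entropy domains) on a grayscale retinal scan, the following computes a handful of descriptors; the specific statistics are assumptions, and the BFO selection and cascaded CNN stages are omitted.

```python
import numpy as np
import pywt
from scipy.stats import entropy
from skimage.filters import gabor

def multidomain_features(gray):
    """Toy multidomain descriptor: FFT energy, Gabor response statistics,
    wavelet subband energies, and a histogram entropy term."""
    feats = []
    # Frequency domain: mean spectral magnitude.
    feats.append(np.abs(np.fft.fft2(gray)).mean())
    # Gabor domain: mean magnitude of one orientation/frequency response.
    real, imag = gabor(gray, frequency=0.2)
    feats.append(np.hypot(real, imag).mean())
    # Wavelet domain: energy of each level-1 detail subband.
    _, (cH, cV, cD) = pywt.dwt2(gray, "db2")
    feats += [np.mean(cH**2), np.mean(cV**2), np.mean(cD**2)]
    # Entropy domain: Shannon entropy of the intensity histogram.
    hist, _ = np.histogram(gray, bins=32, range=(0.0, 1.0), density=True)
    feats.append(entropy(hist + 1e-12))
    return np.array(feats)

gray = np.random.rand(128, 128)   # placeholder for a retinal scan
print(multidomain_features(gray))
```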
- Published
- 2024
- Full Text
- View/download PDF
49. Optimizing retinal vessel visualization using multi-exposure fusion and adaptive contrast enhancement for improved diagnostic imaging.
- Author
-
Obreja, Cristian-Dragoș
- Subjects
- *
RETINAL blood vessels , *DIABETIC retinopathy , *BLOOD vessels , *DIAGNOSTIC imaging , *DATABASES , *RETINAL imaging - Abstract
This study presents a multi-exposure fusion algorithm aimed at enhancing the quality of retinal images captured under variable illumination conditions. Retinal imaging devices frequently struggle with inconsistent lighting, which can lead to low-contrast images in which critical vascular details may be lost. The proposed algorithm combines multiple exposures, preserving the best features from each and improving both clarity and detail. Using a database of 40 retinal images, the method evaluates image quality through the structural similarity index measure (SSIM). Results indicate high structural similarity between fused images and input images across different illumination levels, with SSIM values above 0.9 for medium and high exposure. Furthermore, incorporating Contrast-Limited Adaptive Histogram Equalization (CLAHE) enhances contrast, facilitating clearer vessel visualization against the background. The improved contrast and detail retention achieved by the algorithm support accurate retinal vessel analysis, which is crucial in diagnosing conditions like diabetic retinopathy and glaucoma. This approach provides a robust, enhanced imaging solution for medical diagnostics, significantly improving readability and reliability in retinal assessments. [ABSTRACT FROM AUTHOR]
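OpenCV ships both Mertens exposure fusion and CLAHE, so the overall flow described above can be sketched as below; the parameter values and file names are illustrative, and the SSIM evaluation step is omitted.

```python
import cv2
import numpy as np

def fuse_and_enhance(exposure_paths, clip_limit=2.0, tile_grid=(8, 8)):
    """Fuse multiple exposures of the same retina with Mertens fusion, then
    boost local contrast with CLAHE on the luminance channel. Parameter values
    are illustrative, not the study's tuned settings."""
    exposures = [cv2.imread(p) for p in exposure_paths]
    fused = cv2.createMergeMertens().process(exposures)      # float32, roughly [0, 1]
    fused_u8 = np.clip(fused * 255, 0, 255).astype(np.uint8)

    lab = cv2.cvtColor(fused_u8, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    enhanced = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)

# Hypothetical low/medium/high exposure captures of the same eye.
result = fuse_and_enhance(["low.png", "mid.png", "high.png"])
cv2.imwrite("fused_enhanced.png", result)
```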
- Published
- 2024
- Full Text
- View/download PDF
50. Multi-dimensional dense attention network for pixel-wise segmentation of optic disc in colour fundus images.
- Author
-
MA, Sreema, A, Jayachandran, and Perumal T, Sudarson Rama
- Subjects
- *
IMAGE segmentation , *DEEP learning , *OPTIC disc , *DIABETIC retinopathy , *RETINAL imaging - Abstract
BACKGROUND: Segmentation of retinal structures such as blood vessels, the Optic Disc (OD), and the Optic Cup (OC) enables the early detection of retinal pathologies such as Diabetic Retinopathy (DR) and Glaucoma. OBJECTIVE: Accurate segmentation of the OD remains challenging due to blurred boundaries, vessel occlusion, and other distractions and limitations. Deep learning is rapidly progressing in pixel-wise image segmentation, and a number of network models have been proposed for end-to-end segmentation. However, certain limitations remain, such as a limited ability to represent context, inadequate feature processing, and a limited receptive field, which lead to the loss of local details and blurred boundaries. METHODS: A multi-dimensional dense attention network, MDDA-Net, is proposed for pixel-wise segmentation of the OD in retinal images to address these issues and produce more thorough and accurate segmentation results. A dense attention block is designed to acquire powerful contexts where context-representation capability is otherwise limited. A triple-attention (TA) block is introduced to better capture the relationships between pixels and obtain more comprehensive information, addressing insufficient feature processing. In addition, a multi-scale context fusion (MCF) module acquires multi-scale contexts through context enhancement. RESULTS: We provide a thorough assessment of the proposed approach on three difficult datasets. On the MESSIDOR and ORIGA datasets, the proposed MDDA-Net approach obtains accuracy levels of 99.28% and 98.95%, respectively. CONCLUSION: The experimental results show that MDDA-Net obtains better performance than state-of-the-art deep learning models under the same environmental conditions. [ABSTRACT FROM AUTHOR]
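The MDDA-Net blocks themselves are not publicly specified, so the following shows only a generic channel-plus-spatial attention block of the kind such architectures build on; the layer sizes and the 7x7 spatial kernel are assumptions, not the paper's triple-attention design.

```python
import tensorflow as tf
from tensorflow.keras import layers

def channel_spatial_attention(x, reduction=8):
    """Generic attention block: squeeze-and-excitation style channel weighting
    followed by a single-channel spatial gate. Illustrative only; the MDDA-Net
    triple-attention and multi-scale context fusion blocks differ in detail."""
    channels = x.shape[-1]
    # Channel attention: global context -> per-channel weights.
    w = layers.GlobalAveragePooling2D()(x)
    w = layers.Dense(channels // reduction, activation="relu")(w)
    w = layers.Dense(channels, activation="sigmoid")(w)
    x = layers.Multiply()([x, layers.Reshape((1, 1, channels))(w)])
    # Spatial attention: single-channel gate over spatial positions.
    s = layers.Conv2D(1, 7, padding="same", activation="sigmoid")(x)
    return layers.Multiply()([x, s])

inputs = layers.Input(shape=(64, 64, 32))
outputs = channel_spatial_attention(inputs)
model = tf.keras.Model(inputs, outputs)
model.summary()
```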
- Published
- 2024
- Full Text
- View/download PDF