40 results on "Ran, An Ran"
Search Results
2. Value proposition of retinal imaging in Alzheimer's disease screening: A review of eight evolving trends
- Author
Chan, Victor T.T., Ran, An Ran, Wagner, Siegfried K., Hui, Herbert Y.H., Hu, Xiaoyan, Ko, Ho, Fekrat, Sharon, Wang, Yaxing, Lee, Cecilia S., Young, Alvin L., Tham, Clement C., Tham, Yih Chung, Keane, Pearse A., Milea, Dan, Chen, Christopher, Wong, Tien Yin, Mok, Vincent C.T., and Cheung, Carol Y.
- Published
- 2024
- Full Text
- View/download PDF
3. High Myopia Normative Database of Peripapillary Retinal Nerve Fiber Layer Thickness to Detect Myopic Glaucoma in a Chinese Population
- Author
Zhang, Xiulan, Liu, Yizhi, Lv, Lin, Friedman, David S., Jonas, Jost B., Aung, Tin, Chen, Shida, Wang, Wei, Lin, Fengbin, Song, Yunhe, Wang, Peiyuan, Li, Fei, Gao, Kai, Liu, Bingqian, Liu, Yuhong, Chen, Meiling, Bressler, Neil M., Park, Ki Ho, Lam, Dennis S.C., He, Mingguang, Ohno-Matsui, Kyoko, Weinreb, Robert N., Cheng, Ching-Yu, Healey, Paul, Zangwill, Linda M., Chen, Xiang, Tang, Guangxian, Jin, Ling, Chong, Rachel S., Ran, An Ran, Wang, Zhenyu, Jiang, Jingwen, Kong, Kangjie, Sun, Jian, Wang, Deming, Tham, Clement C., Sun, Xiaodong, and Cheung, Carol Y.
- Published
- 2023
- Full Text
- View/download PDF
4. A deep learning model for detection of Alzheimer's disease based on retinal photographs: a retrospective, multicentre case-control study
- Author
Cheung, Carol Y, Ran, An Ran, Wang, Shujun, Chan, Victor T T, Sham, Kaiser, Hilal, Saima, Venketasubramanian, Narayanaswamy, Cheng, Ching-Yu, Sabanayagam, Charumathi, Tham, Yih Chung, Schmetterer, Leopold, McKay, Gareth J, Williams, Michael A, Wong, Adrian, Au, Lisa W C, Lu, Zhihui, Yam, Jason C, Tham, Clement C, Chen, John J, Dumitrascu, Oana M, Heng, Pheng-Ann, Kwok, Timothy C Y, Mok, Vincent C T, Milea, Dan, Chen, Christopher Li-Hsian, and Wong, Tien Yin
- Published
- 2022
- Full Text
- View/download PDF
5. Deep learning in glaucoma with optical coherence tomography: a review
- Author
Ran, An Ran, Tham, Clement C., Chan, Poemen P., Cheng, Ching-Yu, Tham, Yih-Chung, Rim, Tyler Hyungtaek, and Cheung, Carol Y.
- Published
- 2021
- Full Text
- View/download PDF
6. Detection of glaucomatous optic neuropathy with spectral-domain optical coherence tomography: a retrospective training and validation deep-learning analysis
- Author
Ran, An Ran, Cheung, Carol Y, Wang, Xi, Chen, Hao, Luo, Lu-yang, Chan, Poemen P, Wong, Mandy O M, Chang, Robert T, Mannil, Suria S, Young, Alvin L, Yung, Hon-wah, Pang, Chi Pui, Heng, Pheng-Ann, and Tham, Clement C
- Published
- 2019
- Full Text
- View/download PDF
7. Performance of Artificial Intelligence in Detecting Diabetic Macular Edema From Fundus Photography and Optical Coherence Tomography Images: A Systematic Review and Meta-analysis.
- Author
Lam, Ching, Wong, Yiu Lun, Tang, Ziqi, Hu, Xiaoyan, Nguyen, Truong X., Yang, Dawei, Zhang, Shuyi, Ding, Jennifer, Szeto, Simon K.H., Ran, An Ran, and Cheung, Carol Y.
- Subjects
- OPTICAL coherence tomography, MACULAR edema, ARTIFICIAL intelligence, RECEIVER operating characteristic curves, PEOPLE with diabetes
- Abstract
BACKGROUND: Diabetic macular edema (DME) is the leading cause of vision loss in people with diabetes. Application of artificial intelligence (AI) in interpreting fundus photography (FP) and optical coherence tomography (OCT) images allows prompt detection and intervention. PURPOSE: To evaluate the performance of AI in detecting DME from FP or OCT images and identify potential factors affecting model performance. DATA SOURCES: We searched seven electronic libraries up to 12 February 2023. STUDY SELECTION: We included studies using AI to detect DME from FP or OCT images. DATA EXTRACTION: We extracted study characteristics and performance parameters. DATA SYNTHESIS: Fifty-three studies were included in the meta-analysis. FP-based algorithms of 25 studies yielded pooled area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity of 0.964, 92.6%, and 91.1%, respectively. OCT-based algorithms of 28 studies yielded pooled AUROC, sensitivity, and specificity of 0.985, 95.9%, and 97.9%, respectively. Potential factors improving model performance included use of deep learning techniques and larger, more diverse training data sets. Models demonstrated better performance when validated internally than externally, and those trained with multiple data sets showed better results upon external validation. LIMITATIONS: Analyses were limited by unstandardized algorithm outcomes and insufficient data on patient demographics, OCT volumetric scans, and external validation. CONCLUSIONS: This meta-analysis demonstrates satisfactory performance of AI in detecting DME from FP or OCT images. External validation is warranted for future studies to evaluate model generalizability. Further investigations may estimate optimal sample size, effect of class balance, patient demographics, and additional benefits of OCT volumetric scans. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
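The pooled sensitivity and specificity quoted in the meta-analysis above come from combining per-study 2×2 counts. Such meta-analyses typically use bivariate random-effects models, but the core idea can be sketched with simple fixed-effect inverse-variance pooling on the logit scale; the study counts below are invented for illustration:

```python
import math

def pool_proportions(events, totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale."""
    num = den = 0.0
    for e, n in zip(events, totals):
        p = e / n
        logit = math.log(p / (1 - p))
        var = 1 / e + 1 / (n - e)   # approximate variance of a logit proportion
        w = 1 / var                  # inverse-variance weight
        num += w * logit
        den += w
    pooled_logit = num / den
    return 1 / (1 + math.exp(-pooled_logit))  # back-transform to a proportion

# Hypothetical per-study true-positive counts and diseased totals
tp = [90, 180, 45]
diseased = [100, 200, 50]
pooled_sens = pool_proportions(tp, diseased)
```

With every study at 90% sensitivity, the pooled estimate is 0.9; unequal studies are weighted toward the larger, more precise ones.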
8. Chapter 12 - Artificial intelligence in ophthalmology III: systemic disease prediction
- Author
Ran, An Ran, Hui, Herbert Y.H., Cheung, Carol Y., and Wong, Tien Yin
- Published
- 2024
- Full Text
- View/download PDF
9. Deep learning in optical coherence tomography: Where are the gaps?
- Author
Li, Dawei, Ran, An Ran, Cheung, Carol Y., and Prince, Jerry L.
- Subjects
- OPTICAL coherence tomography, DEEP learning, IMAGE analysis, OPTIC nerve, COMORBIDITY, HEAD & neck cancer
- Abstract
Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides rapid, high-resolution, cross-sectional morphology of the macular area and optic nerve head for the diagnosis and management of different eye diseases. However, interpreting OCT images requires expertise in both OCT imaging and eye diseases, since many factors such as artefacts and concomitant diseases can affect the accuracy of quantitative measurements made by post-processing algorithms. Currently, there is growing interest in applying deep learning (DL) methods to analyse OCT images automatically. This review summarises the trends in DL-based OCT image analysis in ophthalmology, discusses the current gaps, and provides potential research directions. DL in OCT analysis shows promising performance in several tasks: (1) layer and feature segmentation and quantification; (2) disease classification; (3) disease progression and prognosis; and (4) referral triage level prediction. Different studies and trends in the development of DL-based OCT image analysis are described, and the following challenges are identified: (1) public OCT data are scarce and scattered; (2) models show performance discrepancies in real-world settings; (3) models lack transparency; (4) there is a lack of societal acceptance and regulatory standards; and (5) OCT is still not widely available in underprivileged areas. More work is needed to tackle these challenges and gaps before DL is further applied to OCT image analysis for clinical use.
- Published
- 2023
- Full Text
- View/download PDF
10. Correction: Deep learning in glaucoma with optical coherence tomography: a review
- Author
Ran, An Ran, Tham, Clement C., Chan, Poemen P., Cheng, Ching-Yu, Tham, Yih-Chung, Rim, Tyler Hyungtaek, and Cheung, Carol Y.
- Published
- 2021
- Full Text
- View/download PDF
11. Clinically relevant factors associated with a binary outcome of diabetic macular ischaemia: an OCTA study.
- Author
Yang, Da Wei, Tang, Zi Qi, Tang, Fang Yao, Szeto, Simon K.H., Chan, Jason, Yip, Fanny, Wong, Cherie Y.K., Ran, An Ran, Lai, Timothy Y.Y., and Cheung, Carol Y.
- Abstract
Aims: We investigated the demographic, ocular, diabetes-related and systemic factors associated with a binary outcome of diabetic macular ischaemia (DMI), as assessed by optical coherence tomography angiography (OCTA) evaluation of non-perfusion at the level of the superficial capillary plexus (SCP) and deep capillary plexus (DCP), in a cohort of patients with diabetes mellitus (DM). Materials and methods: 617 patients with DM were recruited from July 2015 to December 2020 at the Chinese University of Hong Kong Eye Centre. Image quality (gradable or ungradable for assessing DMI) and DMI (present or absent) were assessed at the level of the SCP and DCP on OCTA. Results: 1107 eyes from 593 subjects were included in the final analysis. 560 (50.59%) eyes had DMI at the level of the SCP, and 647 (58.45%) at the level of the DCP. Among eyes without diabetic retinopathy (DR), DMI was observed in 19.40% and 24.13% of eyes at the SCP and DCP, respectively. In the multivariable logistic regression models, older age, poorer visual acuity, thinner ganglion cell-inner plexiform layer, worsened DR severity, higher haemoglobin A1c level, lower estimated glomerular filtration rate and higher low-density lipoprotein cholesterol level were associated with SCP-DMI. In addition to these factors, presence of diabetic macular oedema and shorter axial length were associated with DCP-DMI. Conclusion: We report a series of factors associated with SCP-DMI and DCP-DMI. A binary DMI outcome may enable a simplified OCTA-based DMI evaluation before subsequent quantitative analysis of DMI extent, and supports the need for an updated diabetic retinal disease staging that incorporates OCTA.
- Published
- 2023
- Full Text
- View/download PDF
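The multivariable logistic regression used in the study above relates risk factors to the binary DMI outcome through the log-odds. A toy from-scratch fit by batch gradient descent; the two features and all values below are hypothetical stand-ins, not the study's data:

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit a logistic regression by batch gradient descent; returns weights plus intercept."""
    n, d = len(X), len(X[0])
    w = [0.0] * (d + 1)  # last entry is the intercept
    for _ in range(epochs):
        grad = [0.0] * (d + 1)
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            p = 1 / (1 + math.exp(-z))   # predicted probability
            err = p - yi                  # gradient of the log-loss w.r.t. z
            for j in range(d):
                grad[j] += err * xi[j]
            grad[-1] += err
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    z = sum(wj * xj for wj, xj in zip(w[:-1], xi)) + w[-1]
    return 1 / (1 + math.exp(-z))

# Hypothetical predictors: [age (scaled), HbA1c (scaled)] -> DMI present (1) / absent (0)
X = [[0.1, 0.2], [0.3, 0.1], [0.8, 0.9], [0.9, 0.7], [0.2, 0.3], [0.7, 0.8]]
y = [0, 0, 1, 1, 0, 1]
w = fit_logistic(X, y)
```

Exponentiating a fitted weight gives the odds ratio per unit change in that predictor, which is how "older age" or "higher HbA1c" would be reported as associated factors.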
12. Deep Reinforcement Learning-Based Retinal Imaging in Alzheimer's Disease: Potential and Perspectives.
- Author
Hui, Herbert Y.H., Ran, An Ran, Dai, Jia Jia, and Cheung, Carol Y.
- Subjects
- RETINAL imaging, REINFORCEMENT learning, ALZHEIMER'S disease, DEEP learning, MACHINE learning
- Abstract
Alzheimer's disease (AD) remains a global health challenge in the 21st century due to its increasing prevalence as the major cause of dementia. State-of-the-art artificial intelligence (AI)-based tests could potentially improve population-based strategies to detect and manage AD. Current retinal imaging demonstrates immense potential as a non-invasive screening measure for AD, by studying qualitative and quantitative changes in the neuronal and vascular structures of the retina that are often associated with degenerative changes in the brain. On the other hand, the tremendous success of AI, especially deep learning, in recent years has encouraged its incorporation with retinal imaging for predicting systemic diseases. Further development in deep reinforcement learning (DRL), defined as a subfield of machine learning that combines deep learning and reinforcement learning, also prompts the question of how it can work hand in hand with retinal imaging as a viable tool for automated prediction of AD. This review aims to discuss potential applications of DRL in using retinal imaging to study AD, and their synergistic application to unlock other possibilities, such as AD detection and prediction of AD progression. Challenges and future directions, such as the use of inverse DRL in defining reward function, lack of standardization in retinal imaging, and data availability, will also be addressed to bridge gaps for its transition into clinical use.
- Published
- 2023
- Full Text
- View/download PDF
13. Association between blood pressure and retinal arteriolar and venular diameters in Chinese early adolescent children, and whether the association has gender difference: a cross-sectional study
- Author
He, Yuan, Li, Shi-Ming, Kang, Meng-Tian, Liu, Luo-Ru, Li, He, Wei, Shi-Fei, Ran, An-Ran, Wang, Ningli, and the Anyang Childhood Eye Study Group
- Published
- 2018
- Full Text
- View/download PDF
14. Deep Learning in Optical Coherence Tomography Angiography: Current Progress, Challenges, and Future Directions.
- Author
Yang, Dawei, Ran, An Ran, Nguyen, Truong X., Lin, Timothy P. H., Chen, Hao, Lai, Timothy Y. Y., Tham, Clement C., and Cheung, Carol Y.
- Subjects
- OPTICAL coherence tomography, ARTIFICIAL neural networks, DEEP learning, ANGIOGRAPHY, IMAGE analysis, SIGNAL convolution
- Abstract
Optical coherence tomography angiography (OCT-A) provides depth-resolved visualization of the retinal microvasculature without intravenous dye injection. It facilitates investigations of various retinal vascular diseases and glaucoma by assessment of qualitative and quantitative microvascular changes in the different retinal layers and the radial peripapillary layer non-invasively, individually, and efficiently. Deep learning (DL), a subset of artificial intelligence (AI) based on deep neural networks, has been applied to OCT-A image analysis in recent years and achieved good performance on different tasks, such as image quality control, segmentation, and classification. DL technologies have further facilitated the potential implementation of OCT-A in eye clinics in an automated and efficient manner and enhanced its clinical value for detecting and evaluating various vascular retinopathies. Nevertheless, the deployment of this combination in real-world clinics is still in the "proof-of-concept" stage due to several limitations, such as small training sample size, lack of standardized data preprocessing, insufficient testing in external datasets, and absence of standardized results interpretation. In this review, we introduce the existing applications of DL in OCT-A, summarize the potential challenges of clinical deployment, and discuss future research directions.
- Published
- 2023
- Full Text
- View/download PDF
15. List of contributors
- Author
Ahn, Joseph C., Anand, Shankara, Arnaout, O., Bandyopadhyay, Anuja, Bartholomew, Erin, Bates, David W., Bender, Sarah M.L., Bhullar, Puneet K., Bhutani, Tina, Boaro, A., Boyer, Edward W., Brown, Ethan D.L., Carreiro, Stephanie, Chahla, Jorge, Chandran, Viji Pulikkel, Cheng, Ching-Yu, Cheung, Carol Y., Choudhary, Anirudh, Choudhury, Avishek, Chung, Mimi, Ciurtin, Coziana, Comfere, Nneka I., Corriveau-Lecavalier, Nick, Côté, Mélina, del Alamo, Diego, Dias, Roger D., Dönnes, Pierre, Duong, Dat, Ebnali, Mahdi, El Sherbini, Adham, ElZarrad, M. Khair, Fakhouri, Tala H., Fakhoury, Marc, Fein, Joshua A., Fischer, Uwe M., Glicksberg, Benjamin S., Goldstein, Cathy, Gonem, Sherif, Green, Darren V.S., Gupta, Raghav, Hakimi, Marwa, Halamka, John D., Han, Christina S., Hannah-Shmouni, Fady, Harrer, Stefan, Hazime, Ali Amer, Howard, Michael A., Hui, Herbert Y.H., Ildardashty, Alexander, J.F. Shaikh, Hashim, Jeliazkov, Jeliazko R., Jones, David T., Juhn, Young J., Jury, Elizabeth C., Kandaswamy, Swaminathan, Kang, Yanna, Kann, Benjamin H., Kaplin, Scott, Karpiak, Joel, Kassir, Elias, Kaur, Harsimran, Khan, Sohil, Khatib, Reem, Kherabi, Yousra, Knake, Lindsey A., Krittanawong, Chayakrit, Kuhn, Veronica C., Kunze, Kyle, Lamarche, Benoît, Laplante, Simon, Lehmann, Lisa Soleymani, Li, Dawei, Liao, Wilson, Likitlersuang, Jirapat, Liu, Qi, Madani, Amin, Malik, Momin M., Mann, Matthias, Masood, Sameer, Mathur, Piyush, McManus, Sean, Menard, Jeffrey, Mivalt, Filip, Murphree, Dennis, Myers, Thomas G., Narayan, Sanjiv M., Natarajan, Vivek, Pandav, Krunal, Parwani, Anil V., Pedraza Bermeo, Adriana Marcela, Peiffer-Smadja, Nathan, Peng, Junjie, Peng, Lily, Peters, Margot S., Petrick, Nicholas, Polce, Evan, Poojari, Pooja Gopal, Price, W. 
Nicholson, II, Raghavan, Lavanya, Rajan, Asha K., Ran, An Ran, Rangu, Sowmith, Rashid, Muhammed, Reddy, Charitha D., Rider, Nicholas L., Rigatti, Marc, Rivers, Michael, Robinson, George, Rogers, Albert J., Rogerson, Colin M., Ryu, Euijung, Sahiner, Berkman, Sarnaik, Kunaal, Shafi, Saba, Shah, Vijay H., Shapovalov, Maxim V., Sharma, Samin K, Shekhar, Skand, Shin, Harold, Shouval, Roni, Smith, Kenneth, Sokumbi, Olayemi, Solomon, Benjamin D., Somani, Sulaiman S., Sternke, Matt C., Strauss, Maximillian T., Syrowatka, Ania, Tang, W. H. Wilson, Teven, Chad M., Tewari, Ashutosh Kumar, Thunga, Girish, Urena, Estefania, Verma, Ashish, Vietas, Jay, Waikel, Rebekah L., Wi, Chung-Il, Winter, Meredith C., Wong, Melissa S., Wong, Tien Yin, Wu, Chao-Ping, Yeroushalmi, Samuel, Zenati, Marco A., Zeng, Wen-Feng, and Zheng, Yingfeng
- Published
- 2024
- Full Text
- View/download PDF
16. At the Junction of Organic Solar Cells: Charge Generation and Recombination at Donor/Acceptor Interfaces
- Author
Ran, Niva Ran
- Subjects
- Energy, Physics, Materials Science, Charge transfer state, Device Physics, Donor/acceptor interfaces, Energetic offsets, Organic photovoltaics, Organic semiconductors
- Abstract
Heterojunction, what's your function? In organic electronic devices composed of donor and acceptor semiconductors, the donor/acceptor interface is most typically the site with all the action: i.e. charge-carrier generation and recombination. Developing an understanding of the optimal geometry and energetics at this interface is necessary to optimize the active material and device architecture for their desired application. In this dissertation, we explore the role of morphology and energetics at the donor/acceptor interfaces on photovoltaic performance, but the results can be applied to any device with donor/acceptor heterojunctions. We begin our investigation by characterizing emission from small-molecule blends. We find a correlation between the emergence of phase separation and crystallinity, electroluminescence from the donor singlet-state, and good photovoltaic performance. Next, upon demonstrating control over the molecular orientation, we then uncover the genuine effects of molecular geometry at the donor/acceptor interface on charge generation and recombination: (i) Face-on devices have a higher open-circuit voltage, due to greater charge transfer state energy and radiative efficiency. (ii) Edge-on devices are more efficient at charge generation, which is attributed to a smaller electronic coupling and a lower activation energy for charge generation. From the perspective of energetics, we focus on a polymer-fullerene blend system with small energetic offsets. This system has very low potential losses: it achieves a high open circuit voltage relative to the energy of the absorbed photons. We characterize the energetic landscape in this blend and conclude that the blend has very high energetic order, and that potential losses associated with charge transfer have been minimized. Unfortunately, the blend is also characterized by exceptionally fast bimolecular recombination, most likely resulting from a highly-mixed blend morphology and charge-trapping effects.
Nonetheless, these results are promising as they suggest that given an optimized morphology, organic solar cells (and other organic electronic devices) have more potential than we had previously believed.
- Published
- 2016
17. Effect of Text Messaging Parents of School-Aged Children on Outdoor Time to Control Myopia: A Randomized Clinical Trial.
- Author
Li, Shi-Ming, Ran, An-Ran, Kang, Meng-Tian, Yang, Xiaoyuan, Ren, Ming-Yang, Wei, Shi-Fei, Gan, Jia-He, Li, Lei, He, Xi, Li, He, Liu, Luo-Ru, Wang, Yipeng, Zhan, Si-Yan, Atchison, David A., Morgan, Ian, and Wang, Ningli
- Published
- 2022
- Full Text
- View/download PDF
18. Finding New Diagnostic Information for Detecting Glaucoma using Neural Networks
- Author
Noury, Erfan, Mannil, Suria S., Chang, Robert T., Ran, An Ran, Cheung, Carol Y., Thapa, Suman S., Rao, Harsha L., Dasari, Srilakshmi, Riyazuddin, Mohammed, Chang, Dolly, Nagaraj, Sriharsha, Tham, Clement C., and Zadeh, Reza
- Subjects
- FOS: Computer and information sciences, Computer Science - Machine Learning, genetic structures, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Computer Vision and Pattern Recognition, sense organs, Electrical Engineering and Systems Science - Image and Video Processing, eye diseases, Machine Learning (cs.LG)
- Abstract
We describe a new approach to automated glaucoma detection in 3D Spectral Domain Optical Coherence Tomography (OCT) optic nerve scans. First, we gathered a unique and diverse multi-ethnic dataset of OCT scans consisting of glaucoma and non-glaucomatous cases obtained from four tertiary care eye hospitals located in four different countries. Using this longitudinal data, we achieved state-of-the-art results for automatically detecting glaucoma from a single raw OCT scan using a 3D deep learning system. These results are close to those of human doctors in a variety of settings across heterogeneous datasets and scanning environments. To verify correctness and interpretability of the automated categorization, we used saliency maps to find areas of focus for the model. Matching human doctor behavior, the model predictions indeed correlated with the conventional diagnostic parameters in the OCT printouts, such as the retinal nerve fiber layer (RNFL). We further used our model to find new areas in the 3D data that are presently not being identified as a diagnostic parameter to detect glaucoma by human doctors. Namely, we found that the Lamina Cribrosa (LC) region can be a valuable source of helpful diagnostic information previously unavailable to doctors during routine clinical care because it lacks a quantitative printout. Our model provides such volumetric quantification of this region. We found that even when a majority of the RNFL is removed, the LC region can distinguish glaucoma. This is clinically relevant in high myopes, when the RNFL is already reduced, and thus the LC region may help differentiate glaucoma in this confounding situation. We further generalize this approach to create a new algorithm called DiagFind that provides a recipe for finding new diagnostic information in medical imagery that may have been previously unusable by doctors.
- Published
- 2019
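The study above uses saliency maps to localize the regions driving the model's predictions. A model-agnostic cousin of that idea, occlusion sensitivity, can be sketched in a few lines: mask one region at a time and record the drop in the predicted score. The toy scoring function below is a purely hypothetical stand-in, not the study's 3D deep learning system:

```python
def occlusion_map(image, score_fn, patch=2, baseline=0.0):
    """Score drop when each patch x patch block of a 2D slice is masked out."""
    rows, cols = len(image), len(image[0])
    full = score_fn(image)
    heat = [[0.0] * (cols // patch) for _ in range(rows // patch)]
    for bi in range(rows // patch):
        for bj in range(cols // patch):
            masked = [row[:] for row in image]          # copy, then zero out one block
            for i in range(bi * patch, (bi + 1) * patch):
                for j in range(bj * patch, (bj + 1) * patch):
                    masked[i][j] = baseline
            heat[bi][bj] = full - score_fn(masked)      # large drop => important region
    return heat

# Stand-in "model": scores a slice by the mean of its bottom-right quadrant,
# so occlusion should flag that quadrant as the important region.
def toy_score(v):
    return sum(v[i][j] for i in range(2, 4) for j in range(2, 4)) / 4.0

slice4 = [[1, 1, 1, 1],
          [1, 1, 1, 1],
          [1, 1, 5, 5],
          [1, 1, 5, 5]]
heat = occlusion_map(slice4, toy_score)
```

The block whose masking causes the biggest score drop plays the role the lamina cribrosa region played in the study's saliency analysis.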
19. Detecting Glaucoma Using 3D Convolutional Neural Network of Raw SD-OCT Optic Nerve Scans
- Author
Noury, Erfan, Mannil, Suria S., Chang, Robert T., Ran, An Ran, Cheung, Carol Y., Thapa, Suman S., Rao, Harsha L., Dasari, Srilakshmi, Riyazuddin, Mohammed, Nagaraj, Sriharsha, and Zadeh, Reza
- Subjects
- glaucoma, genetic structures, retinal nerve fiber layer thickness (RNFL), sense organs, chronic progressive optic neuropathy, Spectral Domain Optical Coherence Tomography, eye diseases
- Abstract
Background. Glaucoma is a chronic progressive optic neuropathy with characteristic visual field defects and corresponding structural changes, including nerve fiber layer thinning and optic nerve neuroretinal rim loss. These changes are traditionally monitored by SD-OCT (Spectral Domain Optical Coherence Tomography), which contains a large amount of 3D voxel information in a 6mm × 6mm × 2mm cube of data. However, only a fixed 3.4 mm diameter circle (2D slice) centered over the optic nerve is currently extracted using automated segmentation of the retinal nerve fiber layer thickness (RNFL). This RNFL thickness is reported relative to a normative database to help detect thinning and neuroretinal rim loss, which does not use the additional information in the optic nerve head cube. Clinicians rarely scroll through the entire cube. Therefore we propose developing and validating a three-dimensional (3D) deep learning system using the entire unprocessed OCT optic nerve volumes to distinguish true glaucoma from normals in order to discover any additional imaging biomarkers within the cube through saliency mapping. The algorithm has been validated against 4 additional distinct datasets from different countries using multimodal test results to define glaucoma rather than just the OCT alone. We hypothesize that the output from this 3D model, alongside a map of the regions where the model attends to make a prediction, can help identify novel diagnostic information in the cube. Methods. 2076 OCT (Cirrus SD-OCT, Carl Zeiss Meditec, Dublin, CA) 6 mm cubes centered over the optic nerve, 200 × 200 × 1024 volumes of 879 eyes (390 healthy and 489 glaucoma) from 487 patients, age 18-84 years, were exported from the Glaucoma Clinic Imaging Database at the Byers Eye Institute, Stanford University, from March 2010 to December 2017. This included bilateral eyes of 391 patients and unilateral eyes of 97 patients with a right eye to left eye ratio of 1.05:1. 
A 3D deep neural network was trained and tested on this unique OCT optic nerve head dataset from Stanford. 570 randomly selected optic nerve head cube scans of eyes with a diagnosis of glaucoma (True Glaucoma) and 342 scans of eyes with a normal diagnosis (True Normal) were used for training. A total of 81 scans of eyes with True Glaucoma and 32 scans of eyes with True Normal annotations were included in the primary validation set. 58 scans of eyes with True Glaucoma annotation and 50 scans of eyes with a True Normal annotation were included in the test set. A total of 3620 scans (all obtained using the Cirrus SD-OCT device) from 1458 eyes obtained from 4 different institutions, from United States (943 scans), Hong Kong (1625 scans), India (672 scans), and Nepal (380 scans) were used for external evaluation. True Glaucoma for the training data was defined as glaucomatous disc changes along with defects on SD-OCT RNFL and/or GCIPL (thickness and/or deviation) maps with corresponding visual field defects as well as intraocular pressure lowering treatment upon chart review. The range of glaucoma patients included mild to severe without excluding high myopes. True normal was defined as cases with non-glaucomatous optic disc with no structural defects on OCT RNFL/GCIPL deviation or sector maps and normal visual fields upon chart review. Results. The 3D deep learning system achieved an area under the receiver operation characteristics curve (AUROC) of 0.8883 in the primary Stanford test set identifying true normal from true glaucoma. The system obtained AUROCs of 0.8571, 0.7695, 0.8706, and 0.7965 on OCT cubes from United States, Hong Kong, India, and Nepal, respectively. We also analyzed the performance of the model separately for each myopia severity level as defined by spherical equivalent and the model was able to achieve F1 scores of 0.9673, 0.9491, and 0.8528 on severe, moderate, and mild myopia cases, respectively. 
Saliency map visualizations highlighted the optic nerve lamina cribrosa region as significantly associated with the glaucoma group. Conclusions. A 3D convolutional neural network using SD-OCT optic nerve head cubes can distinguish true glaucoma from normal with good accuracy, and this generalized to multiple diverse external SD-OCT datasets. Highlighted areas from saliency mapping revealed new areas within the deep lamina cribrosa. This deserves further investigation, as there is potential to monitor laminar changes even after the RNFL has thinned.
- Published
- 2019
- Full Text
- View/download PDF
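The 3D deep learning systems in the two entries above operate directly on volumetric OCT cubes rather than a single extracted 2D slice. Their basic building block, the 3D convolution, slides a small volumetric kernel through the cube; a minimal single-channel, stride-1 sketch on toy sizes (the actual networks are far larger and their weights are learned):

```python
def conv3d_valid(volume, kernel):
    """Single-channel 3D convolution (cross-correlation), 'valid' padding, stride 1.
    volume and kernel are nested lists indexed [z][y][x]."""
    D, H, W = len(volume), len(volume[0]), len(volume[0][0])
    d, h, w = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for z in range(D - d + 1):
        plane = []
        for y in range(H - h + 1):
            row = []
            for x in range(W - w + 1):
                acc = 0.0
                for dz in range(d):          # accumulate over the kernel window
                    for dy in range(h):
                        for dx in range(w):
                            acc += volume[z + dz][y + dy][x + dx] * kernel[dz][dy][dx]
                row.append(acc)
            plane.append(row)
        out.append(plane)
    return out

# Toy 4x4x4 "OCT cube" with voxel value 16*z + 4*y + x, and a 2x2x2 averaging kernel
vol = [[[16 * z + 4 * y + x for x in range(4)] for y in range(4)] for z in range(4)]
kern = [[[1 / 8.0] * 2 for _ in range(2)] for _ in range(2)]
feat = conv3d_valid(vol, kern)  # 3x3x3 feature map of local averages
```

Stacking such layers (with learned kernels, nonlinearities, and downsampling) is what lets the model use the whole 200 × 200 × 1024 cube instead of only the 3.4 mm RNFL circle.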
20. Brain Activation Induced by Myopic and Hyperopic Defocus From Spectacles.
- Author
Kang, Meng-Tian, Wang, Bo, Ran, An-Ran, Gan, Jiahe, Du, Jialing, Yusufu, Mayinuer, Liang, Xintong, Li, Shi-Ming, and Wang, Ningli
- Subjects
- FRONTAL lobe, TEMPORAL lobe, FUNCTIONAL magnetic resonance imaging, PARIETAL lobe, CEREBRAL circulation
- Abstract
Purpose: To assess neural changes in perceptual effects induced by myopic defocus and hyperopic defocus stimuli in ametropic and emmetropic subjects using functional magnetic resonance imaging (fMRI). Methods: This study included 41 subjects with a mean age of 26.0 ± 2.9 years. The mean spherical equivalent refraction was −0.54 ± 0.51D in the emmetropic group and −3.57 ± 2.27D in the ametropic group. The subjects were instructed to view through full refractive correction, with values of +2.00D to induce a myopic defocus state and −2.00D to induce a hyperopic defocus state. This was carried out in three randomized sessions. Arterial spin labeling (ASL) perfusion was measured using fMRI to obtain quantified regional cerebral blood flow (rCBF). Behavioral tests, including distant visual acuity (VA) and contrast sensitivity (CS), were measured every 5 min for 30 min. Results: Myopic defocus induced significantly greater rCBF increases in four cerebral regions compared with full correction: right precentral gyrus, right superior temporal gyrus, left inferior parietal lobule, and left middle temporal gyrus (P < 0.001). The differences were less significant in low myopes than in emmetropes. In the hyperopic defocus session, increased rCBF responses were only observed in the right and left precentral gyrus. Myopic defocused VA and CS improved significantly within 5 min and reached a plateau shortly after. Conclusion: This study revealed that myopic defocus stimuli can significantly increase blood perfusion in visual attention-related cerebral regions, which suggests a potential direction for future investigation of the relationship between retinal defocus and its neural consequences.
- Published
- 2021
- Full Text
- View/download PDF
21. A Multitask Deep-Learning System to Classify Diabetic Macular Edema for Different Optical Coherence Tomography Devices: A Multicenter Analysis.
- Author
Tang, Fangyao, Wang, Xi, Ran, An-ran, Chan, Carmen K.M., Ho, Mary, Yip, Wilson, Young, Alvin L., Lok, Jerry, Szeto, Simon, Chan, Jason, Yip, Fanny, Wong, Raymond, Tang, Ziqi, Yang, Dawei, Ng, Danny S., Chen, Li Jia, Brelén, Marten, Chu, Victor, Li, Kenneth, and Lai, Tracy H.T.
- Subjects
- LASER photocoagulation, MACULAR edema, OPTICAL coherence tomography, CONVOLUTIONAL neural networks, DEEP learning, RECEIVER operating characteristic curves, RESEARCH, RETINAL degeneration, RESEARCH methodology, DIABETES, MEDICAL cooperation, EVALUATION research, COMPARATIVE studies, RESEARCH funding, DIABETIC retinopathy
- Abstract
Objective: Diabetic macular edema (DME) is the primary cause of vision loss among individuals with diabetes mellitus (DM). We developed, validated, and tested a deep learning (DL) system for classifying DME using images from three common commercially available optical coherence tomography (OCT) devices. Research Design and Methods: We trained and validated two versions of a multitask convolutional neural network (CNN) to classify DME (center-involved DME [CI-DME], non-CI-DME, or absence of DME) using three-dimensional (3D) volume scans and 2D B-scans, respectively. For both 3D and 2D CNNs, we used the residual network (ResNet) as the backbone. For the 3D CNN, we used a 3D version of ResNet-34 with the last fully connected layer removed as the feature extraction module. A total of 73,746 OCT images were used for training and primary validation. External testing was performed using 26,981 images across seven independent data sets from Singapore, Hong Kong, the U.S., China, and Australia. Results: In classifying the presence or absence of DME, the DL system achieved areas under the receiver operating characteristic curve (AUROCs) of 0.937 (95% CI 0.920-0.954), 0.958 (0.930-0.977), and 0.965 (0.948-0.977) for the primary data set obtained from CIRRUS, SPECTRALIS, and Triton OCTs, respectively, in addition to AUROCs >0.906 for the external data sets. For further classification of the CI-DME and non-CI-DME subgroups, the AUROCs were 0.968 (0.940-0.995), 0.951 (0.898-0.982), and 0.975 (0.947-0.991) for the primary data set and >0.894 for the external data sets. Conclusions: We demonstrated excellent performance with a DL system for the automated classification of DME, highlighting its potential as a promising second-line screening tool for patients with DM, potentially creating a more effective triaging mechanism for eye clinics.
- Published
- 2021
- Full Text
- View/download PDF
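AUROC, the headline metric in the study above (0.937 to 0.975 across devices), equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one (the Mann-Whitney statistic). That gives a compact rank-based implementation; the scores and labels below are made up:

```python
def auroc(scores, labels):
    """Rank-based AUROC: P(score_pos > score_neg), with ties counted as 0.5."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores: higher should mean "DME present"
scores = [0.9, 0.8, 0.4, 0.7, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(auroc(scores, labels))  # 1.0 here: every positive outscores every negative
```

A value of 0.5 corresponds to random ranking, so the reported 0.937+ means the model almost always ranks a DME eye above a non-DME eye.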
22. UD-MIL: Uncertainty-Driven Deep Multiple Instance Learning for OCT Image Classification.
- Author
-
Wang, Xi, Tang, Fangyao, Chen, Hao, Luo, Luyang, Tang, Ziqi, Ran, An-Ran, Cheung, Carol Y., and Heng, Pheng-Ann
- Subjects
CONVOLUTIONAL neural networks ,DEEP learning ,RECEIVER operating characteristic curves ,RECURRENT neural networks ,OPTICAL coherence tomography ,SUPERVISED learning ,NOSOLOGY - Abstract
Deep learning has achieved remarkable success in the optical coherence tomography (OCT) image classification task when substantial labelled B-scan images are available. However, obtaining such fine-grained expert annotations is usually difficult and expensive, so leveraging volume-level labels to develop a robust classifier is very appealing. In this paper, we propose a weakly supervised deep learning framework with uncertainty estimation to address the macula-related disease classification problem from OCT images when only volume-level labels are available. First, a convolutional neural network (CNN) based instance-level classifier is iteratively refined by the proposed uncertainty-driven deep multiple instance learning scheme. To the best of our knowledge, we are the first to incorporate an uncertainty evaluation mechanism into multiple instance learning (MIL) for training a robust instance classifier. The classifier is able to detect suspicious abnormal instances and simultaneously extract the corresponding deep embeddings with high representation capability. Second, a recurrent neural network (RNN) takes the instance features from the same bag as input and generates the final bag-level prediction by considering both the local instance information and the globally aggregated bag-level representation. For more comprehensive validation, we built two large diabetic macular edema (DME) OCT datasets from different devices and imaging protocols to evaluate the efficacy of our method, composed of 30,151 B-scans in 1,396 volumes from 274 patients (Heidelberg-DME dataset) and 38,976 B-scans in 3,248 volumes from 490 patients (Triton-DME dataset), respectively.
We compare the proposed method with state-of-the-art approaches and experimentally demonstrate that our method is superior to the alternatives, achieving volume-level accuracy, F1-score, and area under the receiver operating characteristic curve (AUC) of 95.1%, 0.939, and 0.990 on Heidelberg-DME and 95.1%, 0.935, and 0.986 on Triton-DME, respectively. Furthermore, the proposed method also yields competitive results on another public age-related macular degeneration OCT dataset, indicating its high potential as an effective screening tool in clinical practice. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
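The uncertainty-driven instance selection described in the abstract above can be sketched roughly as follows. This is a simplified illustration, not the authors' implementation: it assumes per-instance abnormality probabilities from several stochastic (e.g. MC-dropout) forward passes and treats the predictive variance as the uncertainty signal when picking instances from a bag (OCT volume).

```python
import numpy as np

def select_instances(mc_probs, k=3, var_threshold=0.05):
    """Pick the k highest-scoring, low-uncertainty instances in a bag.

    mc_probs: array of shape (T, N) holding T stochastic (MC-dropout)
    forward passes over the N B-scan instances of one OCT volume.
    """
    mean_prob = mc_probs.mean(axis=0)   # per-instance abnormality score
    variance = mc_probs.var(axis=0)     # per-instance predictive uncertainty
    confident = np.where(variance <= var_threshold)[0]
    # rank the confident instances by score, keep the top k
    ranked = confident[np.argsort(mean_prob[confident])[::-1]]
    return ranked[:k]
```

In an MIL loop, the selected instances (and their embeddings) would then be fed to the bag-level aggregator, such as the RNN the abstract describes.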
23. The association of TGFB1 genetic polymorphisms with high myopia: a systematic review and meta-analysis
- Author
-
Meng, Bo, LI, Shi-Ming, Yang, Yu, Yang, Zhi-Rong, Sun, Feng, Kang, Meng-Tian, Sun, Yun-Yun, Ran, An-Ran, Wang, Jia-Nan, Yan, Ran, BaI, Ya-Wen, Wang, Ning-Li, and Zhan, Si-Yan
- Subjects
Original Article - Abstract
Objective: The TGFB1 gene is among the most studied genes in high myopia because of its role in scleral remodeling, but reported findings on the association between TGFB1 and high myopia are inconsistent. The present study evaluates the association between TGFB1 polymorphisms and high myopia. Methods: A comprehensive literature search was conducted on studies published up to April 5, 2015. Summary odds ratios (ORs) and 95% confidence intervals were analyzed. Heterogeneity across studies was evaluated with the Cochran Q test and the I2 index. Sensitivity analyses were conducted with a one-study-removed approach to assess the influence of each single study on the combined effect. Results: Eight studies were included in this meta-analysis. Rs1982073 was associated with high myopia in the dominant model (OR=1.64; 95% CI=1.04~2.58; P
- Published
- 2015
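The pooled odds ratios in the abstract above come from standard inverse-variance weighting on the log-OR scale. A minimal fixed-effect sketch (with illustrative numbers, not the paper's data) recovers each study's standard error from its 95% CI:

```python
import math

def pooled_or(odds_ratios, ci_uppers, ci_lowers):
    """Fixed-effect (inverse-variance) pooling of study-level odds ratios.

    The SE of each log OR is recovered from its 95% CI:
        se = (ln(upper) - ln(lower)) / (2 * 1.96)
    """
    num = den = 0.0
    for or_, hi, lo in zip(odds_ratios, ci_uppers, ci_lowers):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2                 # inverse-variance weight
        num += w * math.log(or_)
        den += w
    return math.exp(num / den)            # pooled OR
```

A random-effects (DerSimonian-Laird) variant would add the between-study variance estimate to each study's variance before weighting; heterogeneity statistics such as Cochran's Q fall out of the same weighted sums.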
24. Tutorialistic Gameplay : A comparison to restrictive explicit tutorials in a hard fun game of emergence
- Author
-
Ran, Andreas Ran
- Subjects
hard fun ,Datavetenskap (datalogi) ,Computer Sciences ,ComputingMilieux_PERSONALCOMPUTING ,tutorialistic gameplay ,game of emergence ,tutorials ,games - Abstract
This report investigates whether tutorials are necessary in a hard fun game of emergence with tutorialistic gameplay. This is done by comparing the performance of players who have and have not played a tutorial. The study is based on research on the classification of different kinds of games as well as research on the effect of tutorials on player experience. The terms tutorialistic gameplay and restrictive tutorials are introduced and defined. The method used for data collection was a game that automatically recorded performance data at one-minute intervals. This performance data was then compiled and analysed to answer the research question: How does learning through tutorialistic gameplay affect player performance in normal gameplay compared to learning through restrictive tutorials in a hard fun game of emergence? Too few results were received to answer the question, but they show examples of learning through tutorialistic gameplay having both positive and negative effects.
- Published
- 2014
25. Distribution and associations of intraocular pressure in 7- and 12-year-old Chinese children: The Anyang Childhood Eye Study.
- Author
-
Li, Shuning, Li, Shi-Ming, Wang, Xiao-lei, Kang, Meng-Tian, Liu, Luo-Ru, Li, He, Wei, Shi-Fei, Ran, An-Ran, Zhan, Siyan, Thomas, Ravi, Wang, Ningli, and null, null
- Subjects
MYOPIA treatment ,INTRAOCULAR pressure ,CHINESE people ,JUVENILE diseases ,TONOMETRY ,DISEASES - Abstract
Purpose: To report the intraocular pressure (IOP) and its association with myopia and other factors in 7- and 12-year-old Chinese children. Methods: All children participating in the Anyang Childhood Eye Study underwent non-contact tonometry as well as measurement of central corneal thickness (CCT), axial length, cycloplegic auto-refraction, blood pressure, height and weight. A questionnaire was used to collect other relevant information. Univariable and multivariable analyses were performed to determine the associations of IOP. Results: A total of 2760 7-year-old children (95.4%) and 2198 12-year-old children (97.0%) were included. The mean IOP was 13.5±3.1 mmHg in the younger cohort and 15.8±3.5 mmHg in the older cohort (P<0.0001). On multivariable analysis, higher IOP in the younger cohort was associated with female gender (standardized regression coefficient [SRC], 0.11, P<0.0001), increasing central corneal thickness (SRC, 0.39, P<0.0001), myopia (SRC, 0.05, P = 0.03), deep anterior chamber (SRC, 0.07, P<0.01), smaller waist (SRC, 0.07, P<0.01) and increasing mean arterial pressure (SRC, 0.13, P<0.0001). In the older cohort, higher IOP was again associated with female gender (SRC, 0.16, P<0.0001), increasing central corneal thickness (SRC, 0.43, P<0.0001), deep anterior chamber (SRC, 0.09, P<0.01), higher body mass index (SRC, 0.07, P = 0.04) and with increasing mean arterial pressure (SRC, 0.09, P = 0.01), age at which reading commenced (SRC, 0.10, P<0.01) and birth method (SRC, 0.09, P = 0.01), but not with myopia (SRC, 0.09, P = 0.20). Conclusion: In Chinese children, higher IOP was associated with female gender, older age, thicker central cornea, deeper anterior chamber and higher mean arterial pressure. Higher body mass index, younger age at commencement of reading and birth by caesarean section were also associated with higher IOP in adolescence. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
26. 'Bin' – Organizatsye Fun Di Sabres Fun Yerushalayim d’Yidish / 'Bee' – Organization of the Sabras of Jerusalem of Yiddish
- Author
-
Lazar Ran, Leyzer Ran
- Abstract
Leyzer Ran: "Bin" ("Bee") – Organization of the Sabras of Jerusalem of Yiddish
- Published
- 1978
27. walking to caesarea.
- Author
-
Ran, Ami Ran
- Published
- 2020
28. High Myopia Normative Database of Peripapillary Retinal Nerve Fiber Layer Thickness to Detect Myopic Glaucoma in a Chinese Population.
- Author
-
Song, Yunhe, Li, Fei, Chong, Rachel S., Wang, Wei, Ran, An Ran, Lin, Fengbin, Wang, Peiyuan, Wang, Zhenyu, Jiang, Jingwen, Kong, Kangjie, Jin, Ling, Chen, Meiling, Sun, Jian, Wang, Deming, Tham, Clement C., Lam, Dennis S.C., Zangwill, Linda M., Weinreb, Robert N., Aung, Tin, and Jonas, Jost B.
- Subjects
- *
DATABASES , *NERVE fibers , *CHINESE people , *MYOPIA , *GLAUCOMA , *RETINAL blood vessels - Abstract
To develop and validate the performance of a high myopia (HM)-specific normative database of peripapillary retinal nerve fiber layer (pRNFL) thickness in differentiating HM from highly myopic glaucoma (HMG). Cross-sectional multicenter study. A total of 1367 Chinese participants (2325 eyes) with nonpathologic HM or HMG were included from 4 centers. After quality control, 1108 eyes from 694 participants with HM were included in the normative database; 459 eyes from 408 participants (323 eyes with HM and 136 eyes with HMG) and 322 eyes from 197 participants (131 eyes with HM and 191 eyes with HMG) were included in the internal and external validation sets, respectively. Only HMG eyes with an intraocular pressure > 21 mmHg were included. The pRNFL thickness was measured with swept-source (SS) OCT. Four strategies of pRNFL-specified values were examined, including global and quadrantic pRNFL thickness below the lowest fifth or the lowest first percentile of the normative database. The accuracy, sensitivity, and specificity of the HM-specific normative database for detecting HMG. Setting the fifth percentile of the global pRNFL thickness as the threshold, using the HM-specific normative database, we achieved an accuracy of 0.93 (95% confidence interval [CI], 0.90–0.95) and 0.85 (95% CI, 0.81–0.89), and, using the first percentile as the threshold, we achieved an accuracy of 0.85 (95% CI, 0.81–0.88) and 0.70 (95% CI, 0.65–0.75) in detecting HMG in the internal and external validation sets, respectively. The fifth percentile of the global pRNFL thickness achieved high sensitivities of 0.75 (95% CI, 0.67–0.82) and 0.75 (95% CI, 0.68–0.81) and specificities of 1.00 (95% CI, 0.99–1.00) and 1.00 (95% CI, 0.97–1.00) in the internal and external validation datasets, respectively.
Compared with the built-in database of the OCT device, the HM-specific normative database showed higher sensitivity and specificity at the corresponding pRNFL thickness thresholds below the fifth or first percentile (P < 0.001 for all). The HM-specific normative database is more capable of detecting HMG eyes than the SS OCT built-in database and may be an effective tool for the differential diagnosis of HMG versus HM. Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. Towards multi-center glaucoma OCT image screening with semi-supervised joint structure and function multi-task learning.
- Author
-
Wang, Xi, Chen, Hao, Ran, An-Ran, Luo, Luyang, Chan, Poemen P., Tham, Clement C., Chang, Robert T., Mannil, Suria S., Cheung, Carol Y., and Heng, Pheng-Ann
- Subjects
- *
PERIMETRY , *GLAUCOMA , *OPTICAL coherence tomography , *RETROLENTAL fibroplasia , *REGRESSION analysis , *VISUAL fields , *RECEIVER operating characteristic curves - Abstract
• It is the first study to unify structure analysis and function regression for glaucoma screening from OCT images. • A semi-supervised smoothness assumption is made to solve the missing-regression-label problem. • A multi-task learning network is proposed to explore the structure-function relationship for glaucoma screening. • Extensive experiments on large-scale multi-center datasets demonstrate the effectiveness of the multi-task learning model. Glaucoma is the leading cause of irreversible blindness in the world. Structure and function assessments play an important role in diagnosing glaucoma. Optical coherence tomography (OCT) imaging has gained increasing popularity for measuring structural changes in the eye, yet few automated methods for glaucoma screening have been developed based on OCT images. In this paper, we are the first to unify structure analysis and function regression to distinguish glaucoma patients from normal controls effectively. Specifically, our method works in two steps: a semi-supervised learning strategy with a smoothness assumption is first applied for surrogate assignment of the missing function-regression labels. Subsequently, the proposed multi-task learning network explores the structure-function relationship between the OCT image and the visual field measurement simultaneously, which contributes to improved classification performance. It is also worth noting that the proposed method is assessed on two large-scale multi-center datasets: we first build the largest glaucoma OCT image dataset (i.e., the HK dataset), involving 975,400 B-scans from 4,877 volumes, to develop and evaluate the proposed method; the model, without further fine-tuning, is then directly applied to another independent dataset (i.e., the Stanford dataset) containing 246,200 B-scans from 1,231 volumes. Extensive experiments are conducted to assess the contribution of each component within our framework.
The proposed method outperforms the baseline methods and two glaucoma experts by a large margin, achieving volume-level Area Under the ROC Curve (AUC) values of 0.977 on the HK dataset and 0.933 on the Stanford dataset, respectively. The experimental results indicate the great potential of the proposed approach for automated diagnosis systems. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
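The joint structure-function objective described in the abstract above can be illustrated with a toy loss that combines classification cross-entropy with a visual-field regression term, masking the regression term when perimetry is missing. This is a hedged sketch of the general idea (the weighting `alpha` and the scalar VF target are assumptions), not the paper's actual loss:

```python
import math

def multitask_loss(cls_logit, cls_label, vf_pred, vf_label, alpha=1.0):
    """Cross-entropy for glaucoma classification plus MSE for visual-field
    regression; the MSE term is skipped (masked) when the volume has no
    visual-field measurement, as in the semi-supervised setting."""
    p = 1.0 / (1.0 + math.exp(-cls_logit))   # sigmoid of the raw logit
    ce = -(cls_label * math.log(p) + (1 - cls_label) * math.log(1 - p))
    mse = (vf_pred - vf_label) ** 2 if vf_label is not None else 0.0
    return ce + alpha * mse
```

In the semi-supervised setup the abstract describes, missing `vf_label` values would first be filled with surrogate labels propagated under the smoothness assumption, after which every sample contributes to both terms.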
30. Turbulent wind waves on a water current
- Author
-
P. B. Rutkevich, M. V. Zavolgensky, EGU, Publication, Water problems Institute RAN (IVP RAN ZD), Space Research Institute of the Russian Academy of Sciences (IKI), and Russian Academy of Sciences [Moscow] (RAS)
- Subjects
Physics ,lcsh:Dynamic and structural geology ,Wave propagation ,Infragravity wave ,lcsh:QE1-996.5 ,[SDU.STU]Sciences of the Universe [physics]/Earth Sciences ,General Medicine ,Mechanics ,Geophysics ,lcsh:Geology ,Love wave ,symbols.namesake ,lcsh:QE500-639.5 ,Wind wave ,symbols ,[SDU.STU] Sciences of the Universe [physics]/Earth Sciences ,lcsh:Q ,Gravity wave ,Rayleigh wave ,Mechanical wave ,lcsh:Science ,Longitudinal wave - Abstract
An analytical model of water waves generated by wind over the water surface is presented. A simple method for modeling wind waves is described, based on wave-length diagrams, azimuthal hodographs of wave velocities, and related constructions. Properties of the generated waves are described: the wave length and wave velocity are obtained as functions of the azimuth of wave propagation and the growth rate. Motionless waves dynamically trapped within the general picture of three-dimensional waves are also described. The gravitational force does not enter the three-dimensional dynamics of these turbulent wind waves, which is why they have a turbulent rather than gravitational nature. Langmuir stripes are modeled naturally, and the existence of rogue waves is proved theoretically.
- Published
- 2008
31. Developing a privacy-preserving deep learning model for glaucoma detection: a multicentre study with federated learning.
- Author
-
Ran AR, Wang X, Chan PP, Wong MOM, Yuen H, Lam NM, Chan NCY, Yip WWK, Young AL, Yung HW, Chang RT, Mannil SS, Tham YC, Cheng CY, Wong TY, Pang CP, Heng PA, Tham CC, and Cheung CY
- Subjects
- Humans, Male, Female, Middle Aged, Intraocular Pressure physiology, Aged, Algorithms, Prospective Studies, Deep Learning, Tomography, Optical Coherence methods, Glaucoma diagnosis, Glaucoma physiopathology, Optic Disk diagnostic imaging, Optic Disk pathology
- Abstract
Background: Deep learning (DL) is promising to detect glaucoma. However, patients' privacy and data security are major concerns when pooling all data for model development. We developed a privacy-preserving DL model using the federated learning (FL) paradigm to detect glaucoma from optical coherence tomography (OCT) images., Methods: This is a multicentre study. The FL paradigm consisted of a 'central server' and seven eye centres in Hong Kong, the USA and Singapore. Each centre first trained a model locally with its own OCT optic disc volumetric dataset and then uploaded its model parameters to the central server. The central server used the FedProx algorithm to aggregate all centres' model parameters. Subsequently, the aggregated parameters were redistributed to each centre for its local model optimisation. We experimented with three three-dimensional (3D) networks to evaluate the stability of the FL paradigm. Lastly, we tested the FL model on two prospectively collected unseen datasets., Results: We used 9326 volumetric OCT scans from 2785 subjects. The FL model performed consistently well with different networks in 7 centres (accuracies 78.3%-98.5%, 75.9%-97.0%, and 78.3%-97.5%, respectively) and stably in the 2 unseen datasets (accuracies 84.8%-87.7%, 81.3%-84.8%, and 86.0%-87.8%, respectively). The FL model achieved non-inferior performance in classifying glaucoma compared with the traditional model and significantly outperformed the individual models., Conclusion: The 3D FL model could leverage all the datasets and achieve generalisable performance, without data exchange across centres. This study demonstrated an OCT-based FL paradigm for glaucoma identification with ensured patient privacy and data security, charting another course toward the real-world transition of artificial intelligence in ophthalmology., Competing Interests: Competing interests: None declared., (© Author(s) (or their employer(s)) 2024. No commercial re-use. See rights and permissions.
Published by BMJ.)
- Published
- 2024
- Full Text
- View/download PDF
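The server-side aggregation step described in the abstract above (FedProx, which generalises FedAvg with a client-side proximal term) can be sketched as follows. This is a minimal illustration under an assumed dataset-size weighting, not the study's implementation:

```python
import numpy as np

def aggregate(client_params, client_sizes):
    """Server-side step: average each parameter tensor across centres,
    weighted by local dataset size (as in FedAvg/FedProx)."""
    total = sum(client_sizes)
    agg = {}
    for name in client_params[0]:
        agg[name] = sum(
            (n / total) * params[name]
            for params, n in zip(client_params, client_sizes)
        )
    return agg

def prox_penalty(local, global_, mu=0.01):
    """FedProx client-side proximal term, (mu/2) * ||w - w_global||^2,
    which discourages local models from drifting from the global model."""
    return 0.5 * mu * sum(
        float(np.sum((local[k] - global_[k]) ** 2)) for k in local
    )
```

Each round, every centre minimises its local loss plus `prox_penalty` against the last aggregated parameters, then uploads its updated parameters for the next `aggregate` call; raw OCT scans never leave the centre.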
32. Deep learning-based image quality assessment for optical coherence tomography macular scans: a multicentre study.
- Author
-
Tang Z, Wang X, Ran AR, Yang D, Ling A, Yam JC, Zhang X, Szeto SKH, Chan J, Wong CYK, Hui VWK, Chan CKM, Wong TY, Cheng CY, Sabanayagam C, Tham YC, Liew G, Anantharaman G, Raman R, Cai Y, Che H, Luo L, Liu Q, Wong YL, Ngai AKY, Yuen VL, Kei N, Lai TYY, Chen H, Tham CC, Heng PA, and Cheung CY
- Abstract
Aims: To develop and externally test deep learning (DL) models for assessing the image quality of three-dimensional (3D) macular scans from Cirrus and Spectralis optical coherence tomography devices., Methods: We retrospectively collected two data sets including 2277 Cirrus 3D scans and 1557 Spectralis 3D scans, respectively, for training (70%), fine-tuning (10%) and internal validation (20%) from electronic medical and research records at The Chinese University of Hong Kong Eye Centre and the Hong Kong Eye Hospital. Scans with various eye diseases (eg, diabetic macular oedema, age-related macular degeneration, polypoidal choroidal vasculopathy and pathological myopia), and scans of normal eyes from adults and children were included. Two graders labelled each 3D scan as gradable or ungradable, according to standardised criteria. We used a 3D version of the residual network (ResNet)-18 for Cirrus 3D scans and a multiple-instance learning pipeline with ResNet-18 for Spectralis 3D scans. The two DL models were further tested on three unseen Cirrus data sets from Singapore and five unseen Spectralis data sets from India, Australia and Hong Kong, respectively., Results: In the internal validation, the models achieved areas under the curve (AUCs) of 0.930 (0.885-0.976) and 0.906 (0.863-0.948) for assessing the Cirrus 3D scans and Spectralis 3D scans, respectively. In the external testing, the models showed robust performance with AUCs ranging from 0.832 (0.730-0.934) to 0.930 (0.906-0.953) and 0.891 (0.836-0.945) to 0.962 (0.918-1.000), respectively., Conclusions: Our models could be used for filtering out ungradable 3D scans and further incorporated with a disease-detection DL model, allowing a fully automated eye disease detection workflow., Competing Interests: Competing interests: None declared., (© Author(s) (or their employer(s)) 2023. No commercial re-use. See rights and permissions. Published by BMJ.)
- Published
- 2024
- Full Text
- View/download PDF
33. Clinically relevant factors associated with a binary outcome of diabetic macular ischaemia: an OCTA study.
- Author
-
Yang DW, Tang ZQ, Tang FY, Szeto SK, Chan J, Yip F, Wong CY, Ran AR, Lai TY, and Cheung CY
- Subjects
- Humans, Fluorescein Angiography methods, Retinal Vessels, Retina, Tomography, Optical Coherence methods, Ischemia diagnosis, Diabetic Retinopathy diagnosis, Diabetes Mellitus
- Abstract
Aims: We investigated the demographic, ocular, diabetes-related and systemic factors associated with a binary outcome of diabetic macular ischaemia (DMI) as assessed by optical coherence tomography angiography (OCTA) evaluation of non-perfusion at the level of the superficial capillary plexus (SCP) and deep capillary plexus (DCP) in a cohort of patients with diabetes mellitus (DM)., Materials and Methods: 617 patients with DM were recruited from July 2015 to December 2020 at the Chinese University of Hong Kong Eye Centre. Image quality (gradable or ungradable for assessing DMI) and DMI (presence or absence) were assessed at the level of the SCP and DCP by OCTA., Results: 1107 eyes from 593 subjects were included in the final analysis. 560 (50.59%) eyes had DMI at the level of the SCP, and 647 (58.45%) eyes had DMI at the level of the DCP. Among eyes without diabetic retinopathy (DR), DMI was observed in 19.40% and 24.13% of eyes at the SCP and DCP, respectively. In the multivariable logistic regression models, older age, poorer visual acuity, thinner ganglion cell-inner plexiform layer thickness, worsened DR severity, higher haemoglobin A1c level, lower estimated glomerular filtration rate and higher low-density lipoprotein cholesterol level were associated with SCP-DMI. In addition to the aforementioned factors, presence of diabetic macular oedema and shorter axial length were associated with DCP-DMI., Conclusion: We reported a series of factors associated with SCP-DMI and DCP-DMI. The binary outcome of DMI might enable a simplified OCTA-based DMI evaluation before subsequent quantitative analysis of DMI extent, and supports the need for an updated diabetic retinal disease staging system implemented with OCTA., Competing Interests: Competing interests: None declared., (© Author(s) (or their employer(s)) 2023. No commercial re-use. See rights and permissions. Published by BMJ.)
- Published
- 2023
- Full Text
- View/download PDF
34. Federated Learning in Ocular Imaging: Current Progress and Future Direction.
- Author
-
Nguyen TX, Ran AR, Hu X, Yang D, Jiang M, Dou Q, and Cheung CY
- Abstract
Advances in artificial intelligence deep learning (DL) have made tremendous impacts on the field of ocular imaging over the last few years. Specifically, DL has been utilised to detect and classify various ocular diseases on retinal photographs, optical coherence tomography (OCT) images, and OCT-angiography images. In order to achieve good robustness and generalisability of model performance, DL training strategies traditionally require extensive and diverse training datasets from various sites to be transferred and pooled into a "centralised location". However, such a data transferring process could raise practical concerns related to data security and patient privacy. Federated learning (FL) is a distributed collaborative learning paradigm which enables the coordination of multiple collaborators without the need for sharing confidential data. This distributed training approach has great potential to ensure data privacy among different institutions and reduce the potential risk of data leakage from data pooling or centralisation. This review article aims to introduce the concept of FL, provide current evidence of FL in ocular imaging, and discuss potential challenges as well as future applications.
- Published
- 2022
- Full Text
- View/download PDF
35. Three-Dimensional Multi-Task Deep Learning Model to Detect Glaucomatous Optic Neuropathy and Myopic Features From Optical Coherence Tomography Scans: A Retrospective Multi-Centre Study.
- Author
-
Ran AR, Wang X, Chan PP, Chan NC, Yip W, Young AL, Wong MOM, Yung HW, Chang RT, Mannil SS, Tham YC, Cheng CY, Chen H, Li F, Zhang X, Heng PA, Tham CC, and Cheung CY
- Abstract
Purpose: We aim to develop a multi-task three-dimensional (3D) deep learning (DL) model to detect glaucomatous optic neuropathy (GON) and myopic features (MF) simultaneously from spectral-domain optical coherence tomography (SDOCT) volumetric scans., Methods: Each volumetric scan was labelled as GON according to the criteria of retinal nerve fibre layer (RNFL) thinning, with a structural defect that correlated in position with the visual field defect (i.e., reference standard). MF were graded from the SDOCT en face images, defined as the presence of peripapillary atrophy (PPA), optic disc tilting, or fundus tessellation. The multi-task DL model was developed based on ResNet with outputs of Yes/No GON and Yes/No MF. SDOCT scans were collected in a tertiary eye hospital (Hong Kong SAR, China) for training (80%), tuning (10%), and internal validation (10%). External testing was performed on five independent datasets from eye centres in Hong Kong, the United States, and Singapore. For GON detection, we compared the model to the average RNFL thickness measurement generated from the SDOCT device. To investigate whether MF can affect the model's performance on GON detection, we conducted subgroup analyses in groups stratified by Yes/No MF. The area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, and accuracy were reported., Results: A total of 8,151 SDOCT volumetric scans from 3,609 eyes were collected. For detecting GON in the internal validation, the proposed 3D model had a significantly higher AUROC than average RNFL thickness (0.949 vs. 0.913, p < 0.001) in discriminating GON from normal. In the external testing, the two approaches had comparable performance. In the subgroup analysis, the multi-task DL model performed significantly better in the "no MF" group (0.883 vs. 0.965, p < 0.001) in one external testing dataset, with no significant difference in the internal validation and the other external testing datasets.
The multi-task DL model's performance to detect MF was also generalizable in all datasets, with the AUROC values ranging from 0.855 to 0.896., Conclusion: The proposed multi-task 3D DL model demonstrated high generalizability in all the datasets and the presence of MF did not affect the accuracy of GON detection generally., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2022 Ran, Wang, Chan, Chan, Yip, Young, Wong, Yung, Chang, Mannil, Tham, Cheng, Chen, Li, Zhang, Heng, Tham and Cheung.)
- Published
- 2022
- Full Text
- View/download PDF
36. Deep Learning for Glaucoma Detection and Identification of Novel Diagnostic Areas in Diverse Real-World Datasets.
- Author
-
Noury E, Mannil SS, Chang RT, Ran AR, Cheung CY, Thapa SS, Rao HL, Dasari S, Riyazuddin M, Chang D, Nagaraj S, Tham CC, and Zadeh R
- Subjects
- Humans, Deep Learning, Glaucoma diagnosis, Myopia, Optic Disk diagnostic imaging, Optic Nerve Diseases diagnosis
- Abstract
Purpose: To develop a three-dimensional (3D) deep learning algorithm to detect glaucoma using spectral-domain optical coherence tomography (SD-OCT) optic nerve head (ONH) cube scans and validate its performance on ethnically diverse real-world datasets and on cropped ONH scans., Methods: In total, 2461 Cirrus SD-OCT ONH scans of 1012 eyes were obtained from the Glaucoma Clinic Imaging Database at the Byers Eye Institute, Stanford University, from March 2010 to December 2017. A 3D deep neural network was trained and tested on this unique raw OCT cube dataset to identify a multimodal definition of glaucoma excluding other concomitant retinal disease and optic neuropathies. A total of 1022 scans of 363 glaucomatous eyes (207 patients) and 542 scans of 291 normal eyes (167 patients) from Stanford were included in training, and 142 scans of 48 glaucomatous eyes (27 patients) and 61 scans of 39 normal eyes (23 patients) were included in the validation set. A total of 3371 scans (Cirrus SD-OCT) from four different countries were used for evaluation of the model: the non overlapping test dataset from Stanford (USA) consisted of 694 scans: 241 scans from 113 normal eyes of 66 patients and 453 scans of 157 glaucomatous eyes of 89 patients. The datasets from Hong Kong (total of 1625 scans; 666 OCT scans from 196 normal eyes of 99 patients and 959 scans of 277 glaucomatous eyes of 155 patients), India (total of 672 scans; 211 scans from 147 normal eyes of 98 patients and 461 scans from 171 glaucomatous eyes of 101 patients), and Nepal (total of 380 scans; 158 scans from 143 normal eyes of 89 patients and 222 scans from 174 glaucomatous eyes of 109 patients) were used for external evaluation. The performance of the model was then evaluated on manually cropped scans from Stanford using a new algorithm called DiagFind. 
The ONH region was cropped by identifying the appropriate zone of the image in the expected location relative to Bruch's Membrane Opening (BMO) using a commercially available imaging software. Subgroup analyses were performed in groups stratified by eyes, myopia severity of glaucoma, and on a set of glaucoma cases without field defects. Saliency maps were generated to highlight the areas the model used to make a prediction. The model's performance was compared to that of a glaucoma specialist using all available information on a subset of cases., Results: The 3D deep learning system achieved area under the curve (AUC) values of 0.91 (95% CI, 0.90-0.92), 0.80 (95% CI, 0.78-0.82), 0.94 (95% CI, 0.93-0.96), and 0.87 (95% CI, 0.85-0.90) on Stanford, Hong Kong, India, and Nepal datasets, respectively, to detect perimetric glaucoma and AUC values of 0.99 (95% CI, 0.97-1.00), 0.96 (95% CI, 0.93-1.00), and 0.92 (95% CI, 0.89-0.95) on severe, moderate, and mild myopia cases, respectively, and an AUC of 0.77 on cropped scans. The model achieved an AUC value of 0.92 (95% CI, 0.90-0.93) versus that of the human grader with an AUC value of 0.91 on the same subset of scans (P = 0.99). The performance of the model in terms of recall on glaucoma cases without field defects was found to be 0.76 (0.68-0.85). Saliency maps highlighted the lamina cribrosa in glaucomatous eyes versus superficial retina in normal eyes as the regions associated with classification., Conclusions: A 3D convolutional neural network (CNN) trained on SD-OCT ONH cubes can distinguish glaucoma from normal cases in diverse datasets obtained from four different countries.
The model trained on additional random cropping data augmentation performed reasonably on manually cropped scans, indicating the importance of lamina cribrosa in glaucoma detection., Translational Relevance: A 3D CNN trained on SD-OCT ONH cubes was developed to detect glaucoma in diverse datasets obtained from four different countries and on cropped scans. The model identified lamina cribrosa as the region associated with glaucoma detection.
- Published
- 2022
- Full Text
- View/download PDF
37. Detection of Diabetic Retinopathy from Ultra-Widefield Scanning Laser Ophthalmoscope Images: A Multicenter Deep Learning Analysis.
- Author
-
Tang F, Luenam P, Ran AR, Quadeer AA, Raman R, Sen P, Khan R, Giridhar A, Haridas S, Iglicki M, Zur D, Loewenstein A, Negri HP, Szeto S, Lam BKY, Tham CC, Sivaprasad S, Mckay M, and Cheung CY
- Subjects
- Cross-Sectional Studies, Equipment Design, Female, Humans, Male, Middle Aged, ROC Curve, Deep Learning, Diabetic Retinopathy diagnosis, Neural Networks, Computer, Ophthalmoscopes, Ophthalmoscopy methods
- Abstract
Purpose: To develop a deep learning (DL) system that can detect referable diabetic retinopathy (RDR) and vision-threatening diabetic retinopathy (VTDR) from images obtained on ultra-widefield scanning laser ophthalmoscope (UWF-SLO)., Design: Observational, cross-sectional study., Participants: A total of 9392 UWF-SLO images of 1903 eyes from 1022 subjects with diabetes from Hong Kong, the United Kingdom, India, and Argentina., Methods: All images were labeled according to the presence or absence of RDR and the presence or absence of VTDR. Labeling was performed by retina specialists from fundus examination, according to the International Clinical Diabetic Retinopathy Disease Severity Scale. Three convolutional neural networks (ResNet50) were trained with a transfer-learning procedure for assessing gradability and identifying VTDR and RDR. External validation was performed on 4 datasets spanning different geographical regions., Main Outcome Measures: Area under the receiver operating characteristic curve (AUROC); area under the precision-recall curve (AUPRC); sensitivity, specificity, and accuracy of the DL system in gradability assessment; and detection of RDR and VTDR., Results: For gradability assessment, the system achieved an AUROC of 0.923 (95% confidence interval [CI], 0.892-0.947), sensitivity of 86.5% (95% CI, 77.6-92.8), and specificity of 82.1% (95% CI, 77.3-86.2) for the primary validation dataset, and >0.82 AUROCs, >79.6% sensitivity, and >70.4% specificity for the geographical external validation datasets. For detecting RDR and VTDR, the AUROCs were 0.981 (95% CI, 0.977-0.984) and 0.966 (95% CI, 0.961-0.971), with sensitivities of 94.9% (95% CI, 92.3-97.9) and 87.2% (95% CI, 81.5-91.6), specificities of 95.1% (95% CI, 90.6-97.9) and 95.8% (95% CI, 93.3-97.6), and positive predictive values (PPVs) of 98.0% (95% CI, 96.1-99.0) and 91.1% (95% CI, 86.3-94.3) for the primary validation dataset, respectively. 
The AUROCs and accuracies for detecting both RDR and VTDR were >0.9 and >80%, respectively, for the geographical external validation datasets. The AUPRCs were >0.9, and sensitivities, specificities, and PPVs were >80% for the geographical external validation datasets for RDR and VTDR detection., Conclusions: The excellent performance of this DL system for image quality assessment and for detection of RDR and VTDR in UWF-SLO images highlights its potential as an efficient and effective diabetic retinopathy screening tool., (Copyright © 2021 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.)
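The sensitivity, specificity, and PPV figures quoted above all depend on fixing a decision threshold on the network's output score. A minimal sketch of how such operating-point metrics fall out of the confusion matrix (the threshold value is illustrative, not the one used in the study):

```python
import numpy as np

def operating_point_metrics(y_true, y_score, threshold):
    """Sensitivity, specificity, and PPV at a fixed decision threshold."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_score, dtype=float) >= threshold
    tp = np.sum(y_pred & y_true)    # referable, flagged
    tn = np.sum(~y_pred & ~y_true)  # non-referable, not flagged
    fp = np.sum(y_pred & ~y_true)   # non-referable, flagged
    fn = np.sum(~y_pred & y_true)   # referable, missed
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv
```

Sweeping the threshold over all observed scores traces out the ROC curve whose area is the reported AUROC; the precision-recall analogue gives the AUPRC.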
- Published
- 2021
- Full Text
- View/download PDF
38. A 3D Deep Learning System for Detecting Referable Glaucoma Using Full OCT Macular Cube Scans.
- Author
-
Russakoff DB, Mannil SS, Oakley JD, Ran AR, Cheung CY, Dasari S, Riyazzuddin M, Nagaraj S, Rao HL, Chang D, and Chang RT
- Subjects
- Humans, Tomography, Optical Coherence, Deep Learning, Glaucoma diagnosis, Macula Lutea diagnostic imaging, Optic Nerve Diseases
- Abstract
Purpose: The purpose of this study was to develop a 3D deep learning system from spectral domain optical coherence tomography (SD-OCT) macular cubes to differentiate between referable and nonreferable cases for glaucoma, and to apply it to real-world datasets to understand how this affects performance., Methods: A total of 2805 Cirrus optical coherence tomography (OCT) macula volumes (Macula protocol 512 × 128) of 1095 eyes from 586 patients at a single site were used to train a fully 3D convolutional neural network (CNN). Referable glaucoma included true glaucoma, pre-perimetric glaucoma, and high-risk suspects, based on qualitative fundus photographs, visual fields, OCT reports, and clinical examinations, including intraocular pressure (IOP) and treatment history, as the binary (two-class) ground truth. The curated real-world dataset did not include eyes with retinal disease or nonglaucomatous optic neuropathies. The cubes were first homogenized using layer segmentation with the Orion Software (Voxeleron) to achieve standardization. The algorithm was tested on two separate external validation sets from different glaucoma studies, comprising Cirrus macular cube scans of 505 and 336 eyes, respectively., Results: The area under the receiver operating characteristic (AUROC) curve for the development dataset for distinguishing referable glaucoma was 0.88 for our CNN using homogenization, 0.82 without homogenization, and 0.81 for a CNN architecture from the existing literature. For the external validation datasets, which had different glaucoma definitions, the AUCs were 0.78 and 0.95, respectively. 
Model performance across the myopia severity distribution was assessed in the United States dataset, yielding AUCs of 0.85, 0.92, and 0.95 in severe, moderate, and mild myopia, respectively., Conclusions: A 3D deep learning algorithm trained on macular OCT volumes without retinal disease to detect referable glaucoma performs better with retinal segmentation preprocessing and performs reasonably well across all levels of myopia., Translational Relevance: Interpretation of OCT macula volumes based on normative data color distributions is highly influenced by population demographics and characteristics, such as refractive error, as well as the size of the normative database. Referable glaucoma, in this study, was chosen to include cases that should be seen by a specialist. This study is unique because it uses multimodal patient data for the glaucoma definition, includes all severities of myopia, and validates the algorithm with international data to assess its generalizability., Competing Interests: Disclosure: D.B. Russakoff, None; S.S. Mannil, None; J.D. Oakley, None; A.R. Ran, None; C.Y. Cheung, None; S. Dasari, None; M. Riyazzuddin, None; S. Nagaraj, None; H.L. Rao, None; D. Chang, None; R.T. Chang, None, (Copyright 2020 The Authors.)
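The "homogenization" step described in the Methods aligns each OCT cube axially using a segmented retinal layer before training. The study used the Orion software for segmentation; the NumPy helper below is a hypothetical stand-in that only illustrates the axial-alignment idea, given a per-A-scan layer depth map from some segmentation step.

```python
import numpy as np

def flatten_to_layer(volume, layer_depth, target_depth):
    """Axially shift each A-scan so a segmented layer sits at a fixed depth.

    volume: (x, y, z) OCT cube of intensities.
    layer_depth: (x, y) array giving the z index of the segmented surface in
    each A-scan (hypothetical output of a layer-segmentation step).
    target_depth: z index at which the layer should sit after flattening.
    """
    out = np.zeros_like(volume)
    for i in range(volume.shape[0]):
        for j in range(volume.shape[1]):
            shift = target_depth - int(layer_depth[i, j])
            # circular shift along depth; real pipelines would pad instead of wrap
            out[i, j] = np.roll(volume[i, j], shift)
    return out
```

Standardizing geometry this way removes anatomy-driven variation in layer position, which is consistent with the reported AUROC gain (0.88 with homogenization versus 0.82 without).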
- Published
- 2020
- Full Text
- View/download PDF
39. Artificial intelligence deep learning algorithm for discriminating ungradable optical coherence tomography three-dimensional volumetric optic disc scans.
- Author
-
Ran AR, Shi J, Ngai AK, Chan WY, Chan PP, Young AL, Yung HW, Tham CC, and Cheung CY
- Abstract
Spectral-domain optical coherence tomography (SDOCT) is a noncontact and noninvasive imaging technology offering three-dimensional (3-D), objective, and quantitative assessment of the optic nerve head (ONH) in human eyes in vivo. The image quality of SDOCT scans is crucial for an accurate and reliable interpretation of ONH structure and for further detection of diseases. Traditionally, signal strength (SS) is used as an index to include or exclude SDOCT scans for further analysis. However, it is insufficient for assessing other image quality issues such as off-centration, out of registration, missing data, motion artifacts, mirror artifacts, or blurriness, which require specialized SDOCT knowledge to assess. We proposed a deep learning system (DLS) as an automated tool for filtering out ungradable SDOCT volumes. In total, 5599 SDOCT ONH volumes were collected for training (80%) and primary validation (20%). Another 711 and 298 volumes from two independent datasets, respectively, were used for external validation. An SDOCT volume was labeled as ungradable when SS was <5 or when any artifacts influenced the measurement circle or >25% of the peripheral area. Artifacts included (1) off-centration, (2) out of registration, (3) missing signal, (4) motion artifacts, (5) mirror artifacts, and (6) blurriness. An SDOCT volume was labeled as gradable when SS was ≥5 and there was an absence of any artifacts, or artifacts influenced only <25% of the peripheral area but not the retinal nerve fiber layer calculation circle. We developed and validated a 3-D DLS based on squeeze-and-excitation ResNeXt blocks and experimented with different training strategies. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy were calculated to evaluate the performance. Heatmaps were generated by gradient-weighted class activation mapping. 
Our findings show that the presented DLS achieved good performance in both primary and external validation, and could potentially increase the efficiency and accuracy of quality control for SDOCT volumetric scans by filtering out ungradable ones automatically., (© The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.)
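The squeeze-and-excitation blocks mentioned in the abstract recalibrate feature channels: spatial information is "squeezed" into one scalar per channel, a small bottleneck produces per-channel gates in (0, 1), and the features are rescaled. A plain-NumPy sketch of that step for a 3-D (C, D, H, W) feature volume follows; the weight shapes and reduction ratio are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def squeeze_excite(x, w1, b1, w2, b2):
    """Channel recalibration as in a squeeze-and-excitation block.

    x: (C, D, H, W) feature volume.
    w1: (C, C//r) and w2: (C//r, C) bottleneck weights with reduction ratio r.
    """
    z = x.mean(axis=(1, 2, 3))                 # squeeze: global average pool per channel
    s = np.maximum(z @ w1 + b1, 0.0)           # excitation: FC + ReLU bottleneck
    s = 1.0 / (1.0 + np.exp(-(s @ w2 + b2)))   # FC + sigmoid gate in (0, 1)
    return x * s[:, None, None, None]          # rescale each channel by its gate
```

In the full network these gates are learned jointly with the ResNeXt convolutions, letting the model emphasize channels that are informative for gradability and suppress the rest.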
- Published
- 2019
- Full Text
- View/download PDF
40. The association of TGFB1 genetic polymorphisms with high myopia: a systematic review and meta-analysis.
- Author
-
Meng B, Li SM, Yang Y, Yang ZR, Sun F, Kang MT, Sun YY, Ran AR, Wang JN, Yan R, Bai YW, Wang NL, and Zhan SY
- Abstract
Objective: The TGFB1 gene is among the most studied genes in high myopia due to its role in scleral remodeling, but reported findings on the association between TGFB1 and high myopia are inconsistent. The present study evaluates the association between TGFB1 polymorphisms and high myopia., Methods: A comprehensive literature search was conducted on studies published up to April 5, 2015. Summary odds ratios (ORs) and 95% confidence intervals were analyzed. Heterogeneity across studies was evaluated by the Cochran Q statistic and the I² index. Sensitivity analyses were conducted by removing one study at a time to assess the influence of each single study on the combined effect., Results: Eight studies were included in the meta-analysis. Rs1982073 was associated with high myopia in the dominant model (OR=1.64; 95% CI=1.04~2.58; P<0.05), heterozygous model (OR=1.54; 95% CI=1.02~2.33; P<0.05), homozygous model (OR=1.90; 95% CI=1.01~3.55; P=0.05), and allelic model (OR=1.36; 95% CI=1.01~1.84; P=0.05); however, these associations did not survive Bonferroni correction. Rs4803455 was associated with high myopia in the recessive model (OR=0.40; 95% CI=0.25~0.64; P<0.01) and homozygous model (OR=0.42; 95% CI=0.26~0.68; P<0.01). Rs1800469 was associated with high myopia in the allelic model (OR=0.78; 95% CI=0.64~0.96; P<0.05). The associations for rs4803455 (P<0.01) and rs1800469 (P<0.05) withstood Bonferroni correction in the models mentioned above., Conclusions: Meta-analysis of existing data revealed a suggestive association of TGFB1 rs1982073 and rs4803455 with high myopia.
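The pooled ORs, Cochran Q, and I² reported above come from standard inverse-variance meta-analysis on the log-OR scale. A minimal fixed-effect sketch (not the authors' code; per-study standard errors are recovered from the 95% CI width, and a random-effects model would additionally estimate between-study variance):

```python
import math

def pool_odds_ratios(ors, cis):
    """Fixed-effect inverse-variance pooling of ORs, with Cochran's Q and I^2.

    ors: per-study odds ratios; cis: matching (lower, upper) 95% CIs.
    The SE of each log OR is recovered as (ln(upper) - ln(lower)) / 3.92.
    """
    logs = [math.log(o) for o in ors]
    ses = [(math.log(hi) - math.log(lo)) / 3.92 for lo, hi in cis]
    weights = [1.0 / se ** 2 for se in ses]               # inverse-variance weights
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, logs))  # Cochran's Q
    df = len(ors) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # I^2 heterogeneity (%)
    se_pooled = (1.0 / sum(weights)) ** 0.5
    ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
    return math.exp(pooled), ci, q, i2
```

When I² is substantial, a random-effects (e.g. DerSimonian-Laird) model is preferred, since the fixed-effect weights understate between-study variation.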
- Published
- 2015