15 results for "Daniel S W Ting"
Search Results
2. Deep-learning-based cardiovascular risk stratification using coronary artery calcium scores predicted from retinal photographs
- Author
-
Tyler Hyungtaek Rim, MD, Chan Joo Lee, MD, Yih-Chung Tham, PhD, Ning Cheung, MD, Marco Yu, PhD, Geunyoung Lee, BS, Youngnam Kim, MS, Daniel S W Ting, MD, Crystal Chun Yuen Chong, BS, Yoon Seong Choi, MD, Tae Keun Yoo, MD, Ik Hee Ryu, MD, Su Jung Baik, MD, Young Ah Kim, MD, Sung Kyu Kim, MD, Prof Sang-Hak Lee, MD, Prof Byoung Kwon Lee, MD, Prof Seok-Min Kang, MD, Edmund Yick Mun Wong, FRCSEd, Prof Hyeon Chang Kim, MD, Prof Sung Soo Kim, MD, Prof Sungha Park, MD, Prof Ching-Yu Cheng, MD, and Prof Tien Yin Wong, MD
- Subjects
Computer applications to medicine. Medical informatics, R858-859.7 - Abstract
Summary: Background: Coronary artery calcium (CAC) score is a clinically validated marker of cardiovascular disease risk. We developed and validated a novel cardiovascular risk stratification system based on deep-learning-predicted CAC from retinal photographs. Methods: We used 216 152 retinal photographs from five datasets from South Korea, Singapore, and the UK to train and validate the algorithms. First, using one dataset from a South Korean health-screening centre, we trained a deep-learning algorithm to predict the probability of the presence of CAC (ie, deep-learning retinal CAC score, RetiCAC). We stratified RetiCAC scores into tertiles and used Cox proportional hazards models to evaluate the ability of RetiCAC to predict cardiovascular events based on external test sets from South Korea, Singapore, and the UK Biobank. We evaluated the incremental values of RetiCAC when added to the Pooled Cohort Equation (PCE) for participants in the UK Biobank. Findings: RetiCAC outperformed all single clinical parameter models in predicting the presence of CAC (area under the receiver operating characteristic curve of 0·742, 95% CI 0·732–0·753). Among the 527 participants in the South Korean clinical cohort, 33 (6·3%) had cardiovascular events during the 5-year follow-up. When compared with the current CAC risk stratification (0, >0–100, and >100), the three-strata RetiCAC showed comparable prognostic performance with a concordance index of 0·71. In the Singapore population-based cohort (n=8551), 310 (3·6%) participants had fatal cardiovascular events over 10 years, and the three-strata RetiCAC was significantly associated with increased risk of fatal cardiovascular events (hazard ratio [HR] trend 1·33, 95% CI 1·04–1·71). In the UK Biobank (n=47 679), 337 (0·7%) participants had fatal cardiovascular events over 10 years. 
When added to the PCE, the three-strata RetiCAC improved cardiovascular risk stratification in the intermediate-risk group (HR trend 1·28, 95% CI 1·07–1·54) and borderline-risk group (1·62, 1·04–2·54), and the continuous net reclassification index was 0·261 (95% CI 0·124–0·364). Interpretation: A deep learning and retinal photograph-derived CAC score is comparable to CT scan-measured CAC in predicting cardiovascular events, and improves on current risk stratification approaches for cardiovascular disease events. These data suggest retinal photograph-based deep learning has the potential to be used as an alternative measure of CAC, especially in low-resource settings. Funding: Yonsei University College of Medicine; Ministry of Health and Welfare, Korea Institute for Advancement of Technology, South Korea; Agency for Science, Technology, and Research; and National Medical Research Council, Singapore.
- Published
- 2021
- Full Text
- View/download PDF
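The abstract above stratifies RetiCAC into tertiles, fits Cox proportional hazards models, and reports a concordance index of 0·71. As an illustrative sketch only (not the authors' code; the follow-up times, event flags, and strata below are toy values), Harrell's concordance index for right-censored survival data can be computed directly from its pairwise definition:

```python
import numpy as np

def harrells_c(time, event, risk):
    """Harrell's concordance index for right-censored data.

    A pair (i, j) is comparable if the subject with the shorter
    observed time had an event; the pair is concordant when that
    subject also has the higher predicted risk (ties count 0.5).
    """
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:  # subject i fails first
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy example: risk strata 0/1/2 (e.g. tertiles), follow-up in years
t = [2.0, 3.5, 5.0, 5.0, 4.2, 1.1]
e = [1, 1, 0, 0, 1, 1]
strata = [2, 1, 0, 1, 0, 2]
print(round(harrells_c(t, e, strata), 3))  # → 0.821
```

In practice a survival library (e.g. lifelines or scikit-survival) would be used; this O(n²) loop just makes the pairwise definition explicit.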
3. Prediction of systemic biomarkers from retinal photographs: development and validation of deep-learning algorithms
- Author
-
Tyler Hyungtaek Rim, MD, Geunyoung Lee, BS, Youngnam Kim, MSc, Yih-Chung Tham, PhD, Chan Joo Lee, MD, Su Jung Baik, MD, Young Ah Kim, PhD, Marco Yu, PhD, Mihir Deshmukh, MSc, Prof Byoung Kwon Lee, MD, Prof Sungha Park, MD, Prof Hyeon Chang Kim, MD, Charumathi Sabanayagam, PhD, Daniel S W Ting, MD, Ya Xing Wang, MD, Prof Jost B Jonas, MD, Sung Soo Kim, MD, Prof Tien Yin Wong, MD, and Prof Ching-Yu Cheng, MD
- Subjects
Computer applications to medicine. Medical informatics, R858-859.7 - Abstract
Summary: Background: The application of deep learning to retinal photographs has yielded promising results in predicting age, sex, blood pressure, and haematological parameters. However, the broader applicability of retinal photograph-based deep learning for predicting other systemic biomarkers, and the generalisability of this approach to various populations, remain unexplored. Methods: With use of 236 257 retinal photographs from seven diverse Asian and European cohorts (two health screening centres in South Korea, the Beijing Eye Study, three cohorts in the Singapore Epidemiology of Eye Diseases study, and the UK Biobank), we evaluated the capacities of 47 deep-learning algorithms to predict 47 systemic biomarkers as outcome variables, including demographic factors (age and sex); body composition measurements; blood pressure; haematological parameters; lipid profiles; biochemical measures; biomarkers related to liver function, thyroid function, kidney function, and inflammation; and diabetes. The standard neural network architecture of VGG16 was adopted for model development. Findings: In addition to previously reported systemic biomarkers, we showed that body composition indices (muscle mass, height, and bodyweight) and creatinine can be quantified from retinal photographs. Body muscle mass could be predicted with an R² of 0·52 (95% CI 0·51–0·53) in the internal test set, and of 0·33 (0·30–0·35) in the one external test set with muscle mass measurement available. The R² value for the prediction of height was 0·42 (0·40–0·43), of bodyweight was 0·36 (0·34–0·37), and of creatinine was 0·38 (0·37–0·40) in the internal test set. However, the performances were poorer in the external test sets (with the lowest performance in the European cohort), with R² values ranging between 0·08 and 0·28 for height, 0·04 and 0·19 for bodyweight, and 0·01 and 0·26 for creatinine.
Of the 47 systemic biomarkers, 37 could not be predicted well from retinal photographs via deep learning (R²≤0·14 across all external test sets). Interpretation: Our work provides new insights into the potential use of retinal photographs to predict systemic biomarkers, including body composition indices and serum creatinine, using deep learning in populations with a similar ethnic background. Further evaluations are warranted to validate these findings and evaluate the clinical utility of these algorithms. Funding: Agency for Science, Technology, and Research and National Medical Research Council, Singapore; Korea Institute for Advancement of Technology.
- Published
- 2020
- Full Text
- View/download PDF
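Prediction quality in the study above is reported throughout as R² (coefficient of determination). A minimal sketch of the metric on hypothetical measured-versus-predicted creatinine values (illustrative numbers, not the study's data):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # unexplained variation
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total variation
    return 1.0 - ss_res / ss_tot

# Hypothetical predicted vs measured creatinine (mg/dL)
measured = [0.8, 1.1, 0.9, 1.4, 1.0, 0.7]
predicted = [0.9, 1.0, 1.0, 1.2, 1.1, 0.8]
print(round(r_squared(measured, predicted), 2))  # → 0.71
```

An R² of 1 would mean perfect prediction; values near 0 (as for many of the 37 poorly predicted biomarkers) mean the model explains little beyond the mean.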
4. Artificial intelligence using deep learning to screen for referable and vision-threatening diabetic retinopathy in Africa: a clinical validation study
- Author
-
Valentina Bellemo, MSc, Zhan W Lim, PhD, Gilbert Lim, PhD, Quang D Nguyen, BEng, Yuchen Xie, MScPH, Michelle Y T Yip, BA, Haslina Hamzah, BSc, Jinyi Ho, DFST, Xin Q Lee, BSc (Hons), Wynne Hsu, PhD, Mong L Lee, PhD, Lillian Musonda, MD, Manju Chandran, FRCOphth, Grace Chipalo-Mutati, FCOphth (ECSA), Mulenga Muma, FCOphth (ECSA), Gavin S W Tan, MD, Sobha Sivaprasad, FRCOphth, Geeta Menon, FRCOphth, Tien Y Wong, MD, and Daniel S W Ting, MD
- Subjects
Computer applications to medicine. Medical informatics, R858-859.7 - Abstract
Summary: Background: Radical measures are required to identify and reduce blindness due to diabetes to achieve the Sustainable Development Goals by 2030. Therefore, we evaluated the accuracy of an artificial intelligence (AI) model using deep learning in a population-based diabetic retinopathy screening programme in Zambia, a lower-middle-income country. Methods: We adopted an ensemble AI model consisting of a combination of two convolutional neural networks (an adapted VGGNet architecture and a residual neural network architecture) for classifying retinal colour fundus images. We trained our model on 76 370 retinal fundus images from 13 099 patients with diabetes who had participated in the Singapore Integrated Diabetic Retinopathy Program between 2010 and 2013, which has been published previously. In this clinical validation study, we included all patients with a diagnosis of diabetes who attended a mobile screening unit in five urban centres in the Copperbelt province of Zambia from Feb 1 to June 30, 2012. In our model, referable diabetic retinopathy was defined as moderate non-proliferative diabetic retinopathy or worse, diabetic macular oedema, or ungradable images. Vision-threatening diabetic retinopathy comprised severe non-proliferative and proliferative diabetic retinopathy. We calculated the area under the curve (AUC), sensitivity, and specificity for referable diabetic retinopathy, and sensitivities for vision-threatening diabetic retinopathy and diabetic macular oedema, compared with grading by retinal specialists. We did a multivariate analysis of systemic risk factors and referable diabetic retinopathy between AI and human graders. Findings: A total of 1574 Zambians with diabetes were prospectively recruited, contributing 4504 retinal fundus images from 3093 eyes. Referable diabetic retinopathy was found in 697 (22·5%) eyes, vision-threatening diabetic retinopathy in 171 (5·5%) eyes, and diabetic macular oedema in 249 (8·1%) eyes.
The AUC of the AI system for referable diabetic retinopathy was 0·973 (95% CI 0·969–0·978), with corresponding sensitivity of 92·25% (90·10–94·12) and specificity of 89·04% (87·85–90·28). Sensitivity for vision-threatening diabetic retinopathy was 99·42% (99·15–99·68) and for diabetic macular oedema was 97·19% (96·61–97·77). The AI model and human graders showed similar outcomes for detected referable diabetic retinopathy prevalence and its associations with systemic risk factors. Both the AI model and human graders identified longer duration of diabetes, higher glycated haemoglobin level, and increased systolic blood pressure as risk factors associated with referable diabetic retinopathy. Interpretation: The AI system shows clinically acceptable performance in detecting referable diabetic retinopathy, vision-threatening diabetic retinopathy, and diabetic macular oedema in population-based diabetic retinopathy screening. This shows the potential for application and adoption of such AI technology in an under-resourced African population to reduce the incidence of preventable blindness, even when the model is trained in a different population. Funding: National Medical Research Council Health Service Research Grant, Large Collaborative Grant, Ministry of Health, Singapore; the SingHealth Foundation; and the Tanoto Foundation.
- Published
- 2019
- Full Text
- View/download PDF
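Sensitivity and specificity, as reported in the validation above, are simple ratios over the reference-standard grading. A sketch with hypothetical confusion-matrix counts (chosen for illustration to be of similar magnitude to the paper's 3093 eyes and 697 referable cases, not taken from the paper):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a referable-DR classifier
tp, fn = 643, 54     # of 697 truly referable eyes
tn, fp = 2134, 262   # of 2396 truly non-referable eyes
se, sp = sens_spec(tp, fn, tn, fp)
print(f"sensitivity {se:.2%}, specificity {sp:.2%}")
```

The confidence intervals quoted in the abstract would additionally come from a binomial interval (e.g. Wilson) around each proportion.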
5. Deep learning for detection of Fuchs endothelial dystrophy from widefield specular microscopy imaging: a pilot study
- Author
-
Valencia Hui Xian Foo, Gilbert Y. S. Lim, Yu-Chi Liu, Hon Shing Ong, Evan Wong, Stacy Chan, Jipson Wong, Jodhbir S. Mehta, Daniel S. W. Ting, and Marcus Ang
- Subjects
Deep learning, Cornea, Endothelium, Artificial intelligence, Ophthalmology, RE1-994 - Abstract
Abstract Background To describe the diagnostic performance of a deep learning (DL) algorithm in detecting Fuchs endothelial corneal dystrophy (FECD) based on specular microscopy (SM), and to assess whether it can reliably detect widefield peripheral SM images with an endothelial cell density (ECD) > 1000 cells/mm². Methods Five hundred and forty-seven subjects had SM imaging performed for the central corneal endothelium. One hundred and seventy-three images had FECD, while 602 images had other diagnoses. Using fivefold cross-validation on the dataset containing 775 central SM images combined with ECD, coefficient of variation (CV) and hexagonal endothelial cell ratio (HEX), the first DL model was trained to discriminate FECD from other images and was further tested on an external set of 180 images. In eyes with FECD, a separate DL model was trained with 753 central/paracentral SM images to detect SM images with ECD > 1000 cells/mm² and tested on 557 peripheral SM images. Area under the curve (AUC), sensitivity and specificity were evaluated. Results The first model achieved an AUC of 0.96 with 0.91 sensitivity and 0.91 specificity in detecting FECD from other images. With an external validation set, the model achieved an AUC of 0.77, with a sensitivity of 0.69 and specificity of 0.68 in differentiating FECD from other diagnoses. The second model achieved an AUC of 0.88 with 0.79 sensitivity and 0.78 specificity in detecting peripheral SM images with ECD > 1000 cells/mm². Conclusions Our pilot study developed a DL model that could reliably detect FECD from other SM images and identify widefield SM images with ECD > 1000 cells/mm² in eyes with FECD. This could be the foundation for future DL models to track progression of eyes with FECD and identify candidates suitable for therapies such as Descemet stripping only.
- Published
- 2024
- Full Text
- View/download PDF
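The first FECD model above was developed with fivefold cross-validation on 775 central SM images. A minimal sketch of how such folds can be partitioned (indices only; the shuffle seed and equal-size partitioning are assumptions for illustration):

```python
import numpy as np

def kfold_indices(n, k=5, seed=0):
    """Shuffle n sample indices and split them into k near-equal folds.
    Each fold serves once as the held-out validation set while the
    remaining k-1 folds are used for training."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

for fold, (train, val) in enumerate(kfold_indices(775, 5)):
    print(fold, len(train), len(val))  # each fold: 620 train / 155 validation
```

Performance metrics (AUC, sensitivity, specificity) are then averaged over the five held-out folds, which is what makes the estimate less dependent on any single split.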
6. The promise of digital healthcare technologies
- Author
-
Andy Wai Kan Yeung, Ali Torkamani, Atul J. Butte, Benjamin S. Glicksberg, Björn Schuller, Blanca Rodriguez, Daniel S. W. Ting, David Bates, Eva Schaden, Hanchuan Peng, Harald Willschke, Jeroen van der Laak, Josip Car, Kazem Rahimi, Leo Anthony Celi, Maciej Banach, Maria Kletecka-Pulker, Oliver Kimberger, Roland Eils, Sheikh Mohammed Shariful Islam, Stephen T. Wong, Tien Yin Wong, Wei Gao, Søren Brunak, and Atanas G. Atanasov
- Subjects
digital health, biosensors, bioinformatics, telehealth, precision medicine, Public aspects of medicine, RA1-1270 - Abstract
Digital health technologies have been in use for many years in a wide spectrum of healthcare scenarios. This narrative review outlines the current use and the future strategies and significance of digital health technologies in modern healthcare applications. It covers the current state of the scientific field (delineating major strengths, limitations, and applications) and envisions the future impact of relevant emerging key technologies. Furthermore, we attempt to provide recommendations for innovative approaches that would accelerate and benefit the research, translation and utilization of digital health technologies.
- Published
- 2023
- Full Text
- View/download PDF
7. Artificial intelligence and digital solutions for myopia
- Author
-
Yong Li, Michelle Y. T. Yip, Daniel S. W. Ting, and Marcus Ang
- Subjects
artificial intelligence, digital technology, myopia, telemedicine, Ophthalmology, RE1-994 - Abstract
Myopia as an uncorrected visual impairment is recognized as a global public health issue with an increasing burden on health-care systems. Moreover, high myopia increases one's risk of developing pathologic myopia, which can lead to irreversible visual impairment. Thus, increased resources are needed for the early identification of complications, timely intervention to prevent myopia progression, and treatment of complications. Emerging artificial intelligence (AI) and digital technologies may have the potential to tackle these unmet needs through automated detection for screening and risk stratification, individualized prediction, and prognostication of myopia progression. AI applications in myopia for children and adults have been developed for the detection, diagnosis, and prediction of progression. Novel AI technologies, including multimodal AI, explainable AI, federated learning, automated machine learning, and blockchain, may further improve prediction performance, safety, and accessibility, and also circumvent concerns of explainability. Digital technology advancements include digital therapeutics, self-monitoring devices, virtual reality or augmented reality technology, and wearable devices – which provide possible avenues for monitoring myopia progression and control. However, there are challenges in the implementation of these technologies, which include requirements for specific infrastructure and resources, demonstration of clinically acceptable performance, and safe data management. Nonetheless, this remains an evolving field with the potential to address the growing global burden of myopia.
- Published
- 2023
- Full Text
- View/download PDF
8. Deep learning system to predict the 5-year risk of high myopia using fundus imaging in children
- Author
-
Li Lian Foo, Gilbert Yong San Lim, Carla Lanca, Chee Wai Wong, Quan V. Hoang, Xiu Juan Zhang, Jason C. Yam, Leopold Schmetterer, Audrey Chia, Tien Yin Wong, Daniel S. W. Ting, Seang-Mei Saw, and Marcus Ang
- Subjects
Computer applications to medicine. Medical informatics, R858-859.7 - Abstract
Abstract Our study aims to identify children at risk of developing high myopia for timely assessment and intervention, preventing myopia progression and complications in adulthood, through the development of a deep learning system (DLS). Using a school-based cohort in Singapore comprising 998 children (aged 6–12 years), we train and perform primary validation of the DLS using 7456 baseline fundus images of 1878 eyes, with external validation using an independent test dataset of 821 baseline fundus images of 189 eyes together with clinical data (age, gender, race, parental myopia, and baseline spherical equivalent (SE)). We derive three distinct algorithms – image, clinical and mixed (image + clinical) models – to predict high myopia development (SE ≤ −6.00 diopters) during teenage years (5 years later, age 11–17). Model performance is evaluated using the area under the receiver operating curve (AUC). Our image models (primary dataset AUC 0.93–0.95; test dataset 0.91–0.93), clinical models (primary dataset AUC 0.90–0.97; test dataset 0.93–0.94) and mixed (image + clinical) models (primary dataset AUC 0.97; test dataset 0.97–0.98) achieve clinically acceptable performance. The addition of a 1-year SE progression variable has minimal impact on DLS performance (clinical model AUC 0.98 versus 0.97 in the primary dataset, 0.97 versus 0.94 in the test dataset; mixed model AUC 0.99 versus 0.97 in the primary dataset, 0.95 versus 0.98 in the test dataset). Thus, our DLS allows prediction of the development of high myopia by teenage years amongst school-going children. This has potential utility as a clinical decision support tool to identify "at-risk" children for early intervention.
- Published
- 2023
- Full Text
- View/download PDF
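Model performance in the study above is summarised by AUC. One way to compute it is the Mann-Whitney formulation, shown here on hypothetical predicted probabilities (illustrative values, not the study's outputs):

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """AUC equals the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative (ties count 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted probabilities of developing high myopia
high_myopia = [0.91, 0.80, 0.74, 0.62]     # children who developed it
no_high_myopia = [0.35, 0.50, 0.62, 0.20]  # children who did not
print(auc_mann_whitney(high_myopia, no_high_myopia))  # → 0.96875
```

An AUC of 0.97, as the mixed model achieves, means the model ranks a randomly chosen future high-myopia case above a randomly chosen non-case about 97% of the time.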
9. Big data in corneal diseases and cataract: Current applications and future directions
- Author
-
Darren S. J. Ting, Rashmi Deshmukh, Daniel S. W. Ting, and Marcus Ang
- Subjects
big data, cornea, cataract, clinical registry, artificial intelligence, electronic health record (EHR), Information technology, T58.5-58.64 - Abstract
The accelerated growth in electronic health records (EHR), Internet-of-Things, mHealth, telemedicine, and artificial intelligence (AI) in recent years has significantly fuelled the interest and development in big data research. Big data refer to complex datasets that are characterized by the attributes of "5 Vs"—variety, volume, velocity, veracity, and value. Big data analytics research has so far benefitted many fields of medicine, including ophthalmology. The availability of these big data not only allows for comprehensive and timely examinations of the epidemiology, trends, characteristics, outcomes, and prognostic factors of many diseases, but also enables the development of highly accurate AI algorithms for diagnosing a wide range of medical diseases, as well as the discovery of new patterns or associations of diseases that were previously unknown to clinicians and researchers. Within the field of ophthalmology, there is a rapidly expanding pool of large clinical registries, epidemiological studies, omics studies, and biobanks through which big data can be accessed. National corneal transplant registries, genome-wide association studies, national cataract databases, and large ophthalmology-related EHR-based registries (e.g., the AAO IRIS Registry) are some of the key resources. In this review, we aim to provide a succinct overview of the availability and clinical applicability of big data in ophthalmology, particularly from the perspective of corneal diseases and cataract; the synergistic potential of big data, AI technologies, the Internet-of-Things, mHealth, and wearable smart devices; and the potential barriers to realizing the clinical and research potential of big data in this field.
- Published
- 2023
- Full Text
- View/download PDF
10. Acceptance and Perception of Artificial Intelligence Usability in Eye Care (APPRAISE) for Ophthalmologists: A Multinational Perspective
- Author
-
Dinesh V. Gunasekeran, Feihui Zheng, Gilbert Y. S. Lim, Crystal C. Y. Chong, Shihao Zhang, Wei Yan Ng, Stuart Keel, Yifan Xiang, Ki Ho Park, Sang Jun Park, Aman Chandra, Lihteh Wu, J. Peter Campbell, Aaron Y. Lee, Pearse A. Keane, Alastair Denniston, Dennis S. C. Lam, Adrian T. Fung, Paul R. V. Chan, SriniVas R. Sadda, Anat Loewenstein, Andrzej Grzybowski, Kenneth C. S. Fong, Wei-chi Wu, Lucas M. Bachmann, Xiulan Zhang, Jason C. Yam, Carol Y. Cheung, Pear Pongsachareonnont, Paisan Ruamviboonsuk, Rajiv Raman, Taiji Sakamoto, Ranya Habash, Michael Girard, Dan Milea, Marcus Ang, Gavin S. W. Tan, Leopold Schmetterer, Ching-Yu Cheng, Ecosse Lamoureux, Haotian Lin, Peter van Wijngaarden, Tien Y. Wong, and Daniel S. W. Ting
- Subjects
ophthalmology, artificial intelligence (AI), regulation, implementation, translation, Medicine (General), R5-920 - Abstract
Background: Many artificial intelligence (AI) studies have focused on development of AI models, novel techniques, and reporting guidelines. However, little is understood about clinicians' perspectives of AI applications in medical fields including ophthalmology, particularly in light of recent regulatory guidelines. The aim of this study was to evaluate the perspectives of ophthalmologists regarding AI in 4 major eye conditions: diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD) and cataract. Methods: This was a multi-national survey of ophthalmologists between March 1, 2020 and February 28, 2021, disseminated via the major global ophthalmology societies. The survey was designed based on microsystem, mesosystem and macrosystem questions, and the software as a medical device (SaMD) regulatory framework chaired by the Food and Drug Administration (FDA). Factors associated with AI adoption in ophthalmology were analyzed with multivariable logistic regression and random forest machine learning. Results: One thousand one hundred seventy-six ophthalmologists from 70 countries participated, with a response rate ranging from 78.8 to 85.8% per question. Ophthalmologists were more willing to use AI as clinical assistive tools (88.1%, n = 890/1,010), especially those with over 20 years' experience (OR 3.70, 95% CI: 1.10–12.5, p = 0.035), compared with clinical decision support tools (78.8%, n = 796/1,010) or diagnostic tools (64.5%, n = 651). A majority of ophthalmologists felt that AI is most relevant to DR (78.2%), followed by glaucoma (70.7%), AMD (66.8%), and cataract (51.4%) detection. Many participants were confident their roles will not be replaced (68.2%, n = 632/927), and felt COVID-19 catalyzed willingness to adopt AI (80.9%, n = 750/927). Common barriers to implementation include medical liability from errors (72.5%, n = 672/927), whereas enablers include improving access (94.5%, n = 876/927). 
Machine learning modeling predicted acceptance from participant demographics with moderate to high accuracy, and areas under the receiver operating characteristic curve of 0.63–0.83. Conclusion: Ophthalmologists are receptive to adopting AI as assistive tools for DR, glaucoma, and AMD. Furthermore, machine learning is a useful method that can be applied to evaluate predictive factors in clinical qualitative questionnaires. This study outlines actionable insights for future research and facilitation interventions to drive adoption and operationalization of AI tools in ophthalmology.
- Published
- 2022
- Full Text
- View/download PDF
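The adoption analysis above reports odds ratios from multivariable logistic regression, such as OR 3.70 (95% CI 1.10–12.5) for clinicians with over 20 years' experience. A sketch of how a fitted coefficient converts to an OR with a Wald interval (the beta and standard error below are hypothetical, chosen only to land near that reported OR):

```python
import math

def odds_ratio(beta, se, z=1.96):
    """Convert a logistic-regression coefficient to an odds ratio
    with a Wald 95% confidence interval: exp(beta ± z·SE)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for ">20 years' experience"
or_, lo, hi = odds_ratio(beta=1.308, se=0.62)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Note the wide interval: a large standard error relative to the coefficient produces exactly the kind of 1.10–12.5 spread quoted in the abstract.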
11. Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis
- Author
-
Ravi Aggarwal, Viknesh Sounderajah, Guy Martin, Daniel S. W. Ting, Alan Karthikesalingam, Dominic King, Hutan Ashrafian, and Ara Darzi
- Subjects
Computer applications to medicine. Medical informatics, R858-859.7 - Abstract
Abstract Deep learning (DL) has the potential to transform medical diagnostics. However, the diagnostic accuracy of DL is uncertain. Our aim was to evaluate the diagnostic accuracy of DL algorithms to identify pathology in medical imaging. Searches were conducted in Medline and EMBASE up to January 2020. We identified 11,921 studies, of which 503 were included in the systematic review. Eighty-two studies in ophthalmology, 82 in breast disease and 115 in respiratory disease were included for meta-analysis. Two hundred twenty-four studies in other specialities were included for qualitative review. Peer-reviewed studies that reported on the diagnostic accuracy of DL algorithms to identify pathology using medical imaging were included. Primary outcomes were measures of diagnostic accuracy, study design and reporting standards in the literature. Estimates were pooled using random-effects meta-analysis. In ophthalmology, AUCs ranged between 0.933 and 1 for diagnosing diabetic retinopathy, age-related macular degeneration and glaucoma on retinal fundus photographs and optical coherence tomography. In respiratory imaging, AUCs ranged between 0.864 and 0.937 for diagnosing lung nodules or lung cancer on chest X-ray or CT scan. For breast imaging, AUCs ranged between 0.868 and 0.909 for diagnosing breast cancer on mammogram, ultrasound, MRI and digital breast tomosynthesis. Heterogeneity was high between studies, and extensive variation in methodology, terminology and outcome measures was noted. This can lead to an overestimation of the diagnostic accuracy of DL algorithms on medical imaging. There is an immediate need for the development of artificial-intelligence-specific EQUATOR guidelines, particularly STARD, in order to provide guidance around key issues in this field.
- Published
- 2021
- Full Text
- View/download PDF
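The review above pools estimates with a random-effects meta-analysis. A compact sketch of the standard DerSimonian-Laird estimator on hypothetical per-study effect sizes and variances (not values taken from the review):

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study effects under a random-effects model.
    tau^2 estimates between-study heterogeneity; final weights are
    1 / (within-study variance + tau^2)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # truncated at zero
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se, tau2

# Hypothetical per-study effect sizes and variances
pooled, se, tau2 = dersimonian_laird([0.9, 1.1, 0.7, 1.3], [0.04, 0.05, 0.06, 0.04])
print(round(pooled, 3), round(se, 3), round(tau2, 3))
```

When heterogeneity is high, as the review reports, tau² grows and the pooled confidence interval widens, which is part of why the authors caution against over-interpreting pooled DL accuracy.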
12. Development and clinical deployment of a smartphone-based visual field deep learning system for glaucoma detection
- Author
-
Fei Li, Diping Song, Han Chen, Jian Xiong, Xingyi Li, Hua Zhong, Guangxian Tang, Sujie Fan, Dennis S. C. Lam, Weihua Pan, Yajuan Zheng, Ying Li, Guoxiang Qu, Junjun He, Zhe Wang, Ling Jin, Rouxi Zhou, Yunhe Song, Yi Sun, Weijing Cheng, Chunman Yang, Yazhi Fan, Yingjie Li, Hengli Zhang, Ye Yuan, Yang Xu, Yunfan Xiong, Lingfei Jin, Aiguo Lv, Lingzhi Niu, Yuhong Liu, Shaoli Li, Jiani Zhang, Linda M. Zangwill, Alejandro F. Frangi, Tin Aung, Ching-yu Cheng, Yu Qiao, Xiulan Zhang, and Daniel S. W. Ting
- Subjects
Computer applications to medicine. Medical informatics, R858-859.7 - Abstract
Abstract By 2040, ~100 million people will have glaucoma. To date, there is a lack of high-efficiency glaucoma diagnostic tools based on visual fields (VFs). Herein, we develop and evaluate the performance of 'iGlaucoma', a smartphone application-based deep learning system (DLS), in detecting glaucomatous VF changes. A total of 1,614,808 data points of 10,784 VFs (5542 patients) from seven centers in China were included in this study, divided over two phases. In Phase I, 1,581,060 data points from 10,135 VFs of 5105 patients were included to train (8424 VFs), validate (598 VFs) and test (3 independent test sets—200, 406, 507 samples) the diagnostic performance of the DLS. In Phase II, using the same DLS, the iGlaucoma cloud-based application was further tested on 33,748 data points from 649 VFs of 437 patients from three glaucoma clinics. With reference to three experienced expert glaucomatologists, the diagnostic performance (area under curve [AUC], sensitivity and specificity) of the DLS and six ophthalmologists was evaluated in detecting glaucoma. In Phase I, the DLS outperformed all six ophthalmologists in the three test sets (AUC of 0.834–0.877, with a sensitivity of 0.831–0.922 and a specificity of 0.676–0.709). In Phase II, iGlaucoma had 0.99 accuracy in recognizing different patterns in the pattern deviation probability plot region, with corresponding AUC, sensitivity and specificity of 0.966 (0.953–0.979), 0.954 (0.930–0.977), and 0.873 (0.838–0.908), respectively. 'iGlaucoma' is a clinically effective diagnostic tool to detect glaucoma from Humphrey VFs, although the target population will need to be carefully identified with glaucoma expertise input.
- Published
- 2020
- Full Text
- View/download PDF
13. Different fundus imaging modalities and technical factors in AI screening for diabetic retinopathy: a review
- Author
-
Gilbert Lim, Valentina Bellemo, Yuchen Xie, Xin Q. Lee, Michelle Y. T. Yip, and Daniel S. W. Ting
- Subjects
Artificial intelligence, Deep learning, Diabetic retinopathy, Fundus photographs, Retinal imaging modalities, Survey, Ophthalmology, RE1-994 - Abstract
Abstract Background Effective screening is a desirable method for the early detection and successful treatment for diabetic retinopathy, and fundus photography is currently the dominant medium for retinal imaging due to its convenience and accessibility. Manual screening using fundus photographs has however involved considerable costs for patients, clinicians and national health systems, which has limited its application particularly in less-developed countries. The advent of artificial intelligence, and in particular deep learning techniques, has however raised the possibility of widespread automated screening. Main text In this review, we first briefly survey major published advances in retinal analysis using artificial intelligence. We take care to separately describe standard multiple-field fundus photography, and the newer modalities of ultra-wide field photography and smartphone-based photography. Finally, we consider several machine learning concepts that have been particularly relevant to the domain and illustrate their usage with extant works. Conclusions In the ophthalmology field, it was demonstrated that deep learning tools for diabetic retinopathy show clinically acceptable diagnostic performance when using colour retinal fundus images. Artificial intelligence models are among the most promising solutions to tackle the burden of diabetic retinopathy management in a comprehensive manner. However, future research is crucial to assess the potential clinical deployment, evaluate the cost-effectiveness of different DL systems in clinical practice and improve clinical acceptance.
- Published
- 2020
- Full Text
- View/download PDF
14. Technical and imaging factors influencing performance of deep learning systems for diabetic retinopathy
- Author
-
Michelle Y. T. Yip, Gilbert Lim, Zhan Wei Lim, Quang D. Nguyen, Crystal C. Y. Chong, Marco Yu, Valentina Bellemo, Yuchen Xie, Xin Qi Lee, Haslina Hamzah, Jinyi Ho, Tien-En Tan, Charumathi Sabanayagam, Andrzej Grzybowski, Gavin S. W. Tan, Wynne Hsu, Mong Li Lee, Tien Yin Wong, and Daniel S. W. Ting
- Subjects
Computer applications to medicine. Medical informatics, R858-859.7 - Abstract
Abstract Deep learning (DL) has been shown to be effective in developing diabetic retinopathy (DR) algorithms, possibly tackling the financial and manpower challenges hindering implementation of DR screening. However, our systematic review of the literature reveals that few studies have examined the impact of different factors on these DL algorithms, factors that are important for clinical deployment in real-world settings. Using 455,491 retinal images, we evaluated two technical and three image-related factors in the detection of referable DR. For technical factors, the performances of four DL models (VGGNet, ResNet, DenseNet, Ensemble) and two computational frameworks (Caffe, TensorFlow) were evaluated, while for image-related factors, we evaluated image compression levels (reducing image size: 350, 300, 250, 200, 150 KB), number of fields (7-field, 2-field, 1-field) and media clarity (pseudophakic vs phakic). In detection of referable DR, the four DL models showed comparable diagnostic performance (AUC 0.936–0.944). To develop the VGGNet model, the two computational frameworks had similar AUC (0.936). The DL performance dropped when image size decreased below 250 KB (AUC 0.936, 0.900, p
- Published
- 2020
- Full Text
- View/download PDF
15. Author Correction: Development and clinical deployment of a smartphone-based visual field deep learning system for glaucoma detection
- Author
-
Fei Li, Diping Song, Han Chen, Jian Xiong, Xingyi Li, Hua Zhong, Guangxian Tang, Sujie Fan, Dennis S. C. Lam, Weihua Pan, Yajuan Zheng, Ying Li, Guoxiang Qu, Junjun He, Zhe Wang, Ling Jin, Rouxi Zhou, Yunhe Song, Yi Sun, Weijing Cheng, Chunman Yang, Yazhi Fan, Yingjie Li, Hengli Zhang, Ye Yuan, Yang Xu, Yunfan Xiong, Lingfei Jin, Aiguo Lv, Lingzhi Niu, Yuhong Liu, Shaoli Li, Jiani Zhang, Linda M. Zangwill, Alejandro F. Frangi, Tin Aung, Ching-yu Cheng, Yu Qiao, Xiulan Zhang, and Daniel S. W. Ting
- Subjects
Computer applications to medicine. Medical informatics, R858-859.7 - Published
- 2022
- Full Text
- View/download PDF