14 results on "Avinash V. Varadarajan"
Search Results
2. A deep learning model for novel systemic biomarkers in photographs of the external eye: a retrospective study
- Author
Boris Babenko, Ilana Traynis, Christina Chen, Preeti Singh, Akib Uddin, Jorge Cuadros, Lauren P Daskivich, April Y Maa, Ramasamy Kim, Eugene Yu-Chuan Kang, Yossi Matias, Greg S Corrado, Lily Peng, Dale R Webster, Christopher Semturs, Jonathan Krause, Avinash V Varadarajan, Naama Hammel, and Yun Liu
- Subjects
Health Information Management, Medicine (miscellaneous), Decision Sciences (miscellaneous), Health Informatics
- Published
- 2023
- Full Text
- View/download PDF
3. Deep Learning to Detect OCT-derived Diabetic Macular Edema from Color Retinal Photographs
- Author
Xinle Liu, Tayyeba K. Ali, Preeti Singh, Ami Shah, Scott Mayer McKinney, Paisan Ruamviboonsuk, Angus W. Turner, Pearse A. Keane, Peranut Chotcomwongse, Variya Nganthavee, Mark Chia, Josef Huemer, Jorge Cuadros, Rajiv Raman, Greg S. Corrado, Lily Peng, Dale R. Webster, Naama Hammel, Avinash V. Varadarajan, Yun Liu, Reena Chopra, and Pinal Bavishi
- Subjects
Ophthalmology
- Published
- 2022
- Full Text
- View/download PDF
4. Predicting optical coherence tomography-derived diabetic macular edema grades from fundus photographs using deep learning
- Author
Lily Peng, Mongkol Tadarati, Subhashini Venugopalan, Pinal Bavishi, Greg S. Corrado, George H. Bresnick, Variya Nganthavee, Jirawut Limwattanayingyong, Kuniyoshi Kanai, Paisan Ruamviboonsuk, Joseph R. Ledsam, Avinash V. Varadarajan, Jorge Cuadros, Sukhum Silpa-archa, Dale R. Webster, Peranut Chotcomwongse, Pearse A. Keane, and Arunachalam Narayanaswamy
- Subjects
Computer Science - Machine Learning, Computer Science - Computer Vision and Pattern Recognition, Statistics - Machine Learning, Deep Learning, Multidisciplinary, Science, General Physics and Astronomy, General Chemistry, General Biochemistry, Genetics and Molecular Biology, Medicine, Ophthalmology, Medical imaging, Biomedical Imaging, Biomedical engineering, Bioengineering, Photography, Fundus photography, Fundus (eye), Retina, Retinal, Eye, Eye Disease and Disorders of Vision, Diabetes, Diabetic Eye Disease, Diabetic Retinopathy, Diabetic macular edema, Macular Edema, Optical coherence tomography, Tomography, Optical Coherence, Imaging, Three-Dimensional, Gold standard (test), Screening and diagnosis, Detection, Clinical Research, Evaluation of markers and technologies, Discovery and preclinical testing of markers and technologies, Developing world, Humans, Male, Female, Middle Aged, Aged, Article
- Abstract
Center-involved diabetic macular edema (ci-DME) is a major cause of vision loss. Although the gold standard for diagnosis involves 3D imaging, 2D imaging by fundus photography is usually used in screening settings, resulting in high false-positive and false-negative calls. To address this, we train a deep learning model to predict ci-DME from fundus photographs, with an ROC-AUC of 0.89 (95% CI: 0.87–0.91), corresponding to 85% sensitivity at 80% specificity. In comparison, retinal specialists have similar sensitivities (82–85%) but only about half the specificity (45–50%). Diabetic eye disease is a cause of preventable blindness, and accurate and timely referral of patients with diabetic macular edema is important to start treatment. Here the authors present a deep learning model that can predict the presence of diabetic macular edema from color fundus photographs with superior specificity and positive predictive value compared to retinal specialists.
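As an illustration of the operating point quoted above (85% sensitivity at 80% specificity on an ROC curve), the following minimal Python sketch shows how such a point can be read off predicted scores with scikit-learn. The labels and scores here are synthetic stand-ins, not the study's data or code.

```python
# Illustrative sketch (not the paper's code): reading an operating point such as
# "sensitivity at 80% specificity" off an ROC curve built from model scores.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Hypothetical ground-truth ci-DME labels and model scores for 1,000 eyes.
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.35 + rng.normal(0.4, 0.2, size=1000), 0, 1)

auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)

target_specificity = 0.80            # specificity = 1 - false positive rate
ok = fpr <= (1 - target_specificity)
idx = np.argmax(tpr[ok])             # best sensitivity among thresholds meeting the target
print(f"AUC = {auc:.2f}")
print(f"Sensitivity at >= {target_specificity:.0%} specificity: {tpr[ok][idx]:.2%}")
```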
- Published
- 2020
- Full Text
- View/download PDF
5. Retinal fundus photographs capture hemoglobin loss after blood donation
- Author
Akinori Mitani, Ilana Traynis, Preeti Singh, Greg S. Corrado, Dale R. Webster, Lily H. Peng, Avinash V. Varadarajan, Yun Liu, and Naama Hammel
- Abstract
Recently it was shown that blood hemoglobin concentration could be predicted from retinal fundus photographs by deep learning models. However, it is unclear whether the models were quantifying current blood hemoglobin level or estimating it based on subjects' pretest probability of having anemia. Here, we conducted an observational study with 14 volunteers who donated blood at an on-site blood drive held by the local blood center (i.e., at which time approximately 10% of their blood was removed). When the deep learning model was applied to retinal fundus photographs taken before and after blood donation, it detected a decrease in blood hemoglobin concentration within each subject 2–3 days after donation, suggesting that the model was quantifying subacute hemoglobin changes rather than predicting subjects' risk. Additional randomized or controlled studies can further validate this finding.
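The within-subject before/after comparison described above can be sketched with a paired non-parametric test. The following is a minimal illustration with hypothetical predicted haemoglobin values; it is not the study's analysis code.

```python
# Sketch of a within-subject before/after comparison like the one described above.
# The predicted haemoglobin values here are hypothetical, not the study's data.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n_subjects = 14
pred_before = rng.normal(14.0, 1.0, size=n_subjects)              # model output (g/dL) pre-donation
pred_after = pred_before - rng.normal(0.6, 0.3, size=n_subjects)  # 2-3 days post-donation

diff = pred_after - pred_before
stat, p_value = wilcoxon(pred_after, pred_before)                 # paired, non-parametric test
print(f"Mean within-subject change: {diff.mean():+.2f} g/dL")
print(f"Wilcoxon signed-rank p-value: {p_value:.4f}")
```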
- Published
- 2022
- Full Text
- View/download PDF
6. Detection of anaemia from retinal fundus images via deep learning
- Author
Akinori Mitani, Greg S. Corrado, Abigail E. Huang, Avinash V. Varadarajan, Subhashini Venugopalan, Dale R. Webster, Lily Peng, Naama Hammel, and Yun Liu
- Subjects
Biomedical Engineering, Medicine (miscellaneous), Bioengineering, Ophthalmology, Medical imaging, Fundus (eye), Retinal, Diabetes mellitus, Blood pressure, Prospective cohort study, Receiver operating characteristic, Confidence interval, Computer Science Applications, Biotechnology
- Abstract
Owing to the invasiveness of diagnostic tests for anaemia and the costs associated with screening for it, the condition is often undetected. Here, we show that anaemia can be detected via machine-learning algorithms trained using retinal fundus images, study participant metadata (including race or ethnicity, age, sex and blood pressure) or the combination of both data types (images and study participant metadata). In a validation dataset of 11,388 study participants from the UK Biobank, the metadata-only, fundus-image-only and combined models predicted haemoglobin concentration (in g dl⁻¹) with mean absolute error values of 0.73 (95% confidence interval: 0.72–0.74), 0.67 (0.66–0.68) and 0.63 (0.62–0.64), respectively, and with area under the receiver operating characteristic curve (AUC) values of 0.74 (0.71–0.76), 0.87 (0.85–0.89) and 0.88 (0.86–0.89), respectively. For 539 study participants with self-reported diabetes, the combined model predicted haemoglobin concentration with a mean absolute error of 0.73 (0.68–0.78) and anaemia with an AUC of 0.89 (0.85–0.93). Automated anaemia screening on the basis of fundus images could particularly aid patients with diabetes undergoing regular retinal imaging, for whom anaemia can increase morbidity and mortality risks. Machine-learning algorithms trained with retinal fundus images, with subject metadata or with both data types predict haemoglobin concentration with mean absolute errors lower than 0.75 g dl⁻¹ and anaemia with areas under the curve in the range of 0.74–0.89.
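A minimal sketch of the kind of summary statistic reported above: mean absolute error with a percentile-bootstrap 95% confidence interval. All values are synthetic stand-ins, not the paper's evaluation pipeline.

```python
# Illustrative sketch: mean absolute error with a percentile-bootstrap confidence
# interval, the kind of summary quoted above. Values are synthetic.
import numpy as np

rng = np.random.default_rng(2)
hgb_true = rng.normal(14.0, 1.5, size=5000)              # haemoglobin, g/dL (synthetic)
hgb_pred = hgb_true + rng.normal(0.0, 0.85, size=5000)   # hypothetical model predictions

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

point = mae(hgb_true, hgb_pred)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(hgb_true), size=len(hgb_true))   # resample with replacement
    boot.append(mae(hgb_true[idx], hgb_pred[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"MAE = {point:.2f} g/dL (95% CI: {lo:.2f}-{hi:.2f})")
```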
- Published
- 2019
- Full Text
- View/download PDF
7. Predicting the risk of developing diabetic retinopathy using deep learning
- Author
Greg S. Corrado, Naama Hammel, Akinori Mitani, Ashish Bora, Subhashini Venugopalan, Dale R. Webster, Siva Balasubramanian, Paisan Ruamviboonsuk, Pinal Bavishi, Guilherme de Oliveira Marinho, Lily Peng, Boris Babenko, Avinash V. Varadarajan, Sunny Virmani, Jorge Cuadros, and Yun Liu
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Image and Video Processing, Deep Learning, Artificial intelligence, MEDLINE, Medicine (miscellaneous), Health Informatics, Health Information Management, Decision Sciences (miscellaneous), Diagnostic Techniques, Ophthalmological, Risk Assessment, Kaplan-Meier Estimate, Diabetes mellitus, Diabetic Retinopathy, Photography, Medicine, Humans, Male, Female, Middle Aged, Aged, In patient, Internal validation, Receiver operating characteristic, ROC Curve, Area Under Curve, Reproducibility of Results, Prognosis, Optometry
- Abstract
Diabetic retinopathy screening is instrumental to preventing blindness, but scaling up screening is challenging because of the increasing number of patients with all forms of diabetes. We aimed to create a deep-learning system to predict the risk of patients with diabetes developing diabetic retinopathy within 2 years. We created and validated two versions of a deep-learning system to predict the development of diabetic retinopathy in patients with diabetes who had had teleretinal diabetic retinopathy screening in a primary care setting. The input for the two versions was either a set of three-field or one-field colour fundus photographs. Of the 575 431 eyes in the development set, 28 899 had known outcomes, with the remaining 546 532 eyes used to augment the training process via multitask learning. Validation was done on one eye (selected at random) per patient from two datasets: an internal validation set (from EyePACS, a teleretinal screening service in the USA) of 3678 eyes with known outcomes and an external validation set (from Thailand) of 2345 eyes with known outcomes. The three-field deep-learning system had an area under the receiver operating characteristic curve (AUC) of 0·79 (95% CI 0·77–0·81) in the internal validation set. Assessment of the external validation set, which contained only one-field colour fundus photographs, with the one-field deep-learning system gave an AUC of 0·70 (0·67–0·74). In the internal validation set, the AUC of available risk factors was 0·72 (0·68–0·76), which improved to 0·81 (0·77–0·84) after combining the deep-learning system with these risk factors (p<0·0001). In the external validation set, the corresponding AUC improved from 0·62 (0·58–0·66) to 0·71 (0·68–0·75; p<0·0001) following the addition of the deep-learning system to available risk factors. The deep-learning systems predicted diabetic retinopathy development using colour fundus photographs, and the systems were independent of and more informative than available risk factors. Such a risk stratification tool might help to optimise screening intervals to reduce costs while improving vision-related outcomes. Funding: Google.
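A hedged sketch of the final comparison described above: adding a deep-learning score to conventional risk factors in a logistic regression and comparing AUCs. The risk factors, coefficients and the dl_score variable are invented for illustration; this is not the paper's model or data.

```python
# Sketch (assumptions, not the paper's code): combining a deep-learning risk score
# with conventional risk factors in a logistic regression and comparing AUCs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 4000
hba1c = rng.normal(7.5, 1.2, n)                 # hypothetical risk factors
diabetes_years = rng.gamma(2.0, 4.0, n)
dl_score = rng.beta(2, 5, n)                    # stand-in for the DL system's output
logit = -6 + 0.4 * hba1c + 0.05 * diabetes_years + 3.0 * dl_score
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # "developed DR within 2 years"

X_rf = np.column_stack([hba1c, diabetes_years])
X_all = np.column_stack([hba1c, diabetes_years, dl_score])
Xrf_tr, Xrf_te, Xall_tr, Xall_te, y_tr, y_te = train_test_split(
    X_rf, X_all, y, test_size=0.3, random_state=0)

auc_rf = roc_auc_score(y_te, LogisticRegression(max_iter=1000)
                       .fit(Xrf_tr, y_tr).predict_proba(Xrf_te)[:, 1])
auc_all = roc_auc_score(y_te, LogisticRegression(max_iter=1000)
                        .fit(Xall_tr, y_tr).predict_proba(Xall_te)[:, 1])
print(f"AUC, risk factors only: {auc_rf:.2f}")
print(f"AUC, risk factors + DL score: {auc_all:.2f}")
```

A formal comparison of the two AUCs would additionally use a paired test such as DeLong's or a bootstrap, which this sketch omits.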
- Published
- 2020
8. Scientific Discovery by Generating Counterfactuals Using Image Translation
- Author
Lily Peng, Pinal Bavishi, Dale R. Webster, Arunachalam Narayanaswamy, Michael Brenner, Philip C. Nelson, Avinash V. Varadarajan, Subhashini Venugopalan, Greg S. Corrado, and Paisan Ruamviboonsuk
- Subjects
Computer science, Machine learning, Artificial intelligence, Scientific discovery, Counterfactual conditional, Mechanism (biology), Image (mathematics), Image translation, Prior probability, Generative grammar
- Abstract
Model explanation techniques play a critical role in understanding the source of a model’s performance and making its decisions transparent. Here we investigate if explanation techniques can also be used as a mechanism for scientific discovery. We make three contributions: first, we propose a framework to convert predictions from explanation techniques to a mechanism of discovery. Second, we show how generative models in combination with black-box predictors can be used to generate hypotheses (without human priors) that can be critically examined. Third, with these techniques we study classification models for retinal images predicting Diabetic Macular Edema (DME), where recent work [30] showed that a CNN trained on these images is likely learning novel features in the image. We demonstrate that the proposed framework is able to explain the underlying scientific mechanism, thus bridging the gap between the model’s performance and human understanding.
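The abstract describes pairing a generative image-to-image translation model with a black-box predictor to surface hypotheses. The sketch below is a much simpler, gradient-based stand-in for the general idea of a counterfactual (perturb an input until a differentiable classifier's prediction flips) and is not the paper's method; the tiny classifier and the 28x28 input are placeholders.

```python
# Simplified stand-in for generating a counterfactual input that flips a classifier's
# prediction. The paper uses generative image-to-image translation with a black-box
# predictor; this sketch instead takes plain gradient steps on the input of a small
# differentiable classifier, purely to illustrate the concept.
import torch
import torch.nn as nn

torch.manual_seed(0)
clf = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 1))

x = torch.rand(1, 1, 28, 28)              # hypothetical input image
x_cf = x.clone().requires_grad_(True)     # counterfactual candidate, optimized in place
target = torch.ones(1, 1)                 # push the prediction toward the positive class
opt = torch.optim.Adam([x_cf], lr=0.05)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    opt.zero_grad()
    loss = bce(clf(x_cf), target) + 0.1 * (x_cf - x).abs().mean()  # stay close to the original
    loss.backward()
    opt.step()
    with torch.no_grad():
        x_cf.clamp_(0, 1)                 # keep pixel values valid

delta = (x_cf - x).detach()
print("Prediction before:", torch.sigmoid(clf(x)).item())
print("Prediction after: ", torch.sigmoid(clf(x_cf)).item())
print("Mean |change| per pixel:", delta.abs().mean().item())
```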
- Published
- 2020
- Full Text
- View/download PDF
9. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning
- Author
Greg S. Corrado, Katy Blumer, Ryan Poplin, Lily Peng, Avinash V. Varadarajan, Yun Liu, Dale R. Webster, and Michael V. McConnell
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer science, Deep Learning, Pattern recognition, Artificial intelligence, Biomedical Engineering, Medicine (miscellaneous), Bioengineering, Cardiovascular risk factors, Cardiovascular Diseases, Risk Factors, Fundus Oculi, Fundus (eye), Retina, Retinal, Optic disc, Image Interpretation, Computer-Assisted, Humans, Male, Female, Middle Aged, Aged, Aged, 80 and over, Computer Science Applications, Algorithms, Biotechnology
- Abstract
Traditionally, medical discoveries are made by observing associations, making hypotheses from them and then designing and running experiments to test the hypotheses. However, with medical images, observing and quantifying associations can often be difficult because of the wide variety of features, patterns, colours, values and shapes that are present in real data. Here, we show that deep learning can extract new knowledge from retinal fundus images. Using deep-learning models trained on data from 284,335 patients and validated on two independent datasets of 12,026 and 999 patients, we predicted cardiovascular risk factors not previously thought to be present or quantifiable in retinal images, such as age (mean absolute error within 3.26 years), gender (area under the receiver operating characteristic curve (AUC) = 0.97), smoking status (AUC = 0.71), systolic blood pressure (mean absolute error within 11.23 mmHg) and major adverse cardiac events (AUC = 0.70). We also show that the trained deep-learning models used anatomical features, such as the optic disc or blood vessels, to generate each prediction.
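A hedged PyTorch sketch of the general setup implied above: one shared image backbone with separate heads predicting a continuous risk factor (age) and a binary one (smoking status). The tiny backbone, layer sizes and synthetic batch are placeholders, not the architecture or data used in the paper.

```python
# Hedged sketch: a shared image backbone with separate heads for a continuous risk
# factor (age) and a binary one (smoking status). Sizes and architecture are arbitrary
# stand-ins, not the model used in the paper.
import torch
import torch.nn as nn

class FundusMultiTask(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.age_head = nn.Linear(32, 1)        # regression output (years)
        self.smoking_head = nn.Linear(32, 1)    # logit for current smoker

    def forward(self, x):
        h = self.backbone(x)
        return self.age_head(h).squeeze(1), self.smoking_head(h).squeeze(1)

model = FundusMultiTask()
images = torch.rand(8, 3, 224, 224)             # a hypothetical batch of fundus photos
age = torch.rand(8) * 60 + 20
smoker = torch.randint(0, 2, (8,)).float()

pred_age, smoke_logit = model(images)
loss = nn.functional.mse_loss(pred_age, age) + \
       nn.functional.binary_cross_entropy_with_logits(smoke_logit, smoker)
loss.backward()                                 # gradients for one training step
print(f"combined loss: {loss.item():.3f}")
```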
- Published
- 2018
- Full Text
- View/download PDF
10. Author Correction: Detection of anaemia from retinal fundus images via deep learning
- Author
Abigail E. Huang, Avinash V. Varadarajan, Akinori Mitani, Lily Peng, Dale R. Webster, Naama Hammel, Greg S. Corrado, Subhashini Venugopalan, and Yun Liu
- Subjects
Computer science, Deep learning, Artificial intelligence, Biomedical Engineering, Medicine (miscellaneous), Bioengineering, Ophthalmology, Retinal, Fundus (eye), Computer Science Applications, Biotechnology
- Published
- 2020
- Full Text
- View/download PDF
11. Deep learning in ophthalmology: The technical and clinical considerations
- Author
Leopold Schmetterer, Michael D. Abràmoff, Pearse A. Keane, Philippe Burlina, Michael F. Chiang, Daniel Shu Wei Ting, Dale R. Webster, Avinash V. Varadarajan, Tien Yin Wong, Neil M. Bressler, Louis R. Pasquale, and Lily Peng
- Subjects
Eye Diseases, MEDLINE, Glaucoma, Disease, Diagnostic Techniques, Ophthalmological, Deep Learning, Ophthalmology, Health care, Humans, Diabetic retinopathy, Retinopathy of prematurity, Macular degeneration, Clinical trial, Sensory Systems
- Abstract
The advent of computer graphic processing units, improvements in mathematical models and the availability of big data have allowed artificial intelligence (AI) using machine learning (ML) and deep learning (DL) techniques to achieve robust performance for broad applications in social media, the internet of things, the automotive industry and healthcare. DL systems in particular provide improved capability in image, speech and motion recognition as well as in natural language processing. In medicine, significant progress of AI and DL systems has been demonstrated in image-centric specialties such as radiology, dermatology, pathology and ophthalmology. New studies, including pre-registered prospective clinical trials, have shown that DL systems are accurate and effective in detecting diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD), retinopathy of prematurity and refractive error, and in identifying cardiovascular risk factors and diseases, from digital fundus photographs. There is also increasing attention on the use of AI and DL systems in identifying disease features, progression and treatment response for retinal diseases such as neovascular AMD and diabetic macular edema using optical coherence tomography (OCT). Additionally, the application of ML to visual fields may be useful in detecting glaucoma progression. There are limited studies that incorporate clinical data, including electronic health records, in AI and DL algorithms, and no prospective studies to demonstrate that AI and DL algorithms can predict the development of clinical eye disease. This article describes the global eye disease burden, unmet needs and common conditions of public health importance for which AI and DL systems may be applicable. Technical and clinical aspects of building a DL system to address those needs, and the potential challenges for clinical adoption, are discussed. AI, ML and DL will likely play a crucial role in clinical ophthalmology practice, with implications for screening, diagnosis and follow-up of the major causes of vision impairment in the setting of ageing populations globally.
- Published
- 2018
12. Deep learning for predicting refractive error from retinal fundus images
- Author
Ryan Poplin, Reena Chopra, Christof Angermueller, Avinash V. Varadarajan, Lily Peng, Dale R. Webster, Katy Blumer, Joseph R. Ledsam, Pearse A. Keane, and Greg S. Corrado
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Deep Learning, Refractive error, Refractive Errors, Refraction, Ocular, Subjective refraction, Dioptre, Vision Tests, Visual Fields, Fundus Oculi, Fundus (eye), Retina, Datasets as Topic, Data set, Population, Confidence interval, Mathematics, Optometry, Algorithms, Humans, Adult, Middle Aged, Aged, Male, Female
- Abstract
Refractive error, one of the leading causes of visual impairment, can be corrected by simple interventions such as prescribing eyeglasses. We trained a deep learning algorithm to predict refractive error from fundus photographs of participants in the UK Biobank cohort (45-degree field-of-view images) and the AREDS clinical trial (30-degree field-of-view images). Our model uses the "attention" method to identify features that are correlated with refractive error. We evaluated the mean absolute error (MAE) of the algorithm's predictions against the refractive error measured in AREDS and UK Biobank. The resulting algorithm had an MAE of 0.56 diopters (95% CI: 0.55–0.56) for estimating spherical equivalent on the UK Biobank dataset and 0.91 diopters (95% CI: 0.89–0.92) for the AREDS dataset. The baseline expected MAE (obtained by simply predicting the mean of this population) was 1.81 diopters (95% CI: 1.79–1.84) for UK Biobank and 1.63 (95% CI: 1.60–1.67) for AREDS. Attention maps suggested that the foveal region was one of the most important areas used by the algorithm to make this prediction, though other regions also contribute to the prediction. The ability to estimate refractive error with high accuracy from retinal fundus photos has not been previously known and demonstrates that deep learning can be applied to make novel predictions from medical images. Given that several groups have recently shown that it is feasible to obtain retinal fundus photos using mobile phones and inexpensive attachments, this work may be particularly relevant in regions of the world where autorefractors may not be readily available.
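The baseline comparison above (model MAE versus the MAE of always predicting the population mean) can be made concrete with a few lines of NumPy. The numbers below are synthetic, not UK Biobank or AREDS values.

```python
# Small numeric sketch of the baseline comparison described above: the MAE of always
# predicting the population mean versus the MAE of a model's predictions.
import numpy as np

rng = np.random.default_rng(4)
true_se = rng.normal(-0.3, 2.0, size=10000)               # spherical equivalent, diopters
model_pred = true_se + rng.normal(0.0, 0.7, size=10000)   # hypothetical model error

baseline_mae = np.mean(np.abs(true_se - true_se.mean()))  # "predict the mean" baseline
model_mae = np.mean(np.abs(true_se - model_pred))
print(f"baseline MAE: {baseline_mae:.2f} D, model MAE: {model_mae:.2f} D")
```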
- Published
- 2017
13. Data structures for limited oblivious execution of programs while preserving locality of reference
- Author
Avinash V. Varadarajan, C. Pandu Rangan, and Ramarathnam Venkatesan
- Subjects
Theory of Computation, Theoretical computer science, Computer science, Cryptography, Parallel computing, Splay tree, Data structure, Upper and lower bounds, Randomized algorithm, Variable (computer science), Locality of reference, Execution model
- Abstract
We introduce a data structure for program execution under a limited oblivious execution model. For fully oblivious execution along the lines of Goldreich and Ostrovsky [2], one transforms a given program into one whose execution looks totally random, based on some cryptographic assumptions and the existence of secure hardware. Totally random memory access patterns do not respect the locality of reference in programs, to which programs generally owe their efficiency. We propose a model that limits the obliviousness so as to enable efficient execution of the program; here the adversary marks a variable and tries to produce a list of candidate locations where it may be stored after $T$ steps of execution. We propose a randomized algorithm based on splay trees, and prove a lower bound on the size of such candidate lists.
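For readers unfamiliar with the underlying primitive, the sketch below is a minimal recursive splay tree in Python: each access rotates the touched key to the root, which is what gives splay trees their locality-of-reference behaviour. It illustrates only the plain data structure, not the paper's randomized, limited-oblivious construction or its bounds.

```python
# Minimal splay tree sketch: accessing a key splays it to the root, so recently used
# keys stay near the top (locality of reference). Not the paper's construction.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def _rotate_right(x):
    y = x.left
    x.left, y.right = y.right, x
    return y

def _rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    """Bring the node holding `key` (or the last node on its search path) to the root."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                      # zig-zig
            root.left.left = splay(root.left.left, key)
            root = _rotate_right(root)
        elif key > root.left.key:                    # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = _rotate_left(root.left)
        return root if root.left is None else _rotate_right(root)
    else:
        if root.right is None:
            return root
        if key > root.right.key:                     # zag-zag
            root.right.right = splay(root.right.right, key)
            root = _rotate_left(root)
        elif key < root.right.key:                   # zag-zig
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = _rotate_right(root.right)
        return root if root.right is None else _rotate_left(root)

def insert(root, key):
    if root is None:
        return Node(key)
    root = splay(root, key)
    if root.key == key:
        return root
    node = Node(key)
    if key < root.key:
        node.right, node.left, root.left = root, root.left, None
    else:
        node.left, node.right, root.right = root, root.right, None
    return node

root = None
for k in [50, 30, 70, 20, 40, 60, 80]:
    root = insert(root, k)
root = splay(root, 40)      # accessing 40 moves it to the root
print(root.key)             # -> 40
```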
- Published
- 2007
- Full Text
- View/download PDF
14. Tools for simulating evolution of aligned genomic regions with integrated parameter estimation
- Author
Robert K. Bradley, Ian Holmes, and Avinash V. Varadarajan
- Subjects
Genome evolution, Genomics, Genome, Evolution, Molecular, Evolutionary biology, Biology, DNA, RNA, Untranslated, Proteins, Codon, Sequence, Sequence Alignment, Databases, Genetic, Software, Extensibility, Estimation theory, Benchmarking, Data mining
- Abstract
Three tools for simulating genome evolution are presented: for neutrally evolving DNA, for phylogenetic context-free grammars and for richly structured syntenic blocks of genome sequence. Controlled simulations of genome evolution are useful for benchmarking tools. However, many simulators lack extensibility and cannot measure parameters directly from data. These issues are addressed by three new open-source programs: GSIMULATOR (for neutrally evolving DNA), SIMGRAM (for generic structured features) and SIMGENOME (for syntenic genome blocks). Each offers algorithms for parameter measurement and reconstruction of ancestral sequence. All three tools outperform the leading neutral DNA simulator (DAWG) in benchmarks. The programs are available online.
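As a minimal illustration of what "simulating neutrally evolving DNA" means at its simplest, the sketch below applies point substitutions along one branch under the Jukes-Cantor model. It is only a conceptual stand-in: GSIMULATOR additionally handles indels, richer models and parameter estimation from data.

```python
# Minimal sketch of neutral DNA evolution under the Jukes-Cantor model: each site
# substitutes independently along a branch of length t (expected substitutions/site).
import numpy as np

BASES = np.array(list("ACGT"))

def evolve_jc(seq, t, rng):
    """Return a descendant of `seq` after branch length t under Jukes-Cantor."""
    # Probability that a site differs from its ancestor after branch length t:
    p_change = 0.75 * (1.0 - np.exp(-4.0 * t / 3.0))
    seq = np.array(list(seq))
    hit = rng.random(len(seq)) < p_change
    for i in np.flatnonzero(hit):
        choices = BASES[BASES != seq[i]]      # substitute with one of the other 3 bases
        seq[i] = rng.choice(choices)
    return "".join(seq)

rng = np.random.default_rng(5)
ancestor = "".join(rng.choice(BASES, size=60))
descendant = evolve_jc(ancestor, t=0.2, rng=rng)
diffs = sum(a != b for a, b in zip(ancestor, descendant))
print(ancestor)
print(descendant)
print(f"{diffs} of {len(ancestor)} sites differ")
```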
- Full Text
- View/download PDF