13 results for "David R. Gagnon"
Search Results
2. Visualizing novel connections and genetic similarities across diseases using a network-medicine based approach
- Author
-
Brian Ferolito, Italo Faria do Valle, Hanna Gerlovin, Lauren Costa, Juan P. Casas, J. Michael Gaziano, David R. Gagnon, Edmon Begoli, Albert-László Barabási, and Kelly Cho
- Subjects
Phenotype, Multidisciplinary, Humans, Computer Simulation, Genetic Predisposition to Disease, Comorbidity, Polymorphism, Single Nucleotide, Biological Specimen Banks, Genome-Wide Association Study
- Abstract
Understanding the genetic relationships between human disorders could lead to better treatment and prevention strategies, especially for individuals with multiple comorbidities. A common resource for studying genetic-disease relationships is the GWAS Catalog, a large and well-curated repository of SNP-trait associations from various studies and populations. Some of these populations are contained within mega-biobanks such as the Million Veteran Program (MVP), which has enabled the genetic classification of several diseases in a large, well-characterized, and heterogeneous population. Here we aim to provide a network of the genetic relationships among diseases and to demonstrate the utility of quantifying the extent to which a given resource such as MVP has contributed to the discovery of such relations. We use a network-based approach to evaluate shared variants among thousands of traits in the GWAS Catalog repository. Our results reveal many novel disease relationships that were absent from earlier studies and demonstrate that the network can expose clusters of mechanistically related diseases. Finally, we show novel disease connections that emerge when MVP data are included, highlighting methodology that can be used to indicate the contributions of a given biobank. (A schematic sketch of the shared-variant network construction follows this record.)
- Published
- 2022
- Full Text
- View/download PDF
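A minimal sketch of the shared-variant network construction summarized in record 2, assuming a hypothetical trait_to_snps mapping in place of the GWAS Catalog and MVP association data; it is not the paper's pipeline.

```python
# Illustrative only: link two traits when they share associated variants,
# weighting each edge by the number of shared SNPs.
from itertools import combinations

import networkx as nx

trait_to_snps = {  # hypothetical SNP-trait associations
    "type 2 diabetes": {"rs7903146", "rs1801282", "rs13266634"},
    "coronary artery disease": {"rs1333049", "rs7903146"},
    "obesity": {"rs9939609", "rs1801282"},
}

G = nx.Graph()
G.add_nodes_from(trait_to_snps)
for a, b in combinations(trait_to_snps, 2):
    shared = trait_to_snps[a] & trait_to_snps[b]
    if shared:
        G.add_edge(a, b, weight=len(shared), snps=sorted(shared))

for a, b, data in G.edges(data=True):
    print(f"{a} -- {b}: {data['weight']} shared SNP(s) {data['snps']}")
```

Clusters of mechanistically related diseases could then be examined with standard community-detection routines on the weighted graph.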
3. Clinical Performance Measures for Neurocritical Care: A Statement for Healthcare Professionals from the Neurocritical Care Society
- Author
-
Herbert I. Fried, Sarah Livesay, Fabio Silvio Taccone, Asma M. Moheet, David R. Gagnon, Abhijit V. Lele, J. Claude Hemphill, Navaz Karanja, David L. Tirschwell, Casey Olm-Shipman, and Wendy L. Wright
- Subjects
Health professionals, Clinical performance, Neurointensive care, Inpatient setting, Process of care, Critical Care and Intensive Care Medicine, Clinical Practice, Multidisciplinary approach, Medicine, Neurology (clinical), Medical emergency, Medical literature
- Abstract
Performance measures are tools to measure the quality of clinical care. To date, there is no organized set of performance measures for neurocritical care. The Neurocritical Care Society convened a multidisciplinary writing committee to develop performance measures relevant to neurocritical care delivery in the inpatient setting. A formal methodology was used that included a systematic review of the medical literature for 13 major neurocritical care conditions, extraction of high-level recommendations from clinical practice guidelines, and development of a measurement specification form. A total of 50,257 citations were reviewed, of which 150 contained strong recommendations deemed suitable for consideration as neurocritical care performance measures. Twenty-one measures were developed across nine different conditions and two neurocritical care processes of care. This is the first organized Neurocritical Care Performance Measure Set. Next steps should focus on field testing to refine measure criteria and assess implementation.
- Published
- 2019
- Full Text
- View/download PDF
4. High-throughput phenotyping with electronic medical record data using a common semi-supervised approach (PheCAP)
- Author
-
Jiehuan Sun, Victor M. Castro, Sicong Huang, David R. Gagnon, Ashwin N. Ananthakrishnan, Tianxi Cai, Jacqueline Honerlaw, Yuk-Lam Ho, Isaac S. Kohane, Peter Szolovits, Sheng Yu, Susanne Churchill, Yichi Zhang, Stanley Y. Shaw, Zongqi Xia, Shawn N. Murphy, Robert M. Plenge, Katherine P. Liao, J. Michael Gaziano, Nicholas Link, Kelly Cho, Elizabeth W. Karlson, Chuan Hong, Tianrun Cai, Vivian S. Gainer, Guergana Savova, Christopher J. O'Donnell, and Jie Huang
- Subjects
Data Analysis, Computer science, Machine Learning, Article, General Biochemistry, Genetics and Molecular Biology, Chart review, Electronic Health Records, Humans, Natural Language Processing, Medical record, Electronic medical record, Gold standard (test), Biobank, Pipeline (software), High-Throughput Screening Assays, Phenotype, Data Interpretation, Statistical, Disease risk, Artificial intelligence, Algorithms
- Abstract
Phenotypes are the foundation for clinical and genetic studies of disease risk and outcomes. The growth of biobanks linked to electronic medical record (EMR) data has both facilitated and increased the demand for efficient, accurate, and robust approaches for phenotyping millions of patients. Challenges to phenotyping with EMR data include variation in the accuracy of codes, as well as the high level of manual input required to identify features for the algorithm and to obtain gold-standard labels. To address these challenges, we developed PheCAP, a high-throughput semi-supervised phenotyping pipeline. PheCAP begins with data from the EMR, including structured data and information extracted from the narrative notes using natural language processing (NLP). The standardized steps integrate automated procedures, which reduce the level of manual input, and machine learning approaches for algorithm training. PheCAP itself can be executed in 1–2 days if all data are available; however, the timing is largely dependent on the chart-review stage, which typically requires at least 2 weeks. The final products of PheCAP include a phenotype algorithm, the probability of the phenotype for all patients, and a phenotype classification (yes or no). PheCAP takes structured data and narrative notes from electronic medical records and enables patients with a particular clinical phenotype to be identified. (A schematic sketch of the classification step follows this record.)
- Published
- 2019
- Full Text
- View/download PDF
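A minimal sketch in the spirit of the semi-supervised classification step in record 4, assuming a pre-built feature matrix (code counts plus NLP concept counts) and a small chart-reviewed label set; the column names and the scikit-learn logistic regression are illustrative stand-ins, not the PheCAP implementation.

```python
# Illustrative only: train on a small labeled subset, then score all patients
# with a phenotype probability and a yes/no classification.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
features = pd.DataFrame({
    "icd_count": rng.poisson(2, n),     # structured codes
    "nlp_mentions": rng.poisson(1, n),  # concepts extracted from notes
    "med_count": rng.poisson(1, n),
})

# Gold-standard labels exist only for a small chart-reviewed subset.
labeled_idx = rng.choice(n, size=100, replace=False)
labels = (features.loc[labeled_idx, "icd_count"]
          + features.loc[labeled_idx, "nlp_mentions"] > 3).astype(int)

model = LogisticRegression().fit(features.loc[labeled_idx], labels)
features["phenotype_prob"] = model.predict_proba(features)[:, 1]
features["phenotype_yes"] = features["phenotype_prob"] >= 0.5
print(features.head())
```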
5. Revisiting methods for modeling longitudinal and survival data: Framingham Heart Study
- Author
-
Julius S. Ngwa, Michael P. LaValley, David R. Gagnon, Debbie M. Cheng, L. Adrienne Cupples, and Howard Cabral
- Subjects
Mixed model, Epidemiology, Computer science, Bayesian probability, Health Informatics, Residual, Framingham Heart Study, Bias, Time dependent covariate models, Statistics, Covariate, Humans, Computer Simulation, Longitudinal Studies, Survival Analysis, Residual variance, Models, Statistical, Proportional hazards model, Bayes Theorem, Random effects model, Two-step approach, Cox model, Joint longitudinal and survival model, Weibull distribution, Research Article, Mixed effect modeling
- Abstract
Background: Statistical methods for modeling longitudinal and time-to-event data have received much attention in medical research and are becoming increasingly useful. In clinical studies, such as those of cancer and AIDS, longitudinal biomarkers are used to monitor disease progression and to predict survival. These longitudinal measures are often missing at failure times and may be prone to measurement error. More importantly, time-dependent survival models that include the raw longitudinal measurements may lead to biased results. In previous studies these two types of data were frequently analyzed separately, with a mixed effects model used for the longitudinal data and a survival model applied to the event outcome.
Methods: In this paper we compare joint maximum likelihood methods, a two-step approach, and a time-dependent covariate method that link longitudinal data to survival data, with emphasis on using longitudinal measures to predict survival. We apply a Bayesian semi-parametric joint method and a maximum likelihood joint method that maximizes the joint likelihood of the time-to-event and longitudinal measures. We also implement the two-step approach, which estimates the random effects separately, and a classic time-dependent covariate model. We use simulation studies to assess bias, accuracy, and coverage probabilities for the estimates of the link parameter that connects the longitudinal measures to survival times.
Results: Simulation results demonstrate that the two-step approach performed best at estimating the link parameter when variability in the longitudinal measure was low, but was somewhat biased downwards when the variability was high. The Bayesian semi-parametric and maximum likelihood joint methods yielded higher link parameter estimates under both low and high variability in the longitudinal measure. The time-dependent covariate method resulted in consistent underestimation of the link parameter. We illustrate these methods using data from the Framingham Heart Study, in which lipid measurements and myocardial infarction data were collected over a period of 26 years.
Conclusions: Traditional methods for modeling longitudinal and survival data that use the observed longitudinal data, such as the time-dependent covariate method, tend to provide downwardly biased estimates. The two-step approach and joint models provide better estimates, although a comparison of these methods may depend on the underlying residual variance. (A schematic sketch of the two-step approach follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
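A minimal sketch of the two-step approach compared in record 5, on simulated data: step 1 fits a linear mixed model (statsmodels) to the repeated biomarker, and step 2 carries the estimated subject-level random intercept into a Cox model (lifelines). The data-generating values are arbitrary, and the joint and Bayesian methods from the paper are not shown.

```python
# Illustrative only: "two-step" linkage of longitudinal and survival data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n_subj, n_visits = 200, 4
subj = np.repeat(np.arange(n_subj), n_visits)
visit = np.tile(np.arange(n_visits), n_subj)
true_level = rng.normal(1.0, 0.5, n_subj)  # subject-specific biomarker level
long_df = pd.DataFrame({
    "id": subj,
    "time": visit,
    "biomarker": true_level[subj] + 0.5 * visit
                 + rng.normal(0, 0.3, n_subj * n_visits),
})

# Step 1: random-intercept mixed model for the repeated measurements.
mm = smf.mixedlm("biomarker ~ time", long_df, groups=long_df["id"]).fit()
est_level = np.array([float(re.iloc[0]) for re in mm.random_effects.values()])

# Step 2: Cox model using the estimated subject-level effect as the covariate
# linking the longitudinal process to survival.
surv_df = pd.DataFrame({
    "duration": rng.exponential(scale=np.exp(-true_level) * 10),
    "event": rng.binomial(1, 0.8, n_subj),
    "est_level": est_level,
})
CoxPHFitter().fit(surv_df, duration_col="duration", event_col="event").print_summary()
```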
6. Authors’ response to letter to the editor by Zhiqiang Wu, Jiazhang Wu, and Zhibin Lan
- Author
-
Carlos G. Tun, Antonio A. Lazzari, Merilee Teylan, Jaime E. Hart, Eric Garshick, Kristopher Clark, David R. Gagnon, and Rebekah L. Goldstein
- Subjects
Letter to the editor, Neurology, Medicine, Neurology (clinical), General Medicine
- Published
- 2020
- Full Text
- View/download PDF
7. Correction: Plasma vitamin D, past chest illness, and risk of future chest illness in chronic spinal cord injury (SCI): a longitudinal observational study
- Author
-
Kristopher Clark, Rebekah L. Goldstein, Jaime E. Hart, Merilee Teylan, Antonio A. Lazzari, David R. Gagnon, Carlos G. Tun, and Eric Garshick
- Subjects
Neurology ,Neurology (clinical) ,General Medicine - Published
- 2020
- Full Text
- View/download PDF
8. Development and validation of a heart failure with preserved ejection fraction cohort using electronic medical records
- Author
-
Tasnim F. Imran, Jeremy M. Robbins, Robert R. McLean, Yuk-Lam Ho, Yash R. Patel, J. Michael Gaziano, Kelly Cho, Ariela R. Orkaby, Jacob Joseph, David R. Gagnon, Katherine E. Kurgansky, and Luc Djoussé
- Subjects
Male, Databases, Factual, Epidemiology, Heart failure, Ventricular Function, Left, International Classification of Diseases, Internal medicine, Natriuretic Peptide, Brain, Validation, Data Mining, Electronic Health Records, Humans, Diuretics, Electronic medical records, Veterans Affairs, Aged, Natural Language Processing, Angiology, Aged, 80 and over, Ejection fraction, Medical record, Reproducibility of Results, Stroke Volume, Middle Aged, Preserved ejection fraction, Peptide Fragments, United States, United States Department of Veterans Affairs, Echocardiography, Cohort, Cardiology, Female, Cardiology and Cardiovascular Medicine, Heart failure with preserved ejection fraction, Biomarkers, Research Article
- Abstract
Background: Heart failure (HF) with preserved ejection fraction (HFpEF) comprises nearly half of prevalent HF, yet is challenging to curate in a large database of electronic medical records (EMR), since it requires both an accurate HF diagnosis and left ventricular ejection fraction (EF) values that are consistently ≥50%.
Methods: We used the national Veterans Affairs EMR to curate a cohort of HFpEF patients from 2002 to 2014. EF values were extracted from clinical documents using natural language processing, and an iterative approach was used to refine the algorithm for verification of clinical HFpEF. The final algorithm used the following inclusion criteria: any International Classification of Diseases-9 (ICD-9) code for HF (428.xx); all recorded EF values ≥50%; and either B-type natriuretic peptide (BNP) or amino-terminal pro-BNP (NT-proBNP) values recorded, or diuretic use within one month of the diagnosis of HF. Validation of the algorithm was performed by 3 independent reviewers through manual chart review of 100 HFpEF cases and 100 controls.
Results: We established an HFpEF cohort of 80,248 patients (out of a total of 1,155,376 patients with an ICD-9 diagnosis of HF). Mean age was 72 years; 96% were male and 12% were African-American. Validation analysis of the HFpEF algorithm showed a sensitivity of 88%, specificity of 96%, positive predictive value of 96%, and negative predictive value of 87% for identifying HFpEF cases.
Conclusion: We developed a sensitive, highly specific algorithm for detecting HFpEF in a large national database. This approach may be applicable to other large EMR databases to identify HFpEF patients. (A schematic sketch of this inclusion logic follows this record.)
- Published
- 2018
- Full Text
- View/download PDF
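A minimal sketch of the inclusion logic stated in record 8, applied to a toy patient-level table; the column names are hypothetical and do not reflect the VA EMR schema or the NLP extraction of EF values.

```python
# Illustrative only: HFpEF inclusion = any ICD-9 428.xx code, all EF >= 50%,
# and (BNP/NT-proBNP recorded OR diuretic within one month of HF diagnosis).
import pandas as pd

patients = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "icd9_codes": [["428.0"], ["428.32", "401.9"], ["414.01"], ["428.0"]],
    "ef_values": [[55, 60], [52, 45], [65], [58]],
    "has_bnp_or_ntprobnp": [True, True, False, False],
    "days_hf_dx_to_diuretic": [None, 5, None, 20],
})

def is_hfpef(row):
    has_hf_code = any(code.startswith("428") for code in row["icd9_codes"])
    all_ef_preserved = all(ef >= 50 for ef in row["ef_values"])
    lab_or_diuretic = row["has_bnp_or_ntprobnp"] or (
        pd.notna(row["days_hf_dx_to_diuretic"])
        and row["days_hf_dx_to_diuretic"] <= 30
    )
    return has_hf_code and all_ef_preserved and lab_or_diuretic

patients["hfpef"] = patients.apply(is_hfpef, axis=1)
print(patients[["patient_id", "hfpef"]])
```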
9. Participatory Scaling Through Augmented Reality Learning Through Local Games
- Author
-
David R. Gagnon, John Martin, Kurt Squire, and Seann Dikkers
- Subjects
Social construction of technology, Multimedia, Design-based research, Computer science, Teaching method, Educational technology, Mobile computing, Citizen journalism, Computer Science Applications, Education, Augmented reality, Mobile device
- Abstract
The proliferation of broadband mobile devices, which many students bring to school with them as mobile phones, makes the widespread adoption of AR pedagogies a possibility, but pedagogical, distribution, and training models are needed to make this innovation an integrated part of education. This paper employs the Social Construction of Technology (SCOT) framework to argue for a participatory model of scaling by key stakeholder groups (students, teachers, researchers, administrators), and demonstrates through various cases how ARIS (arisgames.org), a free, open-source tool for educators to create and disseminate mobile AR learning experiences, may be such a model.
- Published
- 2013
- Full Text
- View/download PDF
10. [Untitled]
- Author
-
Mark E. Glickman and David R. Gagnon
- Subjects
Framingham Heart Study, Applied Mathematics, Causal inference, Statistics, Covariate, Genetic predisposition, Population study, Observational study, General Medicine, Disease, Biology, Demography, Cohort study
- Abstract
Many late-onset diseases are caused by what appears to be a combination of a genetic predisposition to disease and environmental factors. The use of existing cohort studies provides an opportunity to infer genetic predisposition to disease in a representative sample of a study population, now that many such studies are gathering genetic information on their participants. One complication of using existing cohorts is that subjects may be censored due to death prior to genetic sampling, which adds a layer of complexity to the analysis. We develop a statistical framework to infer the parameters of a latent variables model for disease onset. The latent variables model describes the role of genetic and modifiable risk factors on the onset ages of multiple diseases, and accounts for right-censoring of disease onset ages. The framework also allows for missing genetic information by inferring a subject's unknown genotype through appropriately incorporated covariate information. The model is applied to data gathered in the Framingham Heart Study to measure the effect of different Apo-E genotypes on the occurrence of various cardiovascular disease events. (A schematic likelihood for this setting follows this record.)
- Published
- 2002
- Full Text
- View/download PDF
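A schematic single-disease likelihood consistent with the ingredients described in record 10 (right-censored onset ages and a possibly missing genotype handled by marginalizing over covariate-based genotype probabilities); the paper's exact multi-disease specification is not reproduced here.

```latex
% Contribution of subject i with onset/censoring age t_i, event indicator
% \delta_i, covariates x_i, and genotype G_i marginalized when unobserved:
L_i(\theta) \;=\; \sum_{g} \Pr(G_i = g \mid \mathbf{x}_i)\,
    f(t_i \mid g, \mathbf{x}_i; \theta)^{\delta_i}\,
    S(t_i \mid g, \mathbf{x}_i; \theta)^{1-\delta_i}
```

Here f and S are the onset-age density and survival function; when the genotype is observed, the sum collapses to the single observed value of g.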
11. Impact of misspecifying the distribution of a prognostic factor on power and sample size for testing treatment interactions in clinical trials
- Author
-
David R. Gagnon, Elena Losina, Michael P. LaValley, and William M. Reichmann
- Subjects
Interaction, Sample size re-estimation, Epidemiology, Health Informatics, Clinical Protocols, Statistics, Econometrics, Humans, Mathematics, Clinical Trials as Topic, Models, Statistical, Adaptive design, Conditional power, Prognosis, Outcome, Power, Clinical trial, Distribution, Research Design, Sample size determination, Sample Size, Quota sampling, Simulation design, Research Article, Type I and type II errors
- Abstract
Background: Interaction in clinical trials presents challenges for design and appropriate sample size estimation. Here we considered interaction between treatment assignment and a dichotomous prognostic factor with a continuous outcome. Our objectives were to describe differences in power and sample size requirements across alternative distributions of a prognostic factor and magnitudes of the interaction effect, describe the effect of misspecification of the distribution of the prognostic factor on the power to detect an interaction effect, and discuss and compare three methods of handling the misspecification of the prognostic factor distribution.
Methods: We examined the impact of the distribution of the dichotomous prognostic factor on power and sample size for the interaction effect using traditional one-stage sample size calculation. We varied the magnitude of the interaction effect, the distribution of the prognostic factor, and the magnitude and direction of the misspecification of the distribution of the prognostic factor. We compared quota sampling, modified quota sampling, and sample size re-estimation using conditional power as three strategies for ensuring adequate power and type I error in the presence of a misspecification of the prognostic factor distribution.
Results: The sample size required to detect an interaction effect with 80% power increases as the distribution of the prognostic factor becomes less balanced. Misspecification such that the actual distribution of the prognostic factor was more skewed than planned led to a decrease in power, with the greatest loss in power seen as the distribution of the prognostic factor became less balanced. Quota sampling was able to maintain the empirical power at 80% and the empirical type I error at 5%. The performance of the modified quota sampling procedure was related to the percentage of trials switching the quota sampling scheme. Sample size re-estimation using conditional power was able to improve the empirical power under negative misspecifications (i.e., skewed distributions), but it was not able to reach the target of 80% in all situations.
Conclusions: Misspecifying the distribution of a dichotomous prognostic factor can greatly impact power to detect an interaction effect. Modified quota sampling and sample size re-estimation using conditional power improve the power when the distribution of the prognostic factor is misspecified. Quota sampling is simple and can prevent misspecification of the prognostic factor, while maintaining power and type I error. (An illustrative power simulation follows this record.)
- Published
- 2013
- Full Text
- View/download PDF
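An illustrative simulation of the central point in record 11: with a continuous outcome, empirical power to detect a treatment-by-prognostic-factor interaction falls as the dichotomous factor's distribution becomes less balanced. The sample size, effect sizes, and simulation counts below are arbitrary choices, not the paper's settings.

```python
# Illustrative only: empirical power for the interaction term in an OLS model
# y = b0 + b1*treat + b2*factor + b3*treat*factor + error, across prevalences.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

def interaction_power(prev, n=400, interaction=0.5, n_sim=500, alpha=0.05):
    hits = 0
    for _ in range(n_sim):
        treat = rng.integers(0, 2, n)      # randomized treatment assignment
        factor = rng.binomial(1, prev, n)  # dichotomous prognostic factor
        y = (1.0 + 0.3 * treat + 0.4 * factor
             + interaction * treat * factor + rng.normal(0, 1, n))
        X = sm.add_constant(np.column_stack([treat, factor, treat * factor]))
        pval = sm.OLS(y, X).fit().pvalues[-1]  # p-value of the interaction term
        hits += pval < alpha
    return hits / n_sim

for prev in (0.5, 0.3, 0.1):
    print(f"prevalence {prev:.1f}: empirical power {interaction_power(prev):.2f}")
```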
12. Conductivity anisotropy in oriented poly(p-phenylene vinylene)
- Author
-
David R. Gagnon, Frank E. Karasz, Robert W. Lenz, and James D. Capistran
- Subjects
Conductive polymer, Materials science, Polymers and Plastics, Doping, Poly(p-phenylene vinylene), General Chemistry, Polymer, Conductivity, Condensed Matter Physics, Amorphous solid, Crystallography, Electrical resistivity and conductivity, Polymer chemistry, Materials Chemistry, Anisotropy
- Abstract
The study of charge transport mechanisms in highly conjugated conducting polymers has historically been hampered by the complex and invariant morphologies of the best conductors. We have prepared amorphous and uniaxially oriented films of poly(p-phenylene vinylene) (PPV) which exhibit a large conductivity anisotropy proportional to the degree of molecular orientation. The conductivity of the AsF5-doped PPV, together with wide-angle X-ray and IR characterization of these samples, is reported.
- Published
- 1984
- Full Text
- View/download PDF
13. Synthesis and electrical conductivity of AsF5-doped poly(arylene vinylenes)
- Author
-
David R. Gagnon, Frank E. Karasz, R. W. Lenz, and S. Antoun
- Subjects
Aqueous solution, Materials science, Polymers and Plastics, Sulfonium, Arylene, Doping, General Chemistry, Polymer, Condensed Matter Physics, Polyelectrolyte, Polymerization, Phenylene, Polymer chemistry, Materials Chemistry
- Abstract
A series of polymers containing 2,5-disubstituted phenylene vinylene units, and the polymer containing 1,4-naphthalene vinylene units, were prepared by polymerization of their bis(sulfonium salts) through a base-elimination reaction in solution. Films of these polymers were cast from aqueous solution and chemically treated (doped) with AsF5 vapor. The electrical conductivities of the doped films varied greatly with changes in polymer structure; the highest value, 1.8 ohm−1 cm−1, was obtained for poly(2,5-dimethoxyphenylene vinylene).
- Published
- 1986
- Full Text
- View/download PDF