26 results for "Schuemie MJ"
Search Results
2. Serially Combining Epidemiological Designs Does Not Improve Overall Signal Detection in Vaccine Safety Surveillance.
- Author
- Arshad F, Schuemie MJ, Bu F, Minty EP, Alshammari TM, Lai LYH, Duarte-Salles T, Fortin S, Nyberg F, Ryan PB, Hripcsak G, Prieto-Alhambra D, and Suchard MA
- Subjects
- Humans, Sensitivity and Specificity, Research Design, Databases, Factual, Electronic Health Records, Vaccines adverse effects
- Abstract
Introduction: Vaccine safety surveillance commonly includes a serial testing approach with a sensitive method for 'signal generation' and a specific method for 'signal validation.' The extent to which serial testing in real-world studies improves or hinders overall performance in terms of sensitivity and specificity remains unknown., Methods: We assessed the overall performance of serial testing using three administrative claims and one electronic health record database. We compared type I and II errors before and after empirical calibration for the historical comparator design, self-controlled case series (SCCS), and the serial combination of those designs against six vaccine exposure groups with 93 negative control and 279 imputed positive control outcomes., Results: The historical comparator design mostly had fewer type II errors than SCCS. SCCS had fewer type I errors than the historical comparator. Before empirical calibration, the serial combination increased specificity and decreased sensitivity. Type II errors mostly exceeded 50%. After empirical calibration, type I errors returned to nominal; sensitivity was lowest when the methods were combined., Conclusion: While serial combination produced fewer false-positive signals compared with the most specific method, it generated more false-negative signals compared with the most sensitive method. Using a historical comparator design followed by an SCCS analysis yielded decreased sensitivity in evaluating safety signals relative to a one-stage SCCS approach. While the current use of serial testing in vaccine surveillance may provide a practical paradigm for signal identification and triage, single epidemiological designs should be explored as valuable approaches to detecting signals., (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
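The serial-testing trade-off described in the abstract above can be sketched numerically. Under an independence assumption (a simplification not claimed by the study), combining a 'signal generation' method and a 'signal validation' method in series multiplies their sensitivities while shrinking the combined type I error. All values below are hypothetical illustrations, not the study's results:

```python
# Serial combination of two signal-detection methods: an exposure-outcome
# pair is flagged only if BOTH methods flag it. Assuming independence,
# the combined sensitivity is the product of the individual sensitivities,
# and the combined false-positive rate (type I error) is the product of
# the individual false-positive rates.

def serial_performance(sens_a, spec_a, sens_b, spec_b):
    """Sensitivity and specificity of applying method A, then method B."""
    sens = sens_a * sens_b             # both must detect a true signal
    fpr = (1 - spec_a) * (1 - spec_b)  # both must falsely flag a negative
    return sens, 1 - fpr

# Hypothetical values: a sensitive 'generation' method followed by a
# specific 'validation' method.
sens, spec = serial_performance(0.90, 0.70, 0.60, 0.95)
print(sens, spec)  # sensitivity drops to ~0.54; specificity rises to ~0.985
```

This makes the abstract's conclusion concrete: specificity improves, but sensitivity falls below that of either method alone.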
3. Hip Fracture Risk After Treatment with Tramadol or Codeine: An Observational Study.
- Author
- Voss EA, Ali SR, Singh A, Rijnbeek PR, Schuemie MJ, and Fife D
- Subjects
- Aged, Analgesics, Opioid adverse effects, Codeine adverse effects, Humans, Pain drug therapy, Quality of Life, Hip Fractures chemically induced, Hip Fractures drug therapy, Hip Fractures epidemiology, Tramadol adverse effects
- Abstract
Introduction: Hip fractures among older people are a major public health issue, which can impact quality of life and increase mortality within the year after they occur. A recent observational study found an increased risk of hip fracture in subjects who were new users of tramadol compared with codeine. These drugs have somewhat different indications. Tramadol is indicated for moderate to severe pain and can be used for an extended period; codeine is indicated for mild to moderate pain and cough suppression., Objective: In this observational study, we compared the risk of hip fracture in new users of tramadol or codeine, using multiple databases and analytical methods., Methods: Using data from the Clinical Practice Research Datalink and three US claims databases, we compared the risk of hip fracture after exposure to tramadol or codeine in subjects aged 50-89 years. To ensure comparability, large-scale propensity scores were used to adjust for confounding., Results: We observed a calibrated hazard ratio of 1.10 (95% calibrated confidence interval 0.99-1.21) in the Clinical Practice Research Datalink database, and a pooled estimate across the US databases yielded a calibrated hazard ratio of 1.06 (95% calibrated confidence interval 0.97-1.16)., Conclusions: Our results did not demonstrate a statistically significant difference between subjects treated for pain with tramadol compared with codeine for the outcome of hip fracture risk., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
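The "pooled estimate across the US databases" in the abstract above suggests inverse-variance pooling of log hazard ratios. A minimal fixed-effect sketch (the study's actual pooling method may differ), with made-up per-database estimates rather than the study's data:

```python
import math

def pool_hazard_ratios(hrs, cis, z=1.96):
    """Fixed-effect inverse-variance pooling of hazard ratios.

    hrs: per-database hazard ratios; cis: matching (lower, upper) 95% CIs.
    The standard error of each log-HR is recovered from its CI width.
    """
    log_hrs = [math.log(hr) for hr in hrs]
    ses = [(math.log(hi) - math.log(lo)) / (2 * z) for lo, hi in cis]
    weights = [1 / se ** 2 for se in ses]
    pooled_log = sum(w * l for w, l in zip(weights, log_hrs)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - z * pooled_se),
            math.exp(pooled_log + z * pooled_se))

# Hypothetical per-database estimates, not the study's actual values:
hr, lo, hi = pool_hazard_ratios([1.08, 1.02, 1.10],
                                [(0.95, 1.23), (0.88, 1.18), (0.94, 1.29)])
print(round(hr, 2), round(lo, 2), round(hi, 2))
```

Pooling narrows the interval relative to any single database, which is why a null result across several databases is more informative than one from any source alone.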
4. Channeling Bias in the Analysis of Risk of Myocardial Infarction, Stroke, Gastrointestinal Bleeding, and Acute Renal Failure with the Use of Paracetamol Compared with Ibuprofen.
- Author
- Weinstein RB, Ryan PB, Berlin JA, Schuemie MJ, Swerdel J, and Fife D
- Subjects
- Acute Kidney Injury chemically induced, Acute Kidney Injury epidemiology, Adolescent, Adult, Aged, Aged, 80 and over, Bias, Cohort Studies, Female, Gastrointestinal Hemorrhage chemically induced, Gastrointestinal Hemorrhage epidemiology, Humans, Male, Middle Aged, Myocardial Infarction chemically induced, Myocardial Infarction epidemiology, Risk Factors, Stroke chemically induced, Stroke epidemiology, United Kingdom epidemiology, Young Adult, Acetaminophen adverse effects, Anti-Inflammatory Agents, Non-Steroidal adverse effects, Drug-Related Side Effects and Adverse Reactions epidemiology, Ibuprofen adverse effects, Propensity Score
- Abstract
Introduction: Observational studies estimating severe outcomes for paracetamol versus ibuprofen use have acknowledged the specific challenge of channeling bias. A previous study relying on negative controls suggested that using large-scale propensity score (LSPS) matching may mitigate bias better than models using limited lists of covariates., Objective: The aim was to assess whether LSPS matching would enable evaluation of whether paracetamol, compared with ibuprofen, is associated with an increased risk of myocardial infarction, stroke, gastrointestinal (GI) bleeding, or acute renal failure., Study Design and Setting: In a new-user cohort study, we used two propensity score model strategies for confounder control. One replicated the approach of controlling for a hand-picked list of covariates. The second used LSPSs based on all available covariates for matching. Positive and negative controls were used to assess residual confounding and to calibrate confidence intervals. The data source was the Clinical Practice Research Datalink (CPRD)., Results: A substantial proportion of negative controls were statistically significant after propensity score matching on the publication covariates, indicating considerable systematic error. LSPS adjustment was less biased, but residual error remained. The calibrated estimates resulted in very wide confidence intervals, indicating large uncertainty in effect estimates once residual error was incorporated., Conclusions: For paracetamol versus ibuprofen, when using LSPS methods in the CPRD, it is only possible to distinguish true effects if those effects are large (hazard ratio > 2). Due to their smaller hazard ratios, the outcomes under study cannot be differentiated from null effects (represented by negative controls) even if there were a true effect.
Based on these data, we conclude that we are unable to determine whether paracetamol is associated with an increased risk of myocardial infarction, stroke, GI bleeding, and acute renal failure compared to ibuprofen, due to residual confounding.
- Published
- 2020
- Full Text
- View/download PDF
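The negative-control calibration referred to in this abstract fits an empirical null distribution to estimates from exposure-outcome pairs believed to have no true effect, then tests new estimates against that null rather than against zero. A deliberately simplified sketch in the spirit of the OHDSI empirical-calibration approach: it ignores the sampling error of each individual negative control (the published method accounts for it), and all negative-control values are invented:

```python
import math
from statistics import mean, stdev

def calibrated_p_value(log_estimate, se, null_log_estimates):
    """Simplified empirical calibration.

    Fits a normal null distribution (mean mu, spread tau) to log-scale
    estimates from negative controls, then tests the new log-estimate
    against that empirical null, combining systematic and sampling
    variance. Sampling error of each negative control is ignored here
    for brevity.
    """
    mu = mean(null_log_estimates)
    tau = stdev(null_log_estimates)
    z = (log_estimate - mu) / math.sqrt(tau ** 2 + se ** 2)
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical negative-control estimates, biased upward on the log scale:
nulls = [0.15, 0.05, 0.25, 0.10, 0.20, 0.00, 0.30, 0.18]
print(calibrated_p_value(math.log(1.5), 0.10, nulls))
```

Against a conventional null, a hazard ratio of 1.5 with SE 0.10 would look highly significant; against the biased empirical null it does not, which is exactly the "very wide calibrated confidence intervals" phenomenon the abstract describes.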
5. Atypical Antipsychotics and the Risks of Acute Kidney Injury and Related Outcomes Among Older Adults: A Replication Analysis and an Evaluation of Adapted Confounding Control Strategies.
- Author
- Ryan PB, Schuemie MJ, Ramcharran D, and Stang PE
- Subjects
- Aged, Aged, 80 and over, Benzodiazepines adverse effects, Bipolar Disorder drug therapy, Depressive Disorder, Major drug therapy, Female, Humans, Male, Olanzapine, Quetiapine Fumarate adverse effects, Risperidone adverse effects, Schizophrenia drug therapy, Acute Kidney Injury chemically induced, Antipsychotic Agents adverse effects
- Abstract
Objective: A recently published analysis of population-based claims data from Ontario, Canada reported higher risks of acute kidney injury (AKI) and related outcomes among older adults who were new users of atypical antipsychotics (AAPs) compared with unexposed patients. In light of these findings, the objective of the current study was to further investigate the risks of AKI and related outcomes among older adults receiving AAPs., Methods: A replication of the previously published analysis was performed using the US Truven MarketScan Medicare Supplemental database (MDCR) among patients aged 65 years and older. The study compared the risk of AKI and related outcomes between users of AAPs (quetiapine, risperidone, olanzapine, aripiprazole, or paliperidone) and non-users, using a 1-to-1 propensity score matched analysis. In addition, we performed adapted analyses that: (1) included all covariates used to fit propensity score models in outcome models; and (2) required patients to have a diagnosis of schizophrenia, bipolar disorder, or major depression and a healthcare visit within 90 days prior to the index date., Results: AKI effect estimates [as odds ratios (ORs) with 95% confidence intervals (CIs)] were significantly elevated in our MDCR replication analyses (OR 1.45, 95% CI 1.32-1.60); however, in adapted analyses, associations were not significant (OR 0.91, 95% CI 0.78-1.07). In analyses of AKI and related outcomes, results were mostly consistent between the previously published and the MDCR replication analyses. The primary change that attenuated associations in adapted analyses was the requirement for patients to have a mental health condition and a healthcare visit prior to the index date., Conclusions: The MDCR analysis yielded similar results when the methodology of the previously published analysis was replicated, but, in adapted analyses, we did not find significantly higher risks of AKI and related outcomes.
The contrast of results between our replication and adapted analyses may be due to the analytic approach used to compare patients (and potential confounding by indication). Further research is warranted to evaluate these associations, while also examining methods to account for differences in older adults who do and do not use these medications.
- Published
- 2017
- Full Text
- View/download PDF
6. Useful Interplay Between Spontaneous ADR Reports and Electronic Healthcare Records in Signal Detection.
- Author
- Pacurariu AC, Straus SM, Trifirò G, Schuemie MJ, Gini R, Herings R, Mazzaglia G, Picelli G, Scotti L, Pedersen L, Arlett P, van der Lei J, Sturkenboom MC, and Coloma PM
- Subjects
- Drug-Related Side Effects and Adverse Reactions epidemiology, Europe, Humans, Medical Records Systems, Computerized statistics & numerical data, Pharmacovigilance, Adverse Drug Reaction Reporting Systems, Data Mining, Databases, Factual, Electronic Health Records statistics & numerical data
- Abstract
Background and Objective: Spontaneous reporting systems (SRSs) remain the cornerstone of post-marketing drug safety surveillance despite their well-known limitations. Judicious use of other available data sources is essential to enable better detection, strengthening and validation of signals. In this study, we investigated the potential of electronic healthcare records (EHRs) to be used alongside an SRS as an independent system, with the aim of improving signal detection., Methods: A signal detection strategy, focused on a limited set of adverse events deemed important in pharmacovigilance, was performed retrospectively in two data sources, (1) the Exploring and Understanding Adverse Drug Reactions (EU-ADR) database network and (2) the EudraVigilance database, using data between 2000 and 2010. Five events were considered for analysis: (1) acute myocardial infarction (AMI); (2) bullous eruption; (3) hip fracture; (4) acute pancreatitis; and (5) upper gastrointestinal bleeding (UGIB). Potential signals identified in each system were verified using the current published literature. The complementarity of the two systems to detect signals was expressed as the percentage of the unilaterally identified signals out of the total number of confirmed signals. As a proxy for the associated costs, the number of signals that needed to be reviewed to detect one true signal (number needed to detect [NND]) was calculated. The relationship between the background frequency of the events and the capability of each system to detect signals was also investigated., Results: The contribution of each system to signal detection appeared to be correlated with the background incidence of the events, being directly proportional to the incidence in EU-ADR and inversely proportional in EudraVigilance.
EudraVigilance was particularly valuable in identifying bullous eruption and acute pancreatitis (71 and 42 % of signals were correctly identified from the total pool of known associations, respectively), while EU-ADR was most useful in identifying hip fractures (60 %). Both systems contributed reasonably well to identification of signals related to UGIB (45 % in EudraVigilance, 40 % in EU-ADR) but only fairly for signals related to AMI (25 % in EU-ADR, 20 % in EudraVigilance). The costs associated with detection of signals were variable across events; however, it was often more costly to detect safety signals in EU-ADR than in EudraVigilance (median NNDs: 7 versus 5)., Conclusion: An EHR-based system may have additional value for signal detection, alongside already established systems, especially in the presence of adverse events with a high background incidence. While the SRS appeared to be more cost effective overall, for some events the costs associated with signal detection in the EHR might be justifiable.
- Published
- 2015
- Full Text
- View/download PDF
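The "number needed to detect" (NND) used as a cost proxy in the abstract above is simply the reciprocal of the positive predictive value of the screening step. A minimal sketch with hypothetical counts, not the study's data:

```python
def number_needed_to_detect(signals_reviewed, true_signals):
    """Signals that must be reviewed to find one true signal
    (1 / positive predictive value of the screening step)."""
    if true_signals == 0:
        raise ValueError("no true signals detected")
    return signals_reviewed / true_signals

# Hypothetical counts: reviewing 35 potential signals of which 7 are
# confirmed gives an NND of 5, matching the order of magnitude of the
# median NNDs reported in the abstract (7 vs 5).
print(number_needed_to_detect(35, 7))  # 5.0
```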
7. Use of adjectives in abstracts when reporting results of randomized, controlled trials from industry and academia.
- Author
- Cepeda MS, Berlin JA, Glasser SC, Battisti WP, and Schuemie MJ
- Subjects
- Authorship standards, Humans, Publishing standards, Trust, Abstracting and Indexing standards, Periodicals as Topic standards, Randomized Controlled Trials as Topic standards, Terminology as Topic
- Abstract
Objective: Accurate representation of study findings is crucial to preserve public trust. The language used to describe results could affect perceptions of the efficacy or safety of interventions. We sought to compare the adjectives used in clinical trial reports of industry-authored and non-industry-authored research., Methods: We included studies in PubMed that were randomized trials and had an abstract. Studies were classified as "non-industry-authored" when all authors had academic or governmental affiliations, or as "industry-authored" when any of the authors had industry affiliations. Abstracts were analyzed using a part-of-speech tagger to identify adjectives. To reduce the risk of false positives, the analysis was restricted to adjectives considered relevant to "coloring" (influencing interpretation) of trial results. Differences between groups were determined using exact tests, stratifying by journal., Results: A total of 306,007 publications met the inclusion criteria. We were able to classify 16,789 abstracts; 9,085 were industry-authored research, and 7,704 were non-industry-authored research. We found a differential use of adjectives between industry-authored and non-industry-authored reports. Adjectives such as "well tolerated" and "meaningful" were more commonly used in the title or conclusion of the abstract by industry authors, while adjectives such as "feasible" were more commonly used by non-industry authors., Conclusions: There are differences in the adjectives used when study findings are described in industry-authored reports compared with non-industry-authored reports. Authors should avoid overusing adjectives that could be inaccurate or result in misperceptions. Editors and peer reviewers should be attentive to the use of adjectives and assess whether the usage is context appropriate.
- Published
- 2015
- Full Text
- View/download PDF
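The pipeline described in the abstract above (part-of-speech tagging, then restriction to a curated list of 'coloring' adjectives) can be approximated by counting curated terms directly in abstract text. The term list below is a tiny hypothetical stand-in for the study's curated list, and the sketch skips the POS-tagging step entirely:

```python
import re
from collections import Counter

# Hypothetical 'coloring' terms; the study used a part-of-speech tagger
# plus a curated adjective list, which this sketch only approximates.
COLORING_TERMS = {"well tolerated", "meaningful", "feasible",
                  "favorable", "robust"}

def count_coloring_terms(abstract):
    """Count whole-word occurrences of each curated term in an abstract."""
    text = abstract.lower()
    counts = Counter()
    for term in COLORING_TERMS:
        counts[term] = len(re.findall(r"\b" + re.escape(term) + r"\b", text))
    return counts

counts = count_coloring_terms(
    "The drug was well tolerated and produced a meaningful, "
    "clinically feasible improvement.")
print(counts["well tolerated"], counts["meaningful"], counts["feasible"])  # 1 1 1
```

Aggregating such counts per abstract, stratified by author affiliation and journal, is the kind of comparison the study's exact tests operate on.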
8. Authors' reply to Hennessy and Leonard's comment on "Desideratum for evidence-based epidemiology".
- Author
- Overhage JM, Ryan PB, Schuemie MJ, and Stang PE
- Subjects
- Humans, Drug-Related Side Effects and Adverse Reactions diagnosis, Epidemiologic Studies, Research Design
- Published
- 2015
- Full Text
- View/download PDF
9. Bridging islands of information to establish an integrated knowledge base of drugs and health outcomes of interest.
- Author
- Boyce RD, Ryan PB, Norén GN, Schuemie MJ, Reich C, Duke J, Tatonetti NP, Trifirò G, Harpaz R, Overhage JM, Hartzema AG, Khayter M, Voss EA, Lambert CG, Huser V, and Dumontier M
- Subjects
- Humans, Reference Standards, Databases, Pharmaceutical standards, Evidence-Based Medicine
- Abstract
The entire drug safety enterprise has a need to search, retrieve, evaluate, and synthesize scientific evidence more efficiently. This discovery and synthesis process would be greatly accelerated through access to a common framework that brings all relevant information sources together within a standardized structure. This presents an opportunity to establish an open-source community effort to develop a global knowledge base, one that brings together and standardizes all available information for all drugs and all health outcomes of interest (HOIs) from all electronic sources pertinent to drug safety. To make this vision a reality, we have established a workgroup within the Observational Health Data Sciences and Informatics (OHDSI, http://ohdsi.org) collaborative. The workgroup's mission is to develop an open-source standardized knowledge base for the effects of medical products and an efficient procedure for maintaining and expanding it. The knowledge base will make it simpler for practitioners to access, retrieve, and synthesize evidence so that they can reach a rigorous and accurate assessment of causal relationships between a given drug and HOI. Development of the knowledge base will proceed with the measurable goal of supporting an efficient and thorough evidence-based assessment of the effects of 1,000 active ingredients across 100 HOIs. This non-trivial task will result in a high-quality and generally applicable drug safety knowledge base. It will also yield a reference standard of drug-HOI pairs that will enable more advanced methodological research that empirically evaluates the performance of drug safety analysis methods.
- Published
- 2014
- Full Text
- View/download PDF
10. Signal detection of potentially drug-induced acute liver injury in children using a multi-country healthcare database network.
- Author
- Ferrajolo C, Coloma PM, Verhamme KM, Schuemie MJ, de Bie S, Gini R, Herings R, Mazzaglia G, Picelli G, Giaquinto C, Scotti L, Avillach P, Pedersen L, Rossi F, Capuano A, van der Lei J, Trifiró G, and Sturkenboom MC
- Subjects
- Adolescent, Chemical and Drug Induced Liver Injury etiology, Child, Child Welfare, Child, Preschool, Databases, Factual, Drug-Related Side Effects and Adverse Reactions etiology, Electronic Health Records, European Union, Humans, Infant, Infant, Newborn, International Cooperation, Liver Failure, Acute etiology, Adverse Drug Reaction Reporting Systems organization & administration, Adverse Drug Reaction Reporting Systems statistics & numerical data, Chemical and Drug Induced Liver Injury epidemiology, Data Mining, Drug-Related Side Effects and Adverse Reactions epidemiology, Liver Failure, Acute epidemiology
- Abstract
Background: Data mining in spontaneous reporting databases has shown that drug-induced liver injury is infrequently reported in children., Objectives: Our objectives were to (i) identify drugs potentially associated with acute liver injury (ALI) in children and adolescents using electronic healthcare record (EHR) data; and (ii) evaluate the significance and novelty of these associations., Methods: We identified potential cases of ALI during exposure to any prescribed/dispensed drug for individuals <18 years old from the EU-ADR network, which includes seven databases from three countries, covering the years 1996-2010. Several new methods for signal detection were applied to identify all statistically significant associations between drugs and ALI. A drug was considered statistically significantly associated with ALI, using all other time as a reference category, if the lower bound of the 95% CI of the relative risk was >1 and in the presence of at least three exposed cases of ALI. Potentially new signals were distinguished from already known associations concerning ALI (whether in adults and/or in the paediatric population) through manual review of published literature and drug product labels., Results: The study population comprised 4,838,146 individuals aged <18 years, who contributed an overall 25,575,132 person-years of follow-up. Within this population, we identified 1,015 potential cases of ALI. Overall, 20 positive drug-ALI associations were detected. The associations between ALI and domperidone, flunisolide and human insulin were considered as potentially new signals.
Citalopram and cetirizine have been previously described as hepatotoxic in adults but not in children, while all remaining associations were already known in both adults and children., Conclusions: Data mining of multiple EHR databases for signal detection confirmed known associations between ALI and several drugs, and identified some potentially new signals in children that require further investigation through formal epidemiologic studies. This study shows that EHRs may complement traditional spontaneous reporting systems for signal detection and strengthening.
- Published
- 2014
- Full Text
- View/download PDF
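The screening rule in the abstract above (flag a drug when the lower bound of the 95% CI of the relative risk exceeds 1 and at least three exposed cases are observed) can be sketched directly. The standard-error formula below is a crude log-normal approximation of my own choosing; the study's exact variance calculation may differ, and the counts are hypothetical:

```python
import math

def is_signal(exposed_cases, expected_cases, z=1.96, min_cases=3):
    """Flag a drug-ALI pair when the lower bound of the 95% CI of the
    relative risk exceeds 1 and at least `min_cases` exposed cases are
    observed. SE = 1/sqrt(observed cases) is a rough approximation."""
    if exposed_cases < min_cases or expected_cases <= 0:
        return False
    rr = exposed_cases / expected_cases
    se_log_rr = 1 / math.sqrt(exposed_cases)  # crude SE (assumption)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    return lower > 1

print(is_signal(8, 2.0))  # RR = 4 with 8 exposed cases -> flagged
print(is_signal(2, 0.1))  # fewer than 3 exposed cases -> not flagged
```

The minimum-case requirement is what keeps very rare, unstable estimates from generating spurious signals even when the point estimate is huge.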
11. Idiopathic acute liver injury in paediatric outpatients: incidence and signal detection in two European countries.
- Author
- Ferrajolo C, Verhamme KM, Trifirò G, 't Jong GW, Giaquinto C, Picelli G, Oteri A, de Bie S, Valkhoff VE, Schuemie MJ, Mazzaglia G, Cricelli C, Rossi F, Capuano A, and Sturkenboom MC
- Subjects
- Adolescent, Ambulatory Care, Child, Child, Preschool, Cohort Studies, Databases, Factual, Electronic Health Records, Female, Humans, Incidence, Infant, Infant, Newborn, Italy epidemiology, Male, Netherlands epidemiology, Retrospective Studies, Liver Failure, Acute diagnosis, Liver Failure, Acute epidemiology, Liver Failure, Acute etiology
- Abstract
Background: Acute liver failure is idiopathic and drug-related in, respectively, around 50 and 15 % of children. Population-based, epidemiologic data about the pattern of disease manifestation and incidence of less severe acute liver injury, either idiopathic or potentially drug-attributed, are limited in children and adolescents., Objectives: (i) To assess the incidence of idiopathic acute liver injury (ALI) and its clinical features in children and adolescent outpatients; and (ii) to investigate the role of drugs as a potential cause of ALI that is considered idiopathic., Methods: A retrospective cohort study was performed during the years 2000-2008. Data were retrieved from three longitudinal electronic healthcare databases in two European countries: Pedianet and Health Search/CSD Longitudinal Patient Database from Italy and the Integrated Primary Care Information database from The Netherlands. Cases of idiopathic acute liver injury in the population aged <18 years were identified by exclusion of all competing causes of liver injury (e.g. viral, autoimmune hepatitis), according to CIOMS criteria. The potential role of drug exposure as the actual underlying cause of idiopathic ALI was assessed through signal detection mining techniques. Both pooled and country-specific incidence rates [IR/100,000 person-years (PYs)] of idiopathic ALI and pooled adjusted rate ratios (RR) of drugs identified as a potential cause of idiopathic ALI, plus 95 % confidence intervals (CI), were estimated using the custom-built software Jerboa., Results: Among 785 definite cases of idiopathic ALI, the pooled IR was 62.4/100,000 PYs (95 % CI 58.1-66.8). The country-specific IR was higher in Italy (73.0/100,000 PYs, 95 % CI 67.8-78.4) than in The Netherlands (21.0/100,000 PYs, 95 % CI 16.0-27.2) and increased with age in both countries. 
Isolated elevations of liver enzymes were reported in around two-thirds of cases in Italy, while in The Netherlands the cases were more often identified by a combination of signs/symptoms. Among drugs detected as potential underlying causes of idiopathic ALI, clarithromycin (RR 25.9, 95 % CI 13.4-50), amoxicillin/clavulanic acid (RR 18.6, 95 % CI 11.3-30.6), and amoxicillin (RR 7.5, 95 % CI 3.4-16.8) were associated with the highest risk compared to non-use., Conclusion: The incidence of idiopathic ALI in paediatrics is relatively low and comparable with adults. Clinical presentations differ between the two European countries. Signal detection in healthcare databases identified antibiotics as the drugs most strongly associated with ALI of apparently unknown aetiology.
- Published
- 2013
- Full Text
- View/download PDF
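The incidence rates quoted in the abstract above follow the usual cases-per-person-years arithmetic. A minimal sketch with a normal-approximation CI on the log scale (the study used the custom-built software Jerboa, whose exact method may differ); the person-years figure below is back-calculated from the abstract's pooled rate, so it is approximate:

```python
import math

def incidence_rate(cases, person_years, per=100_000, z=1.96):
    """Incidence rate per `per` person-years with a log-scale
    normal-approximation 95% CI (SE of log rate = 1/sqrt(cases))."""
    rate = cases / person_years
    se_log = 1 / math.sqrt(cases)
    lo = math.exp(math.log(rate) - z * se_log)
    hi = math.exp(math.log(rate) + z * se_log)
    return rate * per, lo * per, hi * per

# 785 cases over roughly 1.26 million person-years reproduces the
# order of magnitude of the pooled estimate (~62 per 100,000 PYs).
rate, lo, hi = incidence_rate(785, 1_258_000)
print(round(rate, 1), round(lo, 1), round(hi, 1))
```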
12. Evaluation of disproportionality safety signaling applied to healthcare databases.
- Author
- DuMouchel W, Ryan PB, Schuemie MJ, and Madigan D
- Subjects
- Acute Kidney Injury chemically induced, Area Under Curve, Chemical and Drug Induced Liver Injury diagnosis, Gastrointestinal Hemorrhage chemically induced, Humans, Myocardial Infarction chemically induced, Probability, Retrospective Studies, Databases, Factual, Drug-Related Side Effects and Adverse Reactions diagnosis, Research Design
- Abstract
Objective: To evaluate the performance of a disproportionality design, commonly used for analysis of spontaneous reports data such as the FDA Adverse Event Reporting System database, as a potential analytical method for an adverse drug reaction risk identification system using healthcare data., Research Design: We tested the disproportionality design in 5 real observational healthcare databases and 6 simulated datasets, retrospectively studying the predictive accuracy of the method when applied to a collection of 165 positive controls and 234 negative controls across 4 outcomes: acute liver injury, acute myocardial infarction, acute kidney injury, and upper gastrointestinal bleeding., Measures: We estimate how well the method can be expected to identify true effects and discriminate from false findings and explore the statistical properties of the estimates the design generates. The primary measure was the area under the curve (AUC) of the receiver operating characteristic (ROC) curve., Results: For each combination of 4 outcomes and 5 databases, 48 versions of disproportionality analysis (DPA) were carried out and the AUC computed. The majority of the AUC values were in the range of 0.35 < AUC < 0.6, which is considered to be poor predictive accuracy, since the value AUC = 0.5 would be expected from mere random assignment. Several DPA versions achieved AUC of about 0.7 for the outcome Acute Renal Failure within the GE database. The best-performing DPA version overall across all 20 outcome-database combinations was the Bayesian Information Component method with no stratification by age and gender, using first occurrence of outcome and with assumed time-at-risk equal to duration of exposure + 30 d, but none were uniformly optimal. The relative risk estimates for the negative control drug-event combinations were very often biased either upward or downward by a factor of 2 or more. 
Coverage probabilities of confidence intervals from all methods were far below nominal., Conclusions: The disproportionality methods that we evaluated did not discriminate true positives from true negatives using healthcare data as they seem to do using spontaneous report data.
- Published
- 2013
- Full Text
- View/download PDF
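The AUC benchmark used throughout these method-evaluation papers has a simple interpretation: the probability that a randomly chosen positive control receives a higher score than a randomly chosen negative control. A minimal Mann-Whitney sketch with toy scores (the real evaluations use 165 positive and 234 negative controls):

```python
def auc(positive_scores, negative_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive control scores
    higher than a randomly chosen negative control, ties counting
    one half."""
    wins = 0.0
    for p in positive_scores:
        for n in negative_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positive_scores) * len(negative_scores))

# Toy relative-risk estimates: a method that barely separates positives
# from negatives hovers near 0.5, the value expected from random
# assignment, which is the abstract's benchmark for "poor" accuracy.
print(auc([1.1, 1.3, 0.9], [1.0, 1.2, 0.8]))
```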
13. Empirical performance of the calibrated self-controlled cohort analysis within temporal pattern discovery: lessons for developing a risk identification and analysis system.
- Author
- Norén GN, Bergvall T, Ryan PB, Juhlin K, Schuemie MJ, and Madigan D
- Subjects
- Area Under Curve, Bias, Calibration, Chemical and Drug Induced Liver Injury diagnosis, Databases, Factual, Electronic Health Records, Humans, Cohort Studies, Drug-Related Side Effects and Adverse Reactions diagnosis, Research Design, Risk Assessment methods
- Abstract
Background: Observational healthcare data offer the potential to identify adverse drug reactions that may be missed by spontaneous reporting. The self-controlled cohort analysis within the Temporal Pattern Discovery framework compares the observed-to-expected ratio of medical outcomes during post-exposure surveillance periods with those during a set of distinct pre-exposure control periods in the same patients. It utilizes an external control group to account for systematic differences between the different time periods, thus combining within- and between-patient confounder adjustment in a single measure., Objectives: To evaluate the performance of the calibrated self-controlled cohort analysis within Temporal Pattern Discovery as a tool for risk identification in observational healthcare data., Research Design: Different implementations of the calibrated self-controlled cohort analysis were applied to 399 drug-outcome pairs (165 positive and 234 negative test cases across 4 health outcomes of interest) in 5 real observational databases (four with administrative claims and one with electronic health records)., Measures: Performance was evaluated on real data through sensitivity/specificity, the area under the receiver operating characteristic curve (AUC), and bias., Results: The calibrated self-controlled cohort analysis achieved good predictive accuracy across the outcomes and databases under study. The optimal design based on this reference set uses a 360-day surveillance period and a single control period 180 days prior to new prescriptions. It achieved an average AUC of 0.75 and AUC >0.70 in all but one scenario. A design with three separate control periods performed better for the electronic health records database and for acute renal failure across all data sets. 
The estimates for negative test cases were generally unbiased, but a minor negative bias of up to 0.2 on the RR-scale was observed with the configurations using multiple control periods, for acute liver injury and upper gastrointestinal bleeding., Conclusions: The calibrated self-controlled cohort analysis within Temporal Pattern Discovery shows promise as a tool for risk identification; it performs well at discriminating positive from negative test cases. The optimal parameter configuration may vary with the data set and medical outcome of interest.
- Published
- 2013
- Full Text
- View/download PDF
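The core within-patient contrast of the abstract above compares the outcome rate in a post-exposure surveillance window with that in a pre-exposure control window in the same patients. The sketch below is a deliberate reduction: it omits the external control group and exact IC shrinkage of the published Temporal Pattern Discovery method, keeping only the observed-to-expected contrast on the log2 scale, with a small hypothetical stabilising term and invented counts:

```python
import math

def self_controlled_ic(obs_post, pyr_post, obs_ctrl, pyr_ctrl, shrink=0.5):
    """Simplified self-controlled contrast: log2 ratio of the outcome
    rate in a post-exposure surveillance window to the rate in a
    pre-exposure control window in the same patients. `shrink` is a
    small additive term to stabilise sparse counts (an assumption of
    this sketch, not the published shrinkage)."""
    rate_post = (obs_post + shrink) / pyr_post
    rate_ctrl = (obs_ctrl + shrink) / pyr_ctrl
    return math.log2(rate_post / rate_ctrl)

# Hypothetical counts: 12 events in 1,000 post-exposure person-years vs
# 4 events in 1,000 control-period person-years gives a positive value.
print(self_controlled_ic(12, 1000.0, 4, 1000.0))
```

Because both windows come from the same patients, time-invariant confounders cancel out of the ratio; the published method's extra control periods and external comparison address time-varying differences.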
14. Empirical performance of a new user cohort method: lessons for developing a risk identification and analysis system.
- Author
- Ryan PB, Schuemie MJ, Gruber S, Zorych I, and Madigan D
- Subjects
- Area Under Curve, Humans, Cohort Studies, Drug-Related Side Effects and Adverse Reactions diagnosis, Research Design, Risk Assessment methods
- Abstract
Background: Observational healthcare data offer the potential to enable identification of risks of medical products, but appropriate methodology has not yet been defined. The new user cohort method, which compares the post-exposure outcome rate among new users of a target drug with the rate in a referent comparator group, is the prevailing approach for many pharmacoepidemiology evaluations and has been proposed as a promising approach for risk identification, but its performance in this context has not been fully assessed., Objectives: To evaluate the performance of the new user cohort method as a tool for risk identification in observational healthcare data., Research Design: The method was applied to 399 drug-outcome scenarios (165 positive controls and 234 negative controls across 4 health outcomes of interest) in 5 real observational databases (4 administrative claims and 1 electronic health record) and in 6 simulated datasets with no effect and injected relative risks of 1.25, 1.5, 2, 4, and 10, respectively., Measures: Method performance was evaluated through area under the ROC curve (AUC), bias, and coverage probability., Results: The new user cohort method achieved modest predictive accuracy across the outcomes and databases under study, with the top-performing analyses reaching AUC >0.70 in most scenarios. The performance of the method was particularly sensitive to the choice of comparator population. For almost all drug-outcome pairs there was a large difference, either positive or negative, between the true effect size and the estimate produced by the method, although this error was near zero on average. Simulation studies showed that in the majority of cases, the true effect size was not within the 95% confidence interval produced by the method., Conclusion: The new user cohort method can contribute useful information toward a risk identification system, but should not be considered definitive evidence given the degree of error observed within the effect estimates. 
Careful consideration of comparator selection and appropriate calibration of the effect estimates are required to properly interpret study findings.
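The AUC criterion used throughout these evaluations reduces to a Mann-Whitney comparison of the effect estimates assigned to positive and negative controls. A minimal, self-contained sketch of that scoring (the relative-risk estimates below are invented for illustration, not taken from the study):

```python
def auc_from_controls(positive_estimates, negative_estimates):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive control receives a larger effect estimate
    than a randomly chosen negative control (ties count one half)."""
    wins = 0.0
    for p in positive_estimates:
        for n in negative_estimates:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positive_estimates) * len(negative_estimates))

# Hypothetical relative-risk estimates, for illustration only.
positives = [1.8, 2.4, 1.2, 3.1]  # drugs with a known adverse effect
negatives = [0.9, 1.1, 1.0, 1.3]  # drugs with no expected effect
print(round(auc_from_controls(positives, negatives), 3))  # prints 0.938
```

An AUC of 0.5 means the method ranks controls no better than chance; 1.0 means every positive control outranks every negative control.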
- Published
- 2013
- Full Text
- View/download PDF
15. Replication of the OMOP experiment in Europe: evaluating methods for risk identification in electronic health record databases.
- Author
-
Schuemie MJ, Gini R, Coloma PM, Straatman H, Herings RM, Pedersen L, Innocenti F, Mazzaglia G, Picelli G, van der Lei J, and Sturkenboom MC
- Subjects
- Area Under Curve, Europe, Humans, Databases, Factual, Drug-Related Side Effects and Adverse Reactions diagnosis, Electronic Health Records, Research Design, Risk Assessment methods
- Abstract
Background: The Observational Medical Outcomes Partnership (OMOP) has just completed a large-scale empirical evaluation of statistical methods and analysis choices for risk identification in longitudinal observational healthcare data. This experiment drew data from four large US health insurance claims databases and one US electronic health record (EHR) database, but it is unclear to what extent the findings of this study apply to other data sources., Objective: To replicate the OMOP experiment in six European EHR databases., Research Design: Six databases of the EU-ADR (Exploring and Understanding Adverse Drug Reactions) database network participated in this study: Aarhus (Denmark), ARS (Italy), HealthSearch (Italy), IPCI (the Netherlands), Pedianet (Italy), and Pharmo (the Netherlands). All methods in the OMOP experiment were applied to a collection of 165 positive and 234 negative control drug-outcome pairs across four outcomes: acute liver injury, acute myocardial infarction, acute kidney injury, and upper gastrointestinal bleeding. Area under the receiver operating characteristic curve (AUC) was computed per database and for a combination of all six databases using random-effects meta-analysis. We provide expected values of estimation error as well, based on negative controls., Results: As in the US experiment, high predictive accuracy was found (AUC >0.8) for some analyses. Self-controlled designs, such as self-controlled case series, IC temporal pattern discovery and self-controlled cohort, achieved higher performance than other methods, in terms of both predictive accuracy and observed bias., Conclusions: The major findings of the recent OMOP experiment were also observed in the European databases.
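The cross-database pooling described above can be sketched as a DerSimonian-Laird random-effects combination of per-database log estimates. This is a simplified stand-in (the database estimates and standard errors below are hypothetical, and the study's actual meta-analytic machinery is richer):

```python
import math

def random_effects_pool(estimates, standard_errors):
    """DerSimonian-Laird random-effects meta-analysis of per-database
    log effect estimates; returns the pooled log estimate and its SE."""
    w = [1.0 / se ** 2 for se in standard_errors]
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c)  # between-database variance
    w_re = [1.0 / (se ** 2 + tau2) for se in standard_errors]
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    return pooled, math.sqrt(1.0 / sum(w_re))

# Hypothetical log relative risks from three databases.
log_rrs = [math.log(1.8), math.log(2.2), math.log(1.4)]
ses = [0.2, 0.3, 0.25]
pooled, se = random_effects_pool(log_rrs, ses)
print(round(math.exp(pooled), 2))  # pooled relative risk across databases
```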
- Published
- 2013
- Full Text
- View/download PDF
16. Alternative outcome definitions and their effect on the performance of methods for observational outcome studies.
- Author
-
Reich CG, Ryan PB, and Schuemie MJ
- Subjects
- Acute Kidney Injury chemically induced, Area Under Curve, Bias, Chemical and Drug Induced Liver Injury diagnosis, Humans, Myocardial Infarction chemically induced, Observational Studies as Topic, Drug-Related Side Effects and Adverse Reactions diagnosis, Research Design
- Abstract
Background: A systematic risk identification system has the potential to test marketed drugs for important Health Outcomes of Interest (HOI). For each HOI, multiple definitions are used in the literature, and some have been validated for certain databases. However, little is known about the effect of different definitions on the ability of methods to estimate their association with medical products., Objectives: Alternative definitions of HOI were studied for their effect on the performance of analytical methods in observational outcome studies., Methods: A set of alternative definitions for three HOI were defined based on literature review and clinical diagnosis guidelines: acute kidney injury, acute liver injury and acute myocardial infarction. The definitions varied by the choice of diagnostic codes and the inclusion of procedure codes and lab values. They were then used to empirically study an array of analytical methods with various analytical choices in four observational healthcare databases. The methods were executed against predefined drug-HOI pairs to generate an effect estimate and standard error for each pair. These test cases included positive controls (active ingredients with evidence to suspect a positive association with the outcome) and negative controls (active ingredients with no evidence to expect an effect on the outcome). 
Three different performance metrics were used: (i) area under the receiver operating characteristic (ROC) curve (AUC) as a measure of a method's ability to distinguish between positive and negative test cases, (ii) bias, estimated from the distribution of observed effect estimates for the negative test pairs, where the true effect can be assumed to be one (no effect), and (iii) minimal detectable relative risk (MDRR) as a measure of whether there is sufficient power to generate effect estimates., Results: For the three outcomes studied, different definitions of outcomes showed comparable ability to differentiate true from false control cases (AUC) and similar estimated bias. However, broader definitions, which generate larger outcome cohorts, allowed more drugs to be studied with sufficient statistical power., Conclusions: Broader definitions are preferred since they allow studying drugs with lower prevalence than the more precise or narrow definitions, while showing comparable performance in differentiating signal from no signal as well as in effect size estimation.
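The MDRR metric in (iii) can be approximated from expected event counts with a normal approximation on the log rate-ratio scale at ~80% power and two-sided alpha = 0.05. This is a rough sketch, not the exact formula used in the study, and the event counts are invented:

```python
import math

def mdrr(expected_exposed_events, expected_comparator_events,
         z_alpha=1.96, z_power=0.84):
    """Rough minimal detectable relative risk: the smallest RR detectable
    with ~80% power at two-sided alpha=0.05, using a normal approximation
    to the standard error of the log rate ratio."""
    se_log_rr = math.sqrt(1.0 / expected_exposed_events +
                          1.0 / expected_comparator_events)
    return math.exp((z_alpha + z_power) * se_log_rr)

# A broad outcome definition yields more events, hence a lower MDRR.
print(round(mdrr(50, 500), 2))    # narrow definition: prints 1.51
print(round(mdrr(500, 5000), 2))  # broad definition, ~10x the events: prints 1.14
```

This mirrors the abstract's point: the broader definition can detect smaller effects, so more drugs clear the power threshold.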
- Published
- 2013
- Full Text
- View/download PDF
17. Empirical performance of a self-controlled cohort method: lessons for developing a risk identification and analysis system.
- Author
-
Ryan PB, Schuemie MJ, and Madigan D
- Subjects
- Area Under Curve, Bias, Humans, Probability, Cohort Studies, Drug-Related Side Effects and Adverse Reactions diagnosis, Research Design, Risk Assessment methods
- Abstract
Background: Observational healthcare data offer the potential to enable identification of risks of medical products, but appropriate methodology has not yet been defined. The self-controlled cohort method, which compares the post-exposure outcome rate with the pre-exposure rate among an exposed cohort, has been proposed as a potential approach for risk identification but its performance has not been fully assessed., Objectives: To evaluate the performance of the self-controlled cohort method as a tool for risk identification in observational healthcare data., Research Design: The method was applied to 399 drug-outcome scenarios (165 positive controls and 234 negative controls across 4 health outcomes of interest) in 5 real observational databases (4 administrative claims and 1 electronic health record) and in 6 simulated datasets with no effect and injected relative risks of 1.25, 1.5, 2, 4, and 10, respectively., Measures: Method performance was evaluated through area under ROC curve (AUC), bias, and coverage probability., Results: The self-controlled cohort design achieved strong predictive accuracy across the outcomes and databases under study, with the top-performing settings achieving AUC >0.76 in all scenarios. However, the estimates generated were observed to be highly biased, with low coverage probability., Conclusions: If the objective for a risk identification system is one of discrimination, the self-controlled cohort method shows promise as a potential tool for risk identification. However, if a system is intended to generate effect estimates to quantify the magnitude of potential risks, the self-controlled cohort method may not be suitable, and its estimates require substantial calibration before they can be properly interpreted.
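The design's core comparison, each exposed cohort against its own pre-exposure experience, fits in a few lines. The aggregate counts below are hypothetical:

```python
def self_controlled_cohort_irr(post_events, post_person_days,
                               pre_events, pre_person_days):
    """Incidence rate ratio comparing the outcome rate after exposure
    with the rate before exposure in the same exposed cohort, so the
    cohort serves as its own comparator (no external control group)."""
    return (post_events / post_person_days) / (pre_events / pre_person_days)

# Hypothetical aggregate counts for an exposed cohort.
print(round(self_controlled_cohort_irr(30, 100_000, 40, 200_000), 2))  # prints 1.5
```

Because exposed people serve as their own controls, time-invariant confounders cancel, but time-varying factors (e.g. the reason the drug was started) can still bias the ratio, consistent with the bias the abstract reports.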
- Published
- 2013
- Full Text
- View/download PDF
18. Variation in choice of study design: findings from the Epidemiology Design Decision Inventory and Evaluation (EDDIE) survey.
- Author
-
Stang PE, Ryan PB, Overhage JM, Schuemie MJ, Hartzema AG, and Welebob E
- Subjects
- Data Collection, Databases, Factual, Humans, Drug-Related Side Effects and Adverse Reactions diagnosis, Epidemiologic Studies, Research Design
- Abstract
Background: Researchers using observational data to understand drug effects must make a number of analytic design choices that suit the characteristics of the data and the subject of the study. Review of the published literature suggests that there is a lack of consistency even when addressing the same research question in the same database., Objective: To characterize the degree of similarity or difference in the method and analysis choices made by observational database research experts when presented with research study scenarios., Research Design: On-line survey using research scenarios on drug-effect studies to capture method selection and analysis choices that follow dependency branching based on responses to key questions., Subjects: Voluntary participants experienced in epidemiological study design solicited for participation through registration on the Observational Medical Outcomes Partnership website, membership in particular professional organizations, or links in relevant newsletters., Measures: Description (proportion) of respondents selecting particular methods and making specific analysis choices based on individual drug-outcome scenario pairs. The number of questions/decisions differed based on stem questions of study design, time-at-risk, outcome definition, and comparator., Results: There is little consistency across scenarios, by drug or by outcome of interest, in the decisions made for design and analyses in scenarios using large healthcare databases. The most consistent choice was the cohort study design, but variability in the other critical decisions was common., Conclusions: There is great variation among epidemiologists in the design and analytical choices that they make when implementing analyses in observational healthcare databases. These findings confirm that it will be important to generate empirical evidence to inform these decisions and to promote a better understanding of the impact of standardization on research implementation.
- Published
- 2013
- Full Text
- View/download PDF
19. Empirical performance of LGPS and LEOPARD: lessons for developing a risk identification and analysis system.
- Author
-
Schuemie MJ, Madigan D, and Ryan PB
- Subjects
- Area Under Curve, Bias, Databases, Factual, Humans, Probability, Retrospective Studies, Drug-Related Side Effects and Adverse Reactions diagnosis, Research Design, Risk Assessment methods
- Abstract
Background: The availability of large-scale observational healthcare data allows for the active monitoring of safety of drugs, but research is needed to determine which statistical methods are best suited for this task. Recently, the Longitudinal Gamma Poisson Shrinker (LGPS) and Longitudinal Evaluation of Observational Profiles of Adverse events Related to Drugs (LEOPARD) methods were developed specifically for this task. LGPS applies Bayesian shrinkage to an estimated incidence rate ratio, and LEOPARD aims to detect and discard associations due to protopathic bias. The operating characteristics of these methods still need to be determined., Objective: Establish the operating characteristics of LGPS and LEOPARD for large-scale observational analysis in drug safety., Research Design: We empirically evaluated LGPS and LEOPARD in five real observational healthcare databases and six simulated datasets. We retrospectively studied the predictive accuracy of the methods when applied to a collection of 165 positive control and 234 negative control drug-outcome pairs across four outcomes: acute liver injury, acute myocardial infarction, acute kidney injury, and upper gastrointestinal bleeding., Results: In contrast to earlier findings, we found that LGPS and LEOPARD provide weak discrimination between positive and negative controls, although the use of LEOPARD does lead to higher performance in this respect. Furthermore, the methods produce biased estimates and confidence intervals that have poor coverage properties., Conclusions: For the four outcomes we examined, LGPS and LEOPARD may not be the methods of choice for risk identification.
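The Bayesian shrinkage LGPS applies can be sketched with a conjugate gamma prior on the observed/expected incidence rate ratio. The prior parameters below are invented for illustration; the actual LGPS fits its prior to the data:

```python
def shrunk_rate_ratio(observed_events, expected_events,
                      prior_shape=0.5, prior_rate=0.5):
    """Posterior-mean incidence rate ratio under a Gamma(shape, rate)
    prior on the ratio: small counts are pulled toward the prior mean
    of 1, while large counts dominate the prior."""
    return (prior_shape + observed_events) / (prior_rate + expected_events)

print(round(shrunk_rate_ratio(3, 1.0), 2))      # sparse data: shrunk toward 1, prints 2.33
print(round(shrunk_rate_ratio(300, 100.0), 2))  # ample data: close to 3.0, prints 2.99
```

Shrinkage stabilizes estimates for rarely prescribed drugs at the cost of bias toward the null, one reason calibration matters when such estimates are interpreted.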
- Published
- 2013
- Full Text
- View/download PDF
20. Empirical performance of the self-controlled case series design: lessons for developing a risk identification and analysis system.
- Author
-
Suchard MA, Zorych I, Simpson SE, Schuemie MJ, Ryan PB, and Madigan D
- Subjects
- Area Under Curve, Bias, Humans, Probability, Drug-Related Side Effects and Adverse Reactions diagnosis, Research Design, Risk Assessment methods
- Abstract
Background: The self-controlled case series (SCCS) offers potential as a statistical method for risk identification of medical products in large-scale observational healthcare data. However, analytic design choices remain in encoding the longitudinal health records into the SCCS framework, and its risk identification performance across real-world databases is unknown., Objectives: To evaluate the performance of SCCS and its design choices as a tool for risk identification in observational healthcare data., Research Design: We examined the risk identification performance of SCCS across five design choices using 399 drug-health outcome pairs in five real observational databases (four administrative claims and one electronic health record). In these databases, the pairs involve 165 positive controls and 234 negative controls. We also consider several synthetic databases with known relative risks between drug-outcome pairs., Measures: We evaluate risk identification performance by estimating the area under the receiver operating characteristic curve (AUC), and bias and coverage probability in the synthetic examples., Results: The SCCS achieves strong predictive performance. Twelve of the twenty health outcome-database scenarios return AUCs >0.75 across all drugs. Including all adverse events instead of just the first per patient and applying a multivariate adjustment for concomitant drug use are the most important design choices. However, the SCCS as applied here returns relative risk point-estimates biased towards the null value of 1 with low coverage probability., Conclusions: The SCCS, recently extended to apply a multivariate adjustment for concomitant drug use, offers promise as a statistical tool for risk identification in large-scale observational healthcare databases. Poor estimator calibration dampens enthusiasm, but ongoing work should correct this shortcoming.
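Under a single multiplicative risk window, the SCCS relative incidence (which conditions on each case's total event count, using only people who had events) has a simple closed form. The case records below are hypothetical, and the paper's multivariate extension is far richer than this one-window sketch:

```python
def sccs_relative_incidence(cases):
    """Crude SCCS estimate for one risk window. Each tuple holds
    (events in risk window, risk days, events in baseline, baseline days)
    for one case; aggregating and taking the ratio of event rates gives
    the maximum-likelihood relative incidence for this simple model."""
    n_risk = sum(c[0] for c in cases)
    t_risk = sum(c[1] for c in cases)
    n_base = sum(c[2] for c in cases)
    t_base = sum(c[3] for c in cases)
    return (n_risk / t_risk) / (n_base / t_base)

cases = [(1, 30, 1, 335),   # one event in the 30-day risk window
         (2, 30, 0, 335),   # two events shortly after exposure
         (0, 30, 1, 335)]   # events in baseline time only
print(round(sccs_relative_incidence(cases), 2))  # prints 16.75
```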
- Published
- 2013
- Full Text
- View/download PDF
21. Defining a reference set to support methodological research in drug safety.
- Author
-
Ryan PB, Schuemie MJ, Welebob E, Duke J, Valentine S, and Hartzema AG
- Subjects
- Acute Kidney Injury chemically induced, Chemical and Drug Induced Liver Injury diagnosis, Gastrointestinal Hemorrhage chemically induced, Humans, Myocardial Infarction chemically induced, Drug-Related Side Effects and Adverse Reactions diagnosis, Research Design standards
- Abstract
Background: Methodological research to evaluate the performance of methods requires a benchmark to serve as a referent comparison. In drug safety, the performance of analyses of spontaneous adverse event reporting databases and observational healthcare data, such as administrative claims and electronic health records, has been limited by the lack of such standards., Objectives: To establish a reference set of test cases that contain both positive and negative controls, which can serve the basis for methodological research in evaluating methods performance in identifying drug safety issues., Research Design: Systematic literature review and natural language processing of structured product labeling was performed to identify evidence to support the classification of drugs as either positive controls or negative controls for four outcomes: acute liver injury, acute kidney injury, acute myocardial infarction, and upper gastrointestinal bleeding., Results: Three-hundred and ninety-nine test cases comprised of 165 positive controls and 234 negative controls were identified across the four outcomes. The majority of positive controls for acute kidney injury and upper gastrointestinal bleeding were supported by randomized clinical trial evidence, while the majority of positive controls for acute liver injury and acute myocardial infarction were only supported based on published case reports. Literature estimates for the positive controls shows substantial variability that limits the ability to establish a reference set with known effect sizes., Conclusions: A reference set of test cases can be established to facilitate methodological research in drug safety. Creating a sufficient sample of drug-outcome pairs with binary classification of having no effect (negative controls) or having an increased effect (positive controls) is possible and can enable estimation of predictive accuracy through discrimination. 
Since the magnitude of the positive effects cannot be reliably obtained and the quality of evidence may vary across outcomes, assumptions are required to use the test cases in real data for purposes of measuring bias, mean squared error, or coverage probability.
- Published
- 2013
- Full Text
- View/download PDF
22. Evaluating performance of risk identification methods through a large-scale simulation of observational data.
- Author
-
Ryan PB and Schuemie MJ
- Subjects
- Adult, Computer Simulation, Databases, Factual, Female, Humans, Male, Observational Studies as Topic, Drug-Related Side Effects and Adverse Reactions diagnosis, Research Design, Risk Assessment methods
- Abstract
Background: There has been only limited evaluation of statistical methods for identifying safety risks of drug exposure in observational healthcare data. Simulations can support empirical evaluation, but have not been shown to adequately model the real-world phenomena that challenge observational analyses., Objectives: To design and evaluate a probabilistic framework (OSIM2) for generating simulated observational healthcare data, and to use this data for evaluating the performance of methods in identifying associations between drug exposure and health outcomes of interest., Research Design: Seven observational designs, including case-control, cohort, self-controlled case series, and self-controlled cohort design, were applied to 399 drug-outcome scenarios in 6 simulated datasets with no effect and injected relative risks of 1.25, 1.5, 2, 4, and 10, respectively., Subjects: Longitudinal data for 10 million simulated patients were generated using a model derived from an administrative claims database, with associated demographics, periods of drug exposure derived from pharmacy dispensings, and medical conditions derived from diagnoses on medical claims., Measures: Simulation validation was performed through descriptive comparison with real source data. Method performance was evaluated using area under the ROC curve (AUC), bias, and mean squared error., Results: OSIM2 replicates prevalence and types of confounding observed in real claims data. When simulated data are injected with relative risks (RR) ≥ 2, all designs have good predictive accuracy (AUC > 0.90), but when RR < 2, no method achieves perfect predictive accuracy. Each method exhibits a different bias profile, which changes with the effect size., Conclusions: OSIM2 can support methodological research. Results from simulation suggest that method operating characteristics are far from nominal properties.
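The injection step can be sketched as: simulate outcomes with a known multiplier on the baseline risk, then check whether a crude analysis recovers it. This is a toy stand-in for OSIM2's far richer patient-level model; all numbers are invented:

```python
import random

def simulate_risk_ratio(true_rr, n_per_group=200_000,
                        baseline_risk=0.001, seed=42):
    """Inject a known relative risk into simulated binary outcomes and
    recover it with the crude risk ratio: exposed subjects experience
    the outcome at baseline_risk * true_rr, comparators at baseline_risk."""
    rng = random.Random(seed)
    exposed = sum(rng.random() < baseline_risk * true_rr
                  for _ in range(n_per_group))
    comparator = sum(rng.random() < baseline_risk
                     for _ in range(n_per_group))
    return exposed / comparator  # equal group sizes cancel out

for rr in (1.0, 2.0, 4.0):
    print(rr, round(simulate_risk_ratio(rr), 2))
```

Comparing the recovered estimates against the injected values across methods is exactly how bias profiles like those described above are characterized.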
- Published
- 2013
- Full Text
- View/download PDF
23. Empirical performance of the case-control method: lessons for developing a risk identification and analysis system.
- Author
-
Madigan D, Schuemie MJ, and Ryan PB
- Subjects
- Acute Kidney Injury chemically induced, Area Under Curve, Chemical and Drug Induced Liver Injury diagnosis, Gastrointestinal Hemorrhage chemically induced, Humans, Myocardial Infarction chemically induced, Probability, Retrospective Studies, Case-Control Studies, Drug-Related Side Effects and Adverse Reactions diagnosis, Research Design, Risk Assessment methods
- Abstract
Background: Considerable attention now focuses on the use of large-scale observational healthcare data for understanding drug safety. In this context, analysts utilize a variety of statistical and epidemiological approaches such as case-control, cohort, and self-controlled methods. The operating characteristics of these methods are poorly understood., Objective: Establish the operating characteristics of the case-control method for large-scale observational analysis in drug safety., Research Design: We empirically evaluated the case-control approach in 5 real observational healthcare databases and 6 simulated datasets. We retrospectively studied the predictive accuracy of the method when applied to a collection of 165 positive controls and 234 negative controls across 4 outcomes: acute liver injury, acute myocardial infarction, acute kidney injury, and upper gastrointestinal bleeding., Results: In our experiment, the case-control method provided weak discrimination between positive and negative controls. Furthermore, the method yielded positively biased estimates and confidence intervals that had poor coverage properties., Conclusions: For the four outcomes we examined, the case-control method may not be the method of choice for estimating potentially harmful effects of drugs.
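The estimate being evaluated here is the familiar 2x2-table odds ratio, comparing exposure odds in cases against sampled controls. The table entries below are hypothetical:

```python
def odds_ratio(exposed_cases, unexposed_cases,
               exposed_controls, unexposed_controls):
    """Case-control odds ratio: the odds of prior drug exposure among
    cases divided by the odds among controls, i.e. the cross-product
    ratio of the 2x2 table."""
    return ((exposed_cases * unexposed_controls) /
            (unexposed_cases * exposed_controls))

# Hypothetical 2x2 table: exposure is more common among cases.
print(odds_ratio(40, 160, 25, 175))  # prints 1.75
```

Because control sampling and confounding both act on this ratio, its raw value can sit well above 1 for drugs with no causal effect, which is the positive bias the abstract describes.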
- Published
- 2013
- Full Text
- View/download PDF
24. Desideratum for evidence based epidemiology.
- Author
-
Overhage JM, Ryan PB, Schuemie MJ, and Stang PE
- Subjects
- Area Under Curve, Databases, Factual, Humans, Drug-Related Side Effects and Adverse Reactions diagnosis, Epidemiologic Studies, Research Design
- Abstract
Background: There is great variation in choices of method and specific analytical details in epidemiological studies, resulting in widely varying results even when studying the same drug and outcome in the same database. Not only does this variation undermine the credibility of the research but it limits our ability to improve the methods., Methods: In order to evaluate the performance of methods and analysis choices we used standard references and a literature review to identify 164 positive controls (drug-outcome pairs believed to represent true adverse drug reactions), and 234 negative controls (drug-outcome pairs for which we have confidence there is no direct causal relationship). We tested 3,748 unique analyses (methods in combination with specific analysis choices) that represent the full range of approaches to adjusting for confounding in five large observational datasets on these controls. We also evaluated the impact of increasingly specific outcome definitions, and performed a replication study in six additional datasets. We characterized the performance of each method using the area under the receiver operating characteristic curve (AUC), bias, and coverage probability. In addition, we developed simulated datasets that closely matched the characteristics of the observational datasets into which we inserted data consistent with known drug-outcome relationships in order to measure the accuracy of estimates generated by the analyses., Discussion: We expect the results of this systematic, empirical evaluation of the performance of these analyses across a moderate range of outcomes and databases to provide important insights into the methods used in epidemiological studies and to increase the consistency with which methods are applied, thereby increasing the confidence in results and our ability to systematically improve our approaches.
- Published
- 2013
- Full Text
- View/download PDF
25. A comparison of the empirical performance of methods for a risk identification system.
- Author
-
Ryan PB, Stang PE, Overhage JM, Suchard MA, Hartzema AG, DuMouchel W, Reich CG, Schuemie MJ, and Madigan D
- Subjects
- Area Under Curve, Databases, Factual, Humans, Drug-Related Side Effects and Adverse Reactions diagnosis, Research Design, Risk Assessment methods
- Abstract
Background: Observational healthcare data offer the potential to enable identification of risks of medical products, and the medical literature is replete with analyses that aim to accomplish this objective. A number of established analytic methods dominate the literature but their operating characteristics in real-world settings remain unknown., Objectives: To compare the performance of seven methods (new user cohort, case-control, self-controlled case series, self-controlled cohort, disproportionality analysis, temporal pattern discovery, and longitudinal gamma Poisson shrinker) as tools for risk identification in observational healthcare data., Research Design: The experiment applied each method to 399 drug-outcome scenarios (165 positive controls and 234 negative controls across 4 health outcomes of interest) in 5 real observational databases (4 administrative claims and 1 electronic health record)., Measures: Method performance was evaluated through area under the receiver operating characteristic curve (AUC), bias, mean squared error, and confidence interval coverage probability., Results: Multiple methods offer strong predictive accuracy, with AUC > 0.70 achievable for all outcomes and databases with more than one analytical approach. Self-controlled methods (self-controlled case series, temporal pattern discovery, self-controlled cohort) had higher predictive accuracy than cohort and case-control methods across all databases and outcomes. Methods differed in the expected value and variance of the error distribution. All methods had lower coverage probability than the expected nominal properties., Conclusions: Observational healthcare data can inform risk identification of medical product effects on acute liver injury, acute myocardial infarction, acute renal failure and gastrointestinal bleeding. However, effect estimates from all methods require calibration to address inconsistency in method operating characteristics. 
Further empirical evaluation is required to gauge the generalizability of these findings to other databases and outcomes.
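The calibration this conclusion calls for was later formalized by the same group as empirical calibration: fit an empirical null distribution to the negative-control estimates, then judge new estimates against it rather than against a theoretical null of zero. A simplified sketch (it ignores per-estimate standard errors, which the full method incorporates, and the numbers are invented):

```python
import math
import statistics

def calibrated_p(log_estimate, negative_control_log_estimates):
    """Fit a normal empirical null to log effect estimates from negative
    controls, then return a two-sided p-value for a new log estimate
    under that empirical null instead of under the theoretical null of 0."""
    mu = statistics.mean(negative_control_log_estimates)
    sd = statistics.stdev(negative_control_log_estimates)
    z = (log_estimate - mu) / sd
    return math.erfc(abs(z) / math.sqrt(2.0))

# Hypothetical negative-control log RRs, slightly biased upward.
null_estimates = [0.10, -0.05, 0.20, 0.15, 0.00, 0.25, 0.10, 0.05]
print(calibrated_p(math.log(2.0), null_estimates) < 0.05)   # clearly beyond the null
print(calibrated_p(0.12, null_estimates) < 0.05)            # consistent with the null
```

The systematic error estimated from negative controls absorbs the residual confounding that makes uncalibrated confidence intervals undercover.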
- Published
- 2013
- Full Text
- View/download PDF
26. A reference standard for evaluation of methods for drug safety signal detection using electronic healthcare record databases.
- Author
-
Coloma PM, Avillach P, Salvo F, Schuemie MJ, Ferrajolo C, Pariente A, Fourrier-Réglat A, Molokhia M, Patadia V, van der Lei J, Sturkenboom M, and Trifirò G
- Subjects
- Adverse Drug Reaction Reporting Systems organization & administration, Adverse Drug Reaction Reporting Systems statistics & numerical data, Databases, Factual statistics & numerical data, Humans, Pharmacovigilance, Reference Standards, Drug-Related Side Effects and Adverse Reactions, Electronic Health Records statistics & numerical data, Product Surveillance, Postmarketing methods
- Abstract
Background: The growing interest in using electronic healthcare record (EHR) databases for drug safety surveillance has spurred development of new methodologies for signal detection. Although several drugs have been withdrawn postmarketing by regulatory authorities after scientific evaluation of harms and benefits, there is no definitive list of confirmed signals (i.e. list of all known adverse reactions and which drugs can cause them). As there is no true gold standard, prospective evaluation of signal detection methods remains a challenge., Objective: Within the context of methods development and evaluation in the EU-ADR Project (Exploring and Understanding Adverse Drug Reactions by integrative mining of clinical records and biomedical knowledge), we propose a surrogate reference standard of drug-adverse event associations based on existing scientific literature and expert opinion., Methods: The reference standard was constructed for ten top-ranked events judged as important in pharmacovigilance. A stepwise approach was employed to identify which, among a list of drug-event associations, are well recognized (known positive associations) or highly unlikely ('negative controls') based on MEDLINE-indexed publications, drug product labels, spontaneous reports made to the WHO's pharmacovigilance database, and expert opinion. Only drugs with adequate exposure in the EU-ADR database network (comprising ≈60 million person-years of healthcare data) to allow detection of an association were considered. Manual verification of positive associations and negative controls was independently performed by two experts proficient in clinical medicine, pharmacoepidemiology and pharmacovigilance. 
A third expert adjudicated equivocal cases and arbitrated any disagreement between evaluators., Results: Overall, 94 drug-event associations comprised the reference standard, which included 44 positive associations and 50 negative controls for the ten events of interest: bullous eruptions; acute renal failure; anaphylactic shock; acute myocardial infarction; rhabdomyolysis; aplastic anaemia/pancytopenia; neutropenia/agranulocytosis; cardiac valve fibrosis; acute liver injury; and upper gastrointestinal bleeding. For cardiac valve fibrosis, there was no drug with adequate exposure in the database network that satisfied the criteria for a positive association., Conclusion: A strategy for the construction of a reference standard to evaluate signal detection methods that use EHR has been proposed. The resulting reference standard is by no means definitive, however, and should be seen as dynamic. As knowledge on drug safety evolves over time and new issues in drug safety arise, this reference standard can be re-evaluated.
- Published
- 2013
- Full Text
- View/download PDF