5 results for "Joie Ensor"
Search Results
2. Minimum sample size for external validation of a clinical prediction model with a continuous outcome
- Authors
- Kym I. E. Snell, Gary S. Collins, Joie Ensor, Richard D. Riley, Mohammed T. Hudda, and Lucinda Archer
- Subjects
- Statistics and probability, epidemiology, statistical models, calibration, linear prediction, variance, prognosis, confidence intervals, sample size determination, humans, child
- Abstract
In prediction model research, external validation is needed to examine an existing model's performance using data independent of the data used for model development. Current external validation studies often suffer from small sample sizes and consequently imprecise predictive performance estimates. To address this, we propose how to determine the minimum sample size needed for a new external validation study of a prediction model for a binary outcome. Our calculations aim to precisely estimate calibration (Observed/Expected and calibration slope), discrimination (C-statistic), and clinical utility (net benefit). For each measure, we propose closed-form and iterative solutions for calculating the minimum sample size required. These require specifying: (i) target SEs (confidence interval widths) for each estimate of interest, (ii) the anticipated outcome event proportion in the validation population, (iii) the prediction model's anticipated (mis)calibration and variance of linear predictor values in the validation population, and (iv) potential risk thresholds for clinical decision-making. The calculations can also be used to inform whether the sample size of an existing (already collected) dataset is adequate for external validation. We illustrate our proposal for external validation of a prediction model for mechanical heart valve failure with an expected outcome event proportion of 0.018. Calculations suggest at least 9835 participants (177 events) are required to precisely estimate the calibration and discrimination measures, with this number driven by the calibration slope criterion, which we anticipate will often be the case. Also, 6443 participants (116 events) are required to precisely estimate net benefit at a risk threshold of 8%. Software code is provided.
- Published
- 2020
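To make the style of calculation concrete, below is a minimal sketch of one criterion only, the O/E (observed/expected) target, assuming the standard large-sample approximation SE(ln(O/E)) ≈ √((1 − φ)/(nφ)) for anticipated event proportion φ. The function name and target SE are illustrative, not the paper's own code; the calibration slope, C-statistic, and net benefit criteria each need their own closed-form or iterative calculation, with the final sample size taken as the largest across criteria, as the abstract describes.

```python
import math

def n_for_oe_precision(phi: float, se_target: float) -> int:
    """Smallest n giving SE(ln(O/E)) <= se_target, using the
    approximation SE(ln(O/E)) ~= sqrt((1 - phi) / (n * phi))."""
    return math.ceil((1 - phi) / (phi * se_target ** 2))

# Event proportion 0.018 as in the heart-valve example; an assumed
# target SE of 0.10 gives a 95% CI for O/E of roughly 0.82 to 1.22.
print(n_for_oe_precision(0.018, 0.10))  # -> 5456
```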
3. Guidance for deriving and presenting percentage study weights in meta-analysis of test accuracy studies
- Authors
- Daniëlle A. W. M. van der Windt, Kym I. E. Snell, Richard D. Riley, Joie Ensor, and Danielle L. Burke
- Subjects
- Sensitivity and specificity, meta-analysis, bivariate analysis, Fisher information, receiver operating characteristic (ROC) curves, standard errors, sample size determination, reproducibility of results, fever, thermometers, Alzheimer disease, regression analysis, multivariate analysis, outliers, algorithms, software
- Abstract
Percentage study weights in meta-analysis reveal the contribution of each study toward the overall summary results and are especially important when some studies are considered outliers or at high risk of bias. In meta-analyses of test accuracy reviews, such as a bivariate meta-analysis of sensitivity and specificity, the percentage study weights are not currently derived. Rather, the focus is on representing the precision of study estimates on receiver operating characteristic plots by scaling the points relative to the study sample size or to their standard error. In this article, we recommend that researchers should also provide the percentage study weights directly, and we propose a method to derive them based on a decomposition of the Fisher information matrix. This method also generalises to a bivariate meta-regression so that percentage study weights can also be derived for estimates of study-level modifiers of test accuracy. Application is made to two meta-analyses examining test accuracy: one of ear temperature for diagnosis of fever in children and the other of positron emission tomography for diagnosis of Alzheimer's disease. These highlight that the percentage study weights provide important information that is otherwise hidden if the presentation only focuses on precision based on sample size or standard errors. Software code is provided for Stata, and we suggest that our proposed percentage weights should be routinely added on forest and receiver operating characteristic plots for sensitivity and specificity, to provide transparency of the contribution of each study toward the results. This has implications for the PRISMA-diagnostic test accuracy guidelines that are currently being produced.
- Published
- 2018
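The paper's weights come from decomposing the Fisher information matrix of the bivariate model; as a minimal sketch of the underlying idea only, the familiar univariate random-effects case reduces to normalised inverse-variance weights. Function and variable names here are assumptions, not the Stata code the abstract refers to.

```python
import numpy as np

def percent_weights(se: np.ndarray, tau2: float = 0.0) -> np.ndarray:
    """Percentage study weights in a univariate random-effects
    meta-analysis: each study's share of the total information,
    i.e. its normalised inverse variance."""
    info = 1.0 / (se ** 2 + tau2)  # per-study Fisher information
    return 100.0 * info / info.sum()

# Illustrative standard errors for five studies:
print(percent_weights(np.array([0.1, 0.2, 0.2, 0.4, 0.5])))
```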
4. One-stage individual participant data meta-analysis models: estimation of treatment-covariate interactions must avoid ecological bias by separating out within-trial and across-trial information
- Authors
- Danielle L. Burke, Hairui Hua, Joie Ensor, Catrin Tudur Smith, Michael J. Crowther, and Richard D. Riley
- Subjects
- Statistics and probability, epidemiology, individual participant data, one-stage models, meta-analysis, covariates, estimation, statistical significance, ecological bias, medical statistics
- Abstract
Stratified medicine utilizes individual-level covariates that are associated with a differential treatment effect, also known as treatment-covariate interactions. When multiple trials are available, meta-analysis is used to help detect true treatment-covariate interactions by combining their data. Meta-regression of trial-level information is prone to low power and ecological bias, and therefore, individual participant data (IPD) meta-analyses are preferable to examine interactions utilizing individual-level information. However, one-stage IPD models are often wrongly specified, such that interactions are based on amalgamating within- and across-trial information. We compare, through simulations and an applied example, fixed-effect and random-effects models for a one-stage IPD meta-analysis of time-to-event data where the goal is to estimate a treatment-covariate interaction. We show that it is crucial to centre patient-level covariates by their mean value in each trial, in order to separate out within-trial and across-trial information. Otherwise, bias and coverage of interaction estimates may be adversely affected, leading to potentially erroneous conclusions driven by ecological bias. We revisit an IPD meta-analysis of five epilepsy trials and examine age as a treatment effect modifier. The interaction is -0.011 (95% CI: -0.019 to -0.003; p = 0.004), and thus highly significant, when amalgamating within-trial and across-trial information. However, when separating within-trial from across-trial information, the interaction is -0.007 (95% CI: -0.019 to 0.005; p = 0.22), and thus its magnitude and statistical significance are greatly reduced. We recommend that meta-analysts should only use within-trial information to examine individual predictors of treatment effect and that one-stage IPD models should separate within-trial from across-trial information to avoid ecological bias.
- Published
- 2016
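A minimal sketch of the recommended centring step, assuming patient-level data with illustrative column names trial and age (this is not the authors' own code):

```python
import pandas as pd

def centre_within_trial(ipd: pd.DataFrame, trial: str = "trial",
                        covariate: str = "age") -> pd.DataFrame:
    """Split a patient-level covariate into its trial-specific mean
    (across-trial information) and the deviation from that mean
    (within-trial information only)."""
    out = ipd.copy()
    out[f"{covariate}_mean"] = out.groupby(trial)[covariate].transform("mean")
    out[f"{covariate}_centred"] = out[covariate] - out[f"{covariate}_mean"]
    return out

# Tiny illustrative dataset: two trials, two patients each.
ipd = pd.DataFrame({"trial": [1, 1, 2, 2], "age": [40, 60, 30, 50]})
print(centre_within_trial(ipd))
```

The one-stage model would then include a treatment-by-age_centred interaction to capture within-trial information, with a treatment-by-age_mean term entered separately so that across-trial information cannot leak into the interaction estimate, which is the separation the abstract recommends.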
5. Meta-analysis of test accuracy studies using imputation for partial reporting of multiple thresholds
- Authors
- Richard D. Riley, Joie Ensor, Jonathan J. Deeks, and Emma C. Martin
- Subjects
- Diagnostic test accuracy, meta-analysis, multiple thresholds, imputation, missing data, sensitivity and specificity, receiver operating characteristic (ROC) curves, publication bias, prevalence, bias, standard errors, statistical models, computer simulation, linear models, algorithms, software
- Abstract
Introduction: For tests reporting continuous results, primary studies usually provide test performance at multiple, but often different, thresholds. This creates missing data when performing a meta-analysis at each threshold. A standard meta-analysis (no imputation, NI) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC).
Methods: The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for two known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation.
Results: Both imputation methods outperform the NI method in simulations. There was generally little difference between the SI and MIDC methods, but the latter was noticeably better at estimating the between-study variances and generally gave better coverage, due to slightly larger standard errors of pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate that the imputation methods rely on an equal threshold spacing assumption. A real example is presented.
Conclusions: The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta-analysis of test accuracy studies.
- Published
- 2018
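As a sketch of just the imputation step (not the full MIDC procedure, which also re-pools across imputations with Rubin's rules as described above), one could enumerate the admissible discrete combinations between two known bounding thresholds and draw one at random. The names and the non-decreasing orientation of the counts are my assumptions:

```python
import itertools
import random

def impute_missing_counts(lower, upper, k, rng=random):
    """Draw uniformly from all non-decreasing integer sequences of
    length k whose values lie between the counts at the two known
    bounding thresholds (enumeration is feasible for small gaps)."""
    combos = list(itertools.combinations_with_replacement(
        range(lower, upper + 1), k))
    return rng.choice(combos)

# Illustrative: impute counts at two missing thresholds lying between
# thresholds where 12 and 18 test-positives were observed.
print(impute_missing_counts(12, 18, 2))
```

Repeating the draw many times and combining the per-threshold pooled results with Rubin's rules would give the multiple-imputation estimates the abstract describes.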