42 results for "Grayling MJ"
Search Results
2. Blinded and unblinded sample size re-estimation procedures for stepped-wedge cluster randomized trials
- Author
-
Grayling, Michael [0000-0002-0680-6668], Mander, Adrian [0000-0002-0742-9040], and Wason, James [0000-0002-4691-126X]
- Subjects
Methodology (stat.ME), blinded, stepped-wedge, cluster randomized trial, sample size re-estimation, internal pilot, Statistics - Methodology
- Abstract
The ability to accurately estimate the sample size required by a stepped-wedge (SW) cluster randomized trial (CRT) routinely depends upon the specification of several nuisance parameters. If these parameters are mis-specified, the trial could be over-powered, leading to increased cost, or under-powered, enhancing the likelihood of a false negative. We address this issue here for cross-sectional SW-CRTs, analyzed with a particular linear mixed model, by proposing methods for blinded and unblinded sample size re-estimation (SSRE). Blinded estimators for the variance parameters of a SW-CRT analyzed using the Hussey and Hughes model are derived. Then, procedures for blinded and unblinded SSRE after any time period in a SW-CRT are detailed. The performance of these procedures is then examined and contrasted using two example trial design scenarios. We find that if the two key variance parameters were under-specified by 50%, the SSRE procedures were able to increase power over the conventional SW-CRT design by up to 29%, resulting in an empirical power above the desired level. Moreover, the performance of the re-estimation procedures was relatively insensitive to the timing of the interim assessment. Thus, the considered SSRE procedures can bring substantial gains in power when the underlying variance parameters are mis-specified. Though there are practical issues to consider, the procedures' performance means researchers should consider incorporating SSRE into future SW-CRTs.
- Published
- 2018
- Full Text
- View/download PDF
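The Hussey and Hughes model named in the abstract above admits a closed-form variance for the intervention effect estimator, from which power follows directly. A minimal sketch of that calculation, assuming the commonly cited closed form; the design matrix, variance values, and effect size below are invented for illustration and this is not the authors' code:

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()

def hh_power(X, sigma2, tau2, theta, alpha=0.05):
    """Power of a cross-sectional SW-CRT under the Hussey & Hughes (2007)
    linear mixed model, using their closed-form variance.

    X      : X[i][t] = 1 if cluster i is in the intervention condition in
             period t, else 0
    sigma2 : residual variance of a cluster-period mean
    tau2   : between-cluster variance
    theta  : assumed intervention effect
    """
    I, T = len(X), len(X[0])
    U = sum(sum(row) for row in X)
    W = sum(sum(X[i][t] for i in range(I)) ** 2 for t in range(T))
    V = sum(sum(row) ** 2 for row in X)
    var = (I * sigma2 * (sigma2 + T * tau2)) / (
        (I * U - W) * sigma2 + (U**2 + I * T * U - T * W - I * V) * tau2
    )
    z = N.inv_cdf(1 - alpha / 2)
    return N.cdf(abs(theta) / sqrt(var) - z)

# Standard 6-cluster, 4-period stepped wedge: two clusters cross per step
X = [[0, 1, 1, 1], [0, 1, 1, 1], [0, 0, 1, 1],
     [0, 0, 1, 1], [0, 0, 0, 1], [0, 0, 0, 1]]
pwr = hh_power(X, sigma2=0.25, tau2=0.1, theta=0.5)
print(round(pwr, 3))
```

Under-specifying `sigma2` or `tau2` at the design stage makes the apparent power too optimistic, which is exactly the mis-specification problem the re-estimation procedures target.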
3. Stepped wedge cluster randomized controlled trial designs: a review of reporting quality and design features
- Author
-
Grayling, MJ, Wason, JMS, and Mander, AP
- Subjects
cluster randomized controlled trial, review, reporting quality, stepped wedge
- Abstract
Background: The stepped wedge (SW) cluster randomized controlled trial (CRCT) design is being used with increasing frequency. However, there is limited published research on the quality of reporting of SW-CRCTs. We address this issue by conducting a literature review., Methods: Medline, Ovid, Web of Knowledge, the Cochrane Library, PsycINFO, the ISRCTN registry, and ClinicalTrials.gov were searched to identify investigations employing the SW-CRCT design up to February 2015. For each included completed study, information was extracted on a selection of criteria, based on the CONSORT extension to CRCTs, to assess the quality of reporting., Results: A total of 123 studies were included in our review, of which 39 were completed trial reports. The standard of reporting of SW-CRCTs varied in quality. The percentage of trials reporting each criterion ranged as low as 15.4%, with a median of 66.7%., Conclusions: There is much room for improvement in the quality of reporting of SW-CRCTs. This is consistent with recent findings for CRCTs. A CONSORT extension for SW-CRCTs is warranted to standardize the reporting of SW-CRCTs.
4. A hybrid approach to sample size re-estimation in cluster randomized trials with continuous outcomes.
- Author
-
Sarkodie SK, Wason JM, and Grayling MJ
- Subjects
- Sample Size, Humans, Cluster Analysis, Models, Statistical, Computer Simulation, Randomized Controlled Trials as Topic methods, Bayes Theorem
- Abstract
This study presents a hybrid (Bayesian-frequentist) approach to sample size re-estimation (SSRE) for cluster randomised trials with continuous outcome data, allowing for uncertainty in the intra-cluster correlation (ICC). In the hybrid framework, pre-trial knowledge about the ICC is captured by placing a Truncated Normal prior on it, which is then updated at an interim analysis using the study data, and used in expected power control. On average, both the hybrid and frequentist approaches mitigate against the implications of misspecifying the ICC at the trial's design stage. In addition, both frameworks lead to SSRE designs with approximate control of the type I error-rate at the desired level. It is clearly demonstrated how the hybrid approach is able to reduce the high variability in the re-estimated sample size observed within the frequentist framework, based on the informativeness of the prior. However, misspecification of a highly informative prior can cause significant power loss. In conclusion, a hybrid approach could offer advantages to cluster randomised trials using SSRE. Specifically, when there is available data or expert opinion to help guide the choice of prior for the ICC, the hybrid approach can reduce the variance of the re-estimated required sample size compared to a frequentist approach. As SSRE is unlikely to be employed when there are substantial amounts of such data available (i.e., when a constructed prior is highly informative), the greatest utility of a hybrid approach to SSRE likely lies when there is low-quality evidence available to guide the choice of prior., (© 2024 The Author(s). Statistics in Medicine published by John Wiley & Sons Ltd.)
- Published
- 2024
- Full Text
- View/download PDF
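The "expected power" idea in the abstract above — averaging frequentist power over a prior on the ICC rather than plugging in a point value — can be sketched for a two-arm parallel cluster randomized trial with continuous outcomes. All numbers (prior mean and SD, cluster size, effect size) are invented for illustration, and simple grid integration of the truncated Normal prior stands in for whatever computation the paper actually uses:

```python
from math import sqrt, exp
from statistics import NormalDist

N = NormalDist()

def power_parallel_crt(k, m, rho, delta, alpha=0.05):
    """Power of a two-arm parallel CRT: k clusters/arm, m patients/cluster,
    ICC rho, standardised effect delta, unit outcome variance."""
    de = 1 + (m - 1) * rho              # design effect
    se = sqrt(2 * de / (k * m))         # SE of the difference in means
    return N.cdf(delta / se - N.inv_cdf(1 - alpha / 2))

def expected_power(k, m, delta, mu=0.05, sd=0.02, grid=2001):
    """Average power over a Truncated-Normal(mu, sd) prior for the ICC on
    (0, 1), approximated on a grid."""
    rhos = [(i + 0.5) / grid for i in range(grid)]
    w = [exp(-0.5 * ((r - mu) / sd) ** 2) for r in rhos]
    return sum(wi * power_parallel_crt(k, m, r, delta)
               for wi, r in zip(w, rhos)) / sum(w)

# Smallest number of clusters per arm controlling expected power at 80%
k = 2
while expected_power(k, m=20, delta=0.3) < 0.8:
    k += 1
print(k)
```

The same loop with a point ICC in place of the prior recovers the conventional frequentist sample size, so the two approaches can be compared directly.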
5. A Bayesian approach to pilot-pivotal trials for bioequivalence assessment.
- Author
-
Lv D, Grayling MJ, Zhang X, Zhao Q, and Zheng H
- Subjects
- Humans, Bayes Theorem, Computer Simulation, Sample Size, Therapeutic Equivalency, Clinical Trials as Topic, Research Design
- Abstract
Background: To demonstrate bioequivalence between two drug formulations, a pilot trial is often conducted prior to a pivotal trial to assess feasibility and gain preliminary information about the treatment effect. Due to the limited sample size, it is not recommended to perform significance tests at the conventional 5% level using pilot data to determine if a pivotal trial should take place. Whilst some authors suggest relaxing the significance level, a Bayesian framework provides an alternative for informing the decision-making. Moreover, a Bayesian approach also readily permits possible incorporation of pilot data in priors for the parameters that underpin the pivotal trial., Methods: We consider two-sequence, two-period crossover designs that compare test (T) and reference (R) treatments. We propose a robust Bayesian hierarchical model, embedded with a scaling factor, to elicit a Go/No-Go decision using predictive probabilities. Following a Go decision, the final analysis to formally establish bioequivalence can leverage both the pilot and pivotal trial data jointly. A simulation study is performed to evaluate trial operating characteristics., Results: Compared with conventional procedures, our proposed method improves the decision-making to correctly allocate a Go decision in scenarios of bioequivalence. By choosing an appropriate threshold, the probability of correctly (incorrectly) making a No-Go (Go) decision can be ensured at a desired target level. Using both pilot and pivotal trial data in the final analysis can result in a higher chance of declaring bioequivalence. The false positive rate can be maintained in situations when T and R are not bioequivalent., Conclusions: The proposed methodology is novel and effective in different stages of bioequivalence assessment. It can greatly enhance the decision-making process in bioequivalence trials, particularly in situations with a small sample size., (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
6. Bayesian sample size determination in basket trials borrowing information between subsets.
- Author
-
Zheng H, Grayling MJ, Mozgunov P, Jaki T, and Wason JMS
- Subjects
- Humans, Sample Size, Bayes Theorem, Computer Simulation, Research Design
- Abstract
Basket trials are increasingly used for the simultaneous evaluation of a new treatment in various patient subgroups under one overarching protocol. We propose a Bayesian approach to sample size determination in basket trials that permits borrowing of information between commensurate subsets. Specifically, we consider a randomized basket trial design where patients are randomly assigned to the new treatment or control within each trial subset ("subtrial" for short). Closed-form sample size formulae are derived to ensure that each subtrial has a specified chance of correctly deciding whether the new treatment is superior to or not better than the control by some clinically relevant difference. Given prespecified levels of pairwise (in)commensurability, the subtrial sample sizes are solved simultaneously. The proposed Bayesian approach resembles the frequentist formulation of the problem in yielding comparable sample sizes for circumstances of no borrowing. When borrowing is enabled between commensurate subtrials, a considerably smaller trial sample size is required compared to the widely implemented approach of no borrowing. We illustrate the use of our sample size formulae with two examples based on real basket trials. A comprehensive simulation study further shows that the proposed methodology can maintain the true positive and false positive rates at desired levels., (© The Author 2022. Published by Oxford University Press.)
- Published
- 2023
- Full Text
- View/download PDF
7. A hybrid approach to comparing parallel-group and stepped-wedge cluster-randomized trials with a continuous primary outcome when there is uncertainty in the intra-cluster correlation.
- Author
-
Sarkodie SK, Wason JM, and Grayling MJ
- Subjects
- Humans, Cross-Sectional Studies, Uncertainty, Randomized Controlled Trials as Topic, Sample Size, Cluster Analysis, Research Design
- Abstract
Background/aims: To evaluate how uncertainty in the intra-cluster correlation impacts whether a parallel-group or stepped-wedge cluster-randomized trial design is more efficient in terms of the required sample size, in the case of cross-sectional stepped-wedge cluster-randomized trials and continuous outcome data., Methods: We motivate our work by reviewing how the intra-cluster correlation and standard deviation were justified in 54 health technology assessment reports on cluster-randomized trials. To enable uncertainty at the design stage to be incorporated into the design specification, we then describe how sample size calculation can be performed for cluster-randomized trials in the 'hybrid' framework, which places priors on design parameters and controls the expected power in place of the conventional frequentist power. Comparison of the parallel-group and stepped-wedge cluster-randomized trial designs is conducted by placing Beta and truncated Normal priors on the intra-cluster correlation, and a Gamma prior on the standard deviation., Results: Many health technology assessment reports did not adhere to the Consolidated Standards of Reporting Trials guideline of indicating the uncertainty around the assumed intra-cluster correlation, while others did not justify the assumed intra-cluster correlation or standard deviation. Even for a prior intra-cluster correlation distribution with a small mode, moderate prior densities on high intra-cluster correlation values can lead to a stepped-wedge cluster-randomized trial being more efficient because of the degree to which a stepped-wedge cluster-randomized trial is more efficient for high intra-cluster correlations. With careful specification of the priors, the designs in the hybrid framework can become more robust to, for example, an unexpectedly large value of the outcome variance., Conclusion: When there is difficulty obtaining a reliable value for the intra-cluster correlation to assume at the design stage, the proposed methodology offers an appealing approach to sample size calculation. Often, uncertainty in the intra-cluster correlation will mean a stepped-wedge cluster-randomized trial is more efficient than a parallel-group cluster-randomized trial design.
- Published
- 2023
- Full Text
- View/download PDF
8. Point estimation following a two-stage group sequential trial.
- Author
-
Grayling MJ and Wason JM
- Subjects
- Bias, Likelihood Functions
- Abstract
Repeated testing in a group sequential trial can result in bias in the maximum likelihood estimate of the unknown parameter of interest. Many authors have therefore proposed adjusted point estimation procedures, which attempt to reduce such bias. Here, we describe nine possible point estimators within a common general framework for a two-stage group sequential trial. We then contrast their performance in five example trial settings, examining their conditional and marginal biases and residual mean square error. By focusing on the case of a trial with a single interim analysis, additional new results aiding the determination of the estimators are given. Our findings demonstrate that the uniform minimum variance unbiased estimator, whilst being marginally unbiased, often has large conditional bias and residual mean square error. If one is concerned solely about inference on progression to the second trial stage, the conditional uniform minimum variance unbiased estimator may be preferred. Two estimators, termed mean adjusted estimators, which attempt to reduce the marginal bias, arguably perform best in terms of the marginal residual mean square error. In all, one should choose an estimator accounting for its conditional and marginal biases and residual mean square error; the most suitable estimator will depend on relative desires to minimise each of these factors. If one cares solely about the conditional and marginal biases, the conditional maximum likelihood estimate may be preferred provided lower and upper stopping boundaries are included. If the conditional and marginal residual mean square error are also of concern, two mean adjusted estimators perform well.
- Published
- 2023
- Full Text
- View/download PDF
9. Optimised point estimators for multi-stage single-arm phase II oncology trials.
- Author
-
Grayling MJ and Mander AP
- Subjects
- Humans, Medical Oncology, Bias, Neoplasms
- Abstract
The uniform minimum variance unbiased estimator (UMVUE) is, by definition, a solution to removing bias in estimation following a multi-stage single-arm trial with a primary dichotomous outcome. However, the UMVUE is known to have large residual mean squared error (RMSE). Therefore, we develop an optimisation approach to finding estimators with reduced RMSE for many response rates, which attain low bias. We demonstrate that careful choice of the optimisation parameters can lead to an estimator with often substantially reduced RMSE, without the introduction of appreciable bias.
- Published
- 2022
- Full Text
- View/download PDF
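The UMVUE that the abstract above takes as its starting point has a known closed form for two-stage binomial designs (due to Girshick, Mosteller and Savage, and applied to Simon designs by Jung and Kim). A sketch, with its defining unbiasedness property checked by enumerating every possible trial outcome; the design parameters are illustrative, not taken from the paper:

```python
from math import comb

def umvue(n1, r1, n2, x1, s=None):
    """UMVUE of the response rate after a two-stage design that stops
    when stage-1 responses x1 <= r1; s is the total response count if
    the trial continued to stage 2."""
    if x1 <= r1:                      # stopped at stage 1
        return x1 / n1
    num = sum(comb(n1 - 1, j - 1) * comb(n2, s - j)
              for j in range(r1 + 1, min(n1, s) + 1))
    den = sum(comb(n1, j) * comb(n2, s - j)
              for j in range(r1 + 1, min(n1, s) + 1))
    return num / den

def expectation(n1, r1, n2, p):
    """E[UMVUE] under true response rate p, by exact enumeration."""
    e = 0.0
    for x1 in range(n1 + 1):
        p1 = comb(n1, x1) * p**x1 * (1 - p)**(n1 - x1)
        if x1 <= r1:
            e += p1 * umvue(n1, r1, n2, x1)
        else:
            for x2 in range(n2 + 1):
                p2 = comb(n2, x2) * p**x2 * (1 - p)**(n2 - x2)
                e += p1 * p2 * umvue(n1, r1, n2, x1, x1 + x2)
    return e

# Illustrative design: n1 = 10, continue if more than r1 = 1 respond, n2 = 19
print(expectation(10, 1, 19, 0.3))  # equals 0.3 up to floating-point error
```

The paper's point is that this estimator, despite being exactly unbiased, can have large RMSE, motivating the optimised alternatives.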
10. Bayesian modelling strategies for borrowing of information in randomised basket trials.
- Author
-
Ouma LO, Grayling MJ, Wason JMS, and Zheng H
- Abstract
Basket trials are an innovative precision medicine clinical trial design evaluating a single targeted therapy across multiple diseases that share a common characteristic. To date, most basket trials have been conducted in early-phase oncology settings, for which several Bayesian methods permitting information sharing across subtrials have been proposed. With the increasing interest of implementing randomised basket trials, information borrowing could be exploited in two ways: considering the commensurability of either the treatment effects or the outcomes specific to each of the treatment groups between the subtrials. In this article, we extend a previous analysis model based on distributional discrepancy for borrowing over the subtrial treatment effects ('treatment effect borrowing', TEB) to borrowing over the subtrial groupwise responses ('treatment response borrowing', TRB). Simulation results demonstrate that both modelling strategies provide substantial gains over an approach with no borrowing. TRB outperforms TEB especially when subtrial sample sizes are small on all operational characteristics, while the latter has considerable gains in performance over TRB when subtrial sample sizes are large, or the treatment effects and groupwise mean responses are noticeably heterogeneous across subtrials. Further, we notice that TRB and TEB can potentially lead to different conclusions in the analysis of real data., Competing Interests: None to be declared., (© 2022 The Authors. Journal of the Royal Statistical Society: Series C (Applied Statistics) published by John Wiley & Sons Ltd on behalf of Royal Statistical Society.)
- Published
- 2022
- Full Text
- View/download PDF
11. Subgroup analyses in randomized controlled trials frequently categorized continuous subgroup information.
- Author
-
Williamson SF, Grayling MJ, Mander AP, Noor NM, Savage JS, Yap C, and Wason JMS
- Subjects
- Humans, Sample Size, Randomized Controlled Trials as Topic
- Abstract
Background and Objectives: To investigate how subgroup analyses of published Randomized Controlled Trials (RCTs) are performed when subgroups are created from continuous variables., Methods: We carried out a review of RCTs published in 2016-2021 that included subgroup analyses. Information was extracted on whether any of the subgroups were based on continuous variables and, if so, how they were analyzed., Results: Out of 428 reviewed papers, 258 (60.4%) reported RCTs with a subgroup analysis. Of these, 178/258 (69%) had at least one subgroup formed from a continuous variable and 14/258 (5.4%) were unclear. The vast majority (169/178, 94.9%) dichotomized the continuous variable and treated the subgroup as categorical. The most common way of dichotomizing was using a pre-specified cutpoint (129/169, 76.3%), followed by a data-driven cutpoint (26/169, 15.4%), such as the median., Conclusion: It is common for subgroup analyses to use continuous variables to define subgroups. The vast majority dichotomize the continuous variable and, consequently, may lose substantial amounts of statistical information (equivalent to reducing the sample size by at least a third). More advanced methods that can improve efficiency, through optimally choosing cutpoints or directly using the continuous information, are rarely used., (Copyright © 2022 The Author(s). Published by Elsevier Inc. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
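The information loss from dichotomizing that the conclusion above quantifies ("equivalent to reducing the sample size by at least a third") can be illustrated with a small simulation: the same simulated trials analyzed on the continuous scale and after a pre-specified cutpoint split. The sample sizes, effect size, and tests below are invented for illustration:

```python
import random
from math import sqrt
from statistics import NormalDist, mean

Z = NormalDist().inv_cdf(0.975)

def one_trial(n, delta, rng):
    """Simulate one two-arm trial; return (continuous rejects, dichotomized rejects)."""
    a = [rng.gauss(0, 1) for _ in range(n)]        # control arm
    b = [rng.gauss(delta, 1) for _ in range(n)]    # treatment arm
    # continuous analysis: two-sample z-test (unit variance treated as known)
    z_cont = (mean(b) - mean(a)) / sqrt(2 / n)
    # dichotomized analysis: "responder" = value above the pre-specified cutpoint 0
    pa, pb = mean(x > 0 for x in a), mean(x > 0 for x in b)
    pbar = (pa + pb) / 2
    z_dich = (pb - pa) / sqrt(2 * pbar * (1 - pbar) / n)
    return abs(z_cont) > Z, abs(z_dich) > Z

rng = random.Random(1)
results = [one_trial(100, 0.4, rng) for _ in range(2000)]
power_cont = mean(r[0] for r in results)
power_dich = mean(r[1] for r in results)
print(power_cont, power_dich)
```

With these settings the continuous analysis has markedly higher empirical power than the responder analysis of the identical data, which is the efficiency loss the review describes.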
12. A stochastically curtailed single-arm phase II trial design for binary outcomes.
- Author
-
Law M, Grayling MJ, and Mander AP
- Subjects
- Computer Simulation, Humans, Sample Size, Research Design
- Abstract
Phase II clinical trials are a critical aspect of the drug development process. With drug development costs ever increasing, novel designs that can improve the efficiency of phase II trials are extremely valuable. Phase II clinical trials for cancer treatments often measure a binary outcome. The final trial decision is generally to continue or cease development. When this decision is based solely on the result of a hypothesis test, the result may be known with certainty before the planned end of the trial. Unfortunately, there is often no opportunity for early stopping when this occurs. Some existing designs do permit early stopping in this case, accordingly reducing the required sample size and potentially speeding up drug development. However, more improvements can be achieved by stopping early when the final trial decision is very likely, rather than certain, known as stochastic curtailment. While some authors have proposed approaches of this form, these approaches have various limitations. In this work we address these limitations by proposing new design approaches for single-arm phase II binary outcome trials that use stochastic curtailment. We use exact distributions, avoid simulation, consider a wider range of possible designs and permit early stopping for promising treatments. As a result, we are able to obtain trial designs that have considerably reduced sample sizes on average.
- Published
- 2022
- Full Text
- View/download PDF
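The stochastic curtailment idea in the abstract above rests on conditional power: the probability, given the responses observed so far, that the final test will still reject. For a single-arm binary-outcome trial this is an exact binomial tail probability. A sketch, with an invented design rather than one of the paper's:

```python
from math import comb

def cond_power(n, r, m, x, p):
    """P(more than r responses among n patients | x responses in the
    first m), i.e. the probability the final test still rejects, if the
    true response rate is p. Exact binomial tail, no simulation."""
    need, rem = r + 1 - x, n - m
    if need <= 0:
        return 1.0          # rejection already guaranteed
    if need > rem:
        return 0.0          # rejection already impossible
    return sum(comb(rem, j) * p**j * (1 - p)**(rem - j)
               for j in range(need, rem + 1))

# Illustrative single-stage design: reject H0: p <= 0.2 if more than
# r = 8 of n = 29 patients respond; design alternative p1 = 0.4.
# Stochastic curtailment would stop for futility once this drops
# below some threshold, e.g. 0.1.
n, r, p1 = 29, 8, 0.4
x = 2                        # responses observed so far
for m in (10, 15, 20, 25):   # patients assessed so far
    print(m, round(cond_power(n, r, m, x, p1), 3))
```

The `need <= 0` and `need > rem` branches are the non-stochastic curtailment cases the abstract mentions, where the final decision is known with certainty before the planned end of the trial.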
13. When is a two-stage single-arm trial efficient? An evaluation of the impact of outcome delay.
- Author
-
Mukherjee A, Wason JMS, and Grayling MJ
- Subjects
- Humans, Medical Oncology, Sample Size, Treatment Outcome, Neoplasms therapy, Research Design
- Abstract
Background: Simon's two-stage design is a widely used adaptive design, particularly in phase II oncology trials due to its simplicity and efficiency. However, its efficiency can be adversely affected when the primary end-point takes time to observe, as is common in practice., Methods: We propose an optimal design, taking the delay in observing treatment outcome into consideration and compare the efficiency gained from using Simon's design over a single-stage design for real-life oncology trials. Based on the results, we provide a general rule-of-thumb for determining whether a two-stage single-arm design can provide any added advantage over a single-stage design, given the recruitment rate and primary end-point length., Results: We observed an average 15-30% loss in the estimated efficiency gain in real oncology trials that used Simon's design due to the delay in observing the treatment outcome. The delay-optimal design provides some advantage over Simon's design in terms of reduced sample size when the delay is large compared to the recruitment length., Discussion: Simon's two-stage design provides large benefit over a single-stage design, in terms of reduced sample size, when the primary end-point length is no more than 10% of the total recruitment time. It provides no efficiency advantage when this ratio is above 50%., Competing Interests: Conflict of interest statement The authors declare no conflict of interest., (Copyright © 2022 Elsevier Ltd. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
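The interplay of recruitment rate and endpoint delay described in the Discussion above can be illustrated with the textbook Simon optimal design for p0 = 0.1, p1 = 0.3 (alpha = 0.05, beta = 0.2): patients who accrue while the stage-1 outcomes mature ("overrun") erode the expected-sample-size saving. The simple overrun model below is my own simplification for illustration, not the paper's delay-optimal design:

```python
from math import comb

def pet(n1, r1, p):
    """Probability of early termination: stage-1 responses <= r1."""
    return sum(comb(n1, x) * p**x * (1 - p)**(n1 - x)
               for x in range(r1 + 1))

def expected_n(n1, r1, n, p, overrun=0):
    """Expected enrolment when `overrun` extra patients accrue while
    waiting for the stage-1 outcomes to be observed."""
    stop_at = min(n, n1 + overrun)   # enrolment by the interim decision
    q = pet(n1, r1, p)
    return q * stop_at + (1 - q) * n

# Simon's optimal design for p0 = 0.1, p1 = 0.3: r1/n1 = 1/10, r/n = 5/29.
# overrun = recruitment rate x endpoint length; at 19 the interim is moot.
for overrun in (0, 5, 10, 19):
    print(overrun, round(expected_n(10, 1, 29, 0.1, overrun), 1))
```

With no delay the design's expected sample size under H0 is about 15 of the maximum 29; once the overrun reaches n - n1 = 19, the interim analysis saves nothing, matching the paper's finding that savings vanish as the endpoint length grows relative to recruitment time.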
14. Adaptive Designs: Benefits and Cautions for Neurosurgery Trials.
- Author
-
Mukherjee A, Grayling MJ, and Wason JMS
- Subjects
- Humans, Neurosurgical Procedures, Randomized Controlled Trials as Topic, Research Design, Neurosurgery
- Abstract
Background: It is well accepted that randomized controlled trials provide the greatest quality of evidence about effectiveness and safety of new interventions. In neurosurgery, randomized controlled trials face challenges, with their use remaining relatively low compared with other clinical areas. Adaptive designs have emerged as a method for improving the efficiency and patient benefit of trials. They allow modifications to the trial design to be made as patient outcome data are collected. The benefit they provide is highly variable, predominantly governed by the time taken to observe the primary endpoint compared with the planned recruitment rate. They also face challenges in design, conduct, and reporting., Methods: We provide an overview of the benefits and challenges of adaptive designs, with a focus on neurosurgery applications. To investigate how often an adaptive design may be advantageous in neurosurgery, we extracted data on recruitment rates and endpoint lengths for ongoing neurosurgery trials registered in ClinicalTrials.gov., Results: We found that a majority of neurosurgery trials had a relatively short endpoint length compared with the planned recruitment period and therefore may benefit from an adaptive trial. However, we did not identify any ongoing ClinicalTrials.gov registered neurosurgery trials that mentioned using an adaptive design., Conclusions: Adaptive designs may provide benefits to neurosurgery trials and should be considered for use more widely. Use of some types of adaptive design, such as multiarm multistage, may further increase the number of interventions that can be tested with limited patient and financial resources., (Copyright © 2021 Elsevier Inc. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
15. Response adaptive intervention allocation in stepped-wedge cluster randomized trials.
- Author
-
Grayling MJ, Wason JMS, and Villar SS
- Subjects
- Cluster Analysis, Computer Simulation, Humans, Randomized Controlled Trials as Topic, Treatment Outcome, Research Design
- Abstract
Background: Stepped-wedge cluster randomized trial (SW-CRT) designs are often used when there is a desire to provide an intervention to all enrolled clusters, because of a belief that it will be effective. However, given there should be equipoise at trial commencement, there has been discussion around whether a pre-trial decision to provide the intervention to all clusters is appropriate. In pharmaceutical drug development, a solution to a similar desire to provide more patients with an effective treatment is to use a response adaptive (RA) design., Methods: We introduce a way in which RA design could be incorporated in an SW-CRT, permitting modification of the intervention allocation during the trial. The proposed framework explicitly permits a balance to be sought between power and patient benefit considerations. A simulation study evaluates the methodology., Results: In one scenario, for one particular RA design, the proportion of cluster-periods spent in the intervention condition was observed to increase from 32.2% to 67.9% as the intervention effect was increased. A cost of this was a 6.2% power drop compared to a design that maximized power by fixing the proportion of time in the intervention condition at 45.0%, regardless of the intervention effect., Conclusions: An RA approach may be most applicable to settings for which the intervention has substantial individual or societal benefit considerations, potentially in combination with notable safety concerns. In such a setting, the proposed methodology may routinely provide the desired adaptability of the roll-out speed, with only a small cost to the study's power., (© 2022 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.)
- Published
- 2022
- Full Text
- View/download PDF
16. Conditional power and friends: The why and how of (un)planned, unblinded sample size recalculations in confirmatory trials.
- Author
-
Kunzmann K, Grayling MJ, Lee KM, Robertson DS, Rufibach K, and Wason JMS
- Subjects
- Humans, Sample Size, Uncertainty, Friends, Research Design
- Abstract
Adapting the final sample size of a trial to the evidence accruing during the trial is a natural way to address planning uncertainty. Since the sample size is usually determined by an argument based on the power of the trial, an interim analysis raises the question of how the final sample size should be determined conditional on the accrued information. To this end, we first review and compare common approaches to estimating conditional power, which is often used in heuristic sample size recalculation rules. We then discuss the connection of heuristic sample size recalculation and optimal two-stage designs, demonstrating that the latter is the superior approach in a fully preplanned setting. Hence, unplanned design adaptations should only be conducted as reaction to trial-external new evidence, operational needs to violate the originally chosen design, or post hoc changes in the optimality criterion but not as a reaction to trial-internal data. We are able to show that commonly discussed sample size recalculation rules lead to paradoxical adaptations where an initially planned optimal design is not invariant under the adaptation rule even if the planning assumptions do not change. Finally, we propose two alternative ways of reacting to newly emerging trial-external evidence in ways that are consistent with the originally planned design to avoid such inconsistencies., (© 2022 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.)
- Published
- 2022
- Full Text
- View/download PDF
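The conditional power quantity reviewed in the abstract above has a standard closed form for a normally distributed endpoint under the Brownian-motion formulation of a group sequential trial. A sketch showing the three common drift choices at which it is evaluated (the interim z-value and information fraction are invented for illustration):

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()

def conditional_power(z1, t, theta, alpha=0.025):
    """Conditional power at information fraction t, given interim
    z-value z1, for a one-sided level-alpha test; theta is the drift,
    i.e. the expected z-value at full information."""
    z_alpha = N.inv_cdf(1 - alpha)
    return N.cdf((z1 * sqrt(t) + theta * (1 - t) - z_alpha) / sqrt(1 - t))

z1, t = 1.0, 0.5                                       # illustrative interim
theta_design = N.inv_cdf(1 - 0.025) + N.inv_cdf(0.8)   # drift for 80% planned power
print(conditional_power(z1, t, 0.0))                   # under the null
print(conditional_power(z1, t, z1 / sqrt(t)))          # under the current trend
print(conditional_power(z1, t, theta_design))          # under the design alternative
```

The three evaluations typically disagree, which is one reason heuristic recalculation rules built on conditional power can behave inconsistently, as the paper's comparison with preplanned optimal two-stage designs shows.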
17. Advantages of multi-arm non-randomised sequentially allocated cohort designs for Phase II oncology trials.
- Author
-
Mossop H, Grayling MJ, Gallagher FA, Welsh SJ, Stewart GD, and Wason JMS
- Subjects
- Cohort Studies, Humans, Neoplasms pathology, Sample Size, Treatment Outcome, Adaptive Clinical Trials as Topic methods, Clinical Trials, Phase II as Topic methods, Computer Simulation standards, Medical Oncology methods, Neoplasms drug therapy, Non-Randomized Controlled Trials as Topic methods, Research Design standards
- Abstract
Background: Efficient trial designs are required to prioritise promising drugs within Phase II trials. Adaptive designs are examples of such designs, but their efficiency is reduced if there is a delay in assessing patient responses to treatment., Methods: Motivated by the WIRE trial in renal cell carcinoma (NCT03741426), we compare three trial approaches to testing multiple treatment arms: (1) single-arm trials in sequence with interim analyses; (2) a parallel multi-arm multi-stage trial and (3) the design used in WIRE, which we call the Multi-Arm Sequential Trial with Efficient Recruitment (MASTER) design. The MASTER design recruits patients to one arm at a time, pausing recruitment to an arm when it has recruited the required number for an interim analysis. We conduct a simulation study to compare how long the three different trial designs take to evaluate a number of new treatment arms., Results: The parallel multi-arm multi-stage and the MASTER design are much more efficient than separate trials. The MASTER design provides extra efficiency when there is endpoint delay, or recruitment is very quick., Conclusions: We recommend the MASTER design as an efficient way of testing multiple promising cancer treatments in non-comparative Phase II trials., (© 2021. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
18. Improving power in PSA response analyses of metastatic castration-resistant prostate cancer trials.
- Author
-
Grayling MJ, McMenamin M, Chandler R, Heer R, and Wason JMS
- Subjects
- Clinical Trials as Topic, Humans, Male, Prostate-Specific Antigen drug effects, Prostatic Neoplasms, Castration-Resistant immunology, Treatment Outcome, Drug Monitoring methods, Prostatic Neoplasms, Castration-Resistant drug therapy
- Abstract
Background: To determine how much an augmented analysis approach could improve the efficiency of prostate-specific antigen (PSA) response analyses in clinical practice. PSA response rates are commonly used outcome measures in metastatic castration-resistant prostate cancer (mCRPC) trial reports. PSA response is evaluated by comparing continuous PSA data (e.g., change from baseline) to a threshold (e.g., 50% reduction). Consequently, information in the continuous data is discarded. Recent papers have proposed an augmented approach that retains the conventional response rate, but employs the continuous data to improve precision of estimation., Methods: A literature review identified published prostate cancer trials that included a waterfall plot of continuous PSA data. This continuous data was extracted to enable the conventional and augmented approaches to be compared., Results: Sixty-four articles, reporting results for 78 mCRPC treatment arms, were re-analysed. The median efficiency gain from using the augmented analysis, in terms of the implied increase to the sample size of the original study, was 103.2% (IQR [89.8,190.9%])., Conclusions: Augmented PSA response analysis requires no additional data to be collected and can be performed easily using available software. It improves precision of estimation to a degree that is equivalent to a substantial sample size increase. The implication of this work is that prostate cancer trials using PSA response as a primary endpoint could be delivered with fewer participants and, therefore, more rapidly with reduced cost., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
19. Two-Stage Single-Arm Trials Are Rarely Analyzed Effectively or Reported Adequately.
- Author
-
Grayling MJ and Mander AP
- Subjects
- Clinical Trials as Topic methods, Clinical Trials as Topic standards, Humans, Reproducibility of Results, Research Design trends, Research Design standards
- Abstract
Purpose: Two-stage single-arm designs have historically been the most common design used in phase II oncology. They remain a mainstay today, particularly for trials in rare subgroups. Consequently, it is imperative such studies be designed, analyzed, and reported effectively. We comprehensively review such trials to examine whether this is the case., Methods: Oncology trials that used Simon's two-stage design over a 5-year period were identified and reviewed. They were evaluated for whether they reported sufficient design (eg, required sample size) and analysis (eg, CI) details. Articles that did not adjust their inference for the incorporation of an interim analysis were also reanalyzed., Results: Four hundred twenty-five articles were included. Of these, just 47.5% provided the five components that ensure design reproducibility. Only 1.2% and 2.1% reported an adjusted point estimate or CI, respectively. Just 55.3% provided the final stage rejection bound, indicating many trials did not test a hypothesis for their primary outcome. Trial reanalyses suggested reported point estimates underestimated treatment effects and reported CIs were too narrow., Conclusion: Key design details of two-stage single-arm trials are often unreported. Their inference is rarely performed so as to remove bias introduced by the interim analysis. These findings are particularly alarming when considered against the growing trend in which nonrandomized trials make up a large proportion of all evidence on a treatment's effectiveness in a rare biomarker-defined patient subgroup. Future studies must improve the way they are analyzed and reported., (© 2021 by American Society of Clinical Oncology.)
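For reference, the operating characteristics of a Simon-type two-stage design (stop after n1 patients if responses are at most r1; otherwise continue to n1 + n2 patients and reject the null if total responses exceed r) reduce to simple binomial sums. A stdlib-only sketch with illustrative design parameters for p0 = 0.05 versus p1 = 0.25:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_sf(k, n, p):
    # P(X > k) for X ~ Binomial(n, p); equals 1 when k < 0
    return sum(binom_pmf(x, n, p) for x in range(max(k + 1, 0), n + 1))

def reject_prob(p, n1, r1, n2, r):
    """P(reject H0) under response rate p for a Simon-type two-stage design:
    stop after stage 1 if responses <= r1; reject if total responses > r."""
    return sum(binom_pmf(x1, n1, p) * binom_sf(r - x1, n2, p)
               for x1 in range(r1 + 1, n1 + 1))

alpha = reject_prob(0.05, n1=9, r1=0, n2=8, r=2)   # type-I error rate at p0
power = reject_prob(0.25, n1=9, r1=0, n2=8, r=2)   # power at p1
```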
- Published
- 2021
- Full Text
- View/download PDF
20. Increasing power in the analysis of responder endpoints in rheumatology: a software tutorial.
- Author
-
McMenamin M, Grayling MJ, Berglind A, and Wason JMS
- Abstract
Background: Composite responder endpoints feature frequently in rheumatology due to the multifaceted nature of many of these conditions. Current analysis methods used to analyse these endpoints discard much of the data used to classify patients as responders and are therefore highly inefficient, resulting in low power. We highlight a novel augmented methodology that uses more of the information available to improve the precision of reported treatment effects. Since these methods are more challenging to implement, we developed free, user-friendly software available in a web-based interface and as R packages. The software consists of two programs: one that supports the analysis of responder endpoints; the second that facilitates sample size estimation. We demonstrate the use of the software to conduct the analysis with both the augmented and standard analysis method using the MUSE study, a phase IIb trial in patients with systemic lupus erythematosus., Results: The software outputs similar point estimates with smaller confidence intervals for the odds ratio, risk ratio and risk difference estimators using the augmented approach. The sample size required in each arm for a future trial using the novel approach based on the MUSE data is 50 versus 135 for the standard method, translating to a reduction in required sample size of approximately 63%., Conclusions: We encourage trialists to use the software demonstrated to implement the augmented methodology in future studies to improve efficiency., (© 2021. The Author(s).)
- Published
- 2021
- Full Text
- View/download PDF
21. Treatment allocation strategies for umbrella trials in the presence of multiple biomarkers: A comparison of methods.
- Author
-
Ouma LO, Grayling MJ, Zheng H, and Wason J
- Subjects
- Bayes Theorem, Biomarkers, Computer Simulation, Humans, Random Allocation, Research Design
- Abstract
Umbrella trials are an innovative trial design where different treatments are matched with subtypes of a disease, with the matching typically based on a set of biomarkers. Consequently, when patients can be positive for more than one biomarker, they may be eligible for multiple treatment arms. In practice, different approaches could be applied to allocate patients who are positive for multiple biomarkers to treatments. However, to date there has been little exploration of how these approaches compare statistically. We conduct a simulation study to compare five approaches to handling treatment allocation in the presence of multiple biomarkers - equal randomisation; randomisation with fixed probability of allocation to control; Bayesian adaptive randomisation (BAR); constrained randomisation; and hierarchy of biomarkers. We evaluate these approaches under different scenarios in the context of a hypothetical phase II biomarker-guided umbrella trial. We define the pairings representing the pre-trial expectations on efficacy as linked pairs, and the other biomarker-treatment pairings as unlinked. The hierarchy and BAR approaches have the highest power to detect a treatment-biomarker linked interaction. However, the hierarchy procedure performs poorly if the pre-specified treatment-biomarker pairings are incorrect. The BAR method allocates a higher proportion of patients who are positive for multiple biomarkers to promising treatments when an unlinked interaction is present. In most scenarios, the constrained randomisation approach best balances allocation to all treatment arms. Pre-specification of an approach to deal with treatment allocation in the presence of multiple biomarkers is important, especially when overlapping subgroups are likely., (© 2021 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.)
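Two of the compared allocation strategies are simple to state in code. The sketch below, with hypothetical biomarker and arm names, contrasts equal randomisation among all eligible arms with a pre-specified hierarchy of biomarkers:

```python
import random

def allocate_equal(positive_biomarkers, arm_of):
    """Equal randomisation: choose uniformly among the arms the patient
    is eligible for, given the biomarkers they are positive for."""
    eligible = [arm_of[b] for b in positive_biomarkers]
    return random.choice(eligible)

def allocate_hierarchy(positive_biomarkers, arm_of, hierarchy):
    """Hierarchy of biomarkers: assign by the highest-ranked positive biomarker."""
    for b in hierarchy:
        if b in positive_biomarkers:
            return arm_of[b]
    raise ValueError("patient matches no biomarker in the hierarchy")

arm_of = {"B1": "drug_A", "B2": "drug_B", "B3": "drug_C"}  # hypothetical pairings
random.seed(0)
pick = allocate_equal({"B1", "B3"}, arm_of)
fixed = allocate_hierarchy({"B1", "B3"}, arm_of, hierarchy=["B2", "B3", "B1"])
```

As the abstract notes, the hierarchy approach is only as good as the pre-specified ranking, whereas adaptive approaches such as BAR would update these allocation probabilities using accruing response data.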
- Published
- 2021
- Full Text
- View/download PDF
22. Accounting for variation in the required sample size in the design of group-sequential trials.
- Author
-
Grayling MJ and Mander AP
- Subjects
- Humans, Sample Size, Clinical Trials as Topic methods, Research Design
- Abstract
Introduction: Most literature on optimal group-sequential designs focuses on minimising the expected sample size. We highlight other factors for consideration., Methods: We discuss several quantities less-often considered in adaptive design: the median and standard deviation of the random required sample size, and the probability of committing an interim error. We consider how the optimal timing of interim analyses changes when these quantities are accounted for., Results: Incorporating the standard deviation of the required sample size into an optimality framework, we demonstrate how and when this quantity means using a group-sequential approach is not optimal. The optimal timing of an interim analysis is shown to be highly dependent on the pre-specified preference for minimising the expected sample size relative to its standard deviation., Conclusions: Examining multiple factors, which measure the advantages and disadvantages of group-sequential designs, helps determine the best design for a specific trial., (Copyright © 2021 The Authors. Published by Elsevier Inc. All rights reserved.)
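The paper's central point is easy to see for a two-stage design: the required sample size N is a random quantity, equal to n1 if the trial stops at the interim and n1 + n2 otherwise, so it has a standard deviation as well as an expectation. A minimal sketch:

```python
import math

def sample_size_moments(n1, n2, p_stop):
    """Mean and SD of the random required sample size N for a two-stage
    design that stops at the interim analysis with probability p_stop."""
    mean = n1 * p_stop + (n1 + n2) * (1 - p_stop)
    var = p_stop * (1 - p_stop) * n2 ** 2   # N is a two-point distribution
    return mean, math.sqrt(var)

mean_n, sd_n = sample_size_moments(n1=50, n2=50, p_stop=0.5)
```

A design that is optimal for E[N] alone can have a large SD of N; weighting the two, e.g. minimising E[N] + λ·SD[N], shifts the optimal interim timing, as the abstract describes.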
- Published
- 2021
- Full Text
- View/download PDF
23. Innovative trial approaches in immune-mediated inflammatory diseases: current use and future potential.
- Author
-
Grayling MJ, Bigirumurame T, Cherlin S, Ouma L, Zheng H, and Wason JMS
- Abstract
Background: Despite progress that has been made in the treatment of many immune-mediated inflammatory diseases (IMIDs), there remains a need for improved treatments. Randomised controlled trials (RCTs) provide the highest form of evidence on the effectiveness of a potential new treatment regimen, but they are extremely expensive and time consuming to conduct. Consequently, much focus has been given in recent years to innovative design and analysis methods that could improve the efficiency of RCTs. In this article, we review the current use and future potential of these methods within the context of IMID trials., Methods: We provide a review of several innovative methods that would provide utility in IMID research. These include novel study designs (adaptive trials, Sequential Multi-Assignment Randomised Trials, basket, and umbrella trials) and data analysis methodologies (augmented analyses of composite responder endpoints, using high-dimensional biomarker information to stratify patients, and emulation of RCTs from routinely collected data). IMID trials are now well-placed to embrace innovative methods. For example, well-developed statistical frameworks for adaptive trial design are ready for implementation, whilst the growing availability of historical datasets makes the use of Bayesian methods particularly applicable. To assess whether and how these innovative methods have been used in practice, we conducted a review via PubMed of clinical trials pertaining to any of 51 IMIDs that were published between 2018 and 2020 in five high impact factor clinical journals., Results: Amongst 97 articles included in the review, 19 (19.6%) used an innovative design method, but most of these were relatively straightforward examples of innovative approaches. Only two (2.1%) reported the use of evidence from routinely collected data, cohorts, or biobanks. Eight (9.2%) collected high-dimensional data., Conclusions: Application of innovative statistical methodology to IMID trials has the potential to greatly improve efficiency, to generalise and extrapolate trial results, and to further personalise treatment strategies. Currently, such methods are infrequently utilised in practice. New research is required to ensure that IMID trials can benefit from the most suitable methods.
- Published
- 2021
- Full Text
- View/download PDF
24. A stochastically curtailed two-arm randomised phase II trial design for binary outcomes.
- Author
-
Law M, Grayling MJ, and Mander AP
- Subjects
- Humans, Neoplasms drug therapy, Research Design
- Abstract
Randomised controlled trials are considered the gold standard in trial design. However, phase II oncology trials with a binary outcome are often single-arm. Although a number of reasons exist for choosing a single-arm trial, the primary reason is that single-arm designs require fewer participants than their randomised equivalents. Therefore, the development of novel methodology that makes randomised designs more efficient is of value to the trials community. This article introduces a randomised two-arm binary outcome trial design that includes stochastic curtailment (SC), allowing for the possibility of stopping a trial before the final conclusions are known with certainty. In addition to SC, the proposed design involves the use of a randomised block design, which allows investigators to control the number of interim analyses. This approach is compared with existing designs that also use early stopping, through the use of a loss function comprising a weighted sum of design characteristics. Comparisons are also made using an example from a real trial. The comparisons show that for many possible loss functions, the proposed design is superior to existing designs. Further, the proposed design may be more practical by allowing a flexible number of interim analyses. One existing design produces superior design realisations when the anticipated response rate is low. However, when using this design, the probability of rejecting the null hypothesis is sensitive to misspecification of the null response rate. Therefore, when considering randomised designs in phase II, we recommend the proposed approach be preferred over other sequential designs., (© 2020 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.)
- Published
- 2021
- Full Text
- View/download PDF
25. Exact group sequential designs for two-arm experiments with Poisson distributed outcome variables.
- Author
-
Grayling MJ, Wason JMS, and Mander AP
- Abstract
We describe and compare two methods for the group sequential design of two-arm experiments with Poisson distributed data, which are based on a normal approximation and exact calculations respectively. A framework to determine near-optimal stopping boundaries is also presented. Using this framework, for a considered example, we demonstrate that a group sequential design could reduce the expected sample size under the null hypothesis by as much as 44% compared to a fixed sample approach. We conclude with a discussion of the advantages and disadvantages of the two presented procedures.
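A sketch of the normal-approximation route mentioned above: at each analysis, a Wald-type statistic comparing the two Poisson rates would be computed from the accumulated event counts and compared against the group sequential stopping boundaries. The statistic itself, with illustrative per-arm sample sizes n_a and n_b:

```python
import math

def poisson_z(x_sum_a, x_sum_b, n_a, n_b):
    """Normal-approximation test statistic for two-arm Poisson data:
    difference in estimated rates over its estimated standard error."""
    la, lb = x_sum_a / n_a, x_sum_b / n_b   # rate estimates per arm
    se = math.sqrt(la / n_a + lb / n_b)
    return (la - lb) / se

z = poisson_z(x_sum_a=60, x_sum_b=40, n_a=20, n_b=20)
```

The exact approach the abstract describes instead works directly with the Poisson distribution of the event counts, avoiding this approximation at the cost of heavier computation.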
- Published
- 2021
- Full Text
- View/download PDF
26. A Review of Bayesian Perspectives on Sample Size Derivation for Confirmatory Trials.
- Author
-
Kunzmann K, Grayling MJ, Lee KM, Robertson DS, Rufibach K, and Wason JMS
- Abstract
Sample size derivation is a crucial element of planning any confirmatory trial. The required sample size is typically derived based on constraints on the maximal acceptable Type I error rate and minimal desired power. Power depends on the unknown true effect and tends to be calculated either for the smallest relevant effect or a likely point alternative. The former might be problematic if the minimal relevant effect is close to the null, thus requiring an excessively large sample size, while the latter is dubious since it does not account for the a priori uncertainty about the likely alternative effect. A Bayesian perspective on sample size derivation for a frequentist trial can reconcile arguments about the relative a priori plausibility of alternative effects with ideas based on the relevance of effect sizes. Many suggestions as to how such "hybrid" approaches could be implemented in practice have been put forward. However, key quantities are often defined in subtly different ways in the literature. Starting from the traditional entirely frequentist approach to sample size derivation, we derive consistent definitions for the most commonly used hybrid quantities and highlight connections, before discussing and demonstrating their use in sample size derivation for clinical trials.
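One of the most common hybrid quantities is assurance: the frequentist power curve averaged over a prior on the treatment effect. A Monte Carlo sketch for a two-arm trial with a normally distributed endpoint, all numbers illustrative:

```python
import math
import random

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(theta, n_per_arm, sigma, z_alpha=1.96):
    """Frequentist power of a z-test at true effect theta."""
    return normal_cdf(theta * math.sqrt(n_per_arm / 2.0) / sigma - z_alpha)

def assurance(n_per_arm, sigma, prior_mean, prior_sd,
              draws=20000, rng=random.Random(7)):
    """Average the power curve over a normal prior on the effect."""
    return sum(power(rng.gauss(prior_mean, prior_sd), n_per_arm, sigma)
               for _ in range(draws)) / draws

pw = power(theta=0.5, n_per_arm=85, sigma=1.0)   # power at the point alternative
asr = assurance(n_per_arm=85, sigma=1.0, prior_mean=0.5, prior_sd=0.25)
```

With the prior centred on the point alternative, assurance falls below the point power (here roughly 0.75 versus 0.90), quantifying the cost of a priori uncertainty about the likely effect.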
- Published
- 2021
- Full Text
- View/download PDF
27. A review of available software for adaptive clinical trial design.
- Author
-
Grayling MJ and Wheeler GM
- Subjects
- Bayes Theorem, Biomarkers, Computer Simulation, Dose-Response Relationship, Drug, Humans, Sample Size, Adaptive Clinical Trials as Topic methods, Research Design, Software
- Abstract
Background/aims: The increasing cost of the drug development process has seen interest in the use of adaptive trial designs grow substantially. Accordingly, much research has been conducted to identify barriers to increasing the use of adaptive designs in practice. Several articles have argued that the availability of user-friendly software will be an important step in making adaptive designs easier to implement. Therefore, we present a review of the current state of software availability for adaptive trial design., Methods: We review articles from 31 journals published in 2013-2017 that relate to methodology for adaptive trials to assess how often code and software for implementing novel adaptive designs is made available at the time of publication. We contrast our findings against these journals' policies on code distribution. We also search popular code repositories, such as Comprehensive R Archive Network and GitHub, to identify further existing user-contributed software for adaptive designs. From this, we are able to direct interested parties toward solutions for their problem of interest., Results: Only 30% of included articles made their code available in some form. In many instances, articles published in journals that had mandatory requirements on code provision still did not make code available. There are several areas in which available software is currently limited or saturated. In particular, many packages are available to address group sequential design, but comparatively little code is present in the public domain to determine biomarker-guided adaptive designs., Conclusions: There is much room for improvement in the provision of software alongside adaptive design publications. In addition, while progress has been made, well-established software for various types of trial adaptation remains sparsely available.
- Published
- 2020
- Full Text
- View/download PDF
28. A web application for the design of multi-arm clinical trials.
- Author
-
Grayling MJ and Wason JM
- Subjects
- Data Interpretation, Statistical, Humans, Research Design, Sample Size, Software, Web Browser, Clinical Trials as Topic methods
- Abstract
Background: Multi-arm designs provide an effective means of evaluating several treatments within the same clinical trial. Given the large number of treatments now available for testing in many disease areas, it has been argued that their utilisation should increase. However, for any given clinical trial there are numerous possible multi-arm designs that could be used, and choosing between them can be a difficult task. This task is complicated further by a lack of available easy-to-use software for designing multi-arm trials., Results: To aid the wider implementation of multi-arm clinical trial designs, we have developed a web application for sample size calculation when using a variety of popular multiple comparison corrections. Furthermore, the application supports sample size calculation to control several varieties of power, as well as the determination of optimised arm-wise allocation ratios. It is built using the Shiny package in the R programming language, is free to access on any device with an internet browser, and requires no programming knowledge to use. It incorporates a variety of features to make it easier to use, including help boxes and warning messages. Using design parameters motivated by a recently completed phase II oncology trial, we demonstrate that the application can effectively determine and evaluate complex multi-arm trial designs., Conclusions: The application provides the core information required by statisticians and clinicians to review the operating characteristics of a chosen multi-arm clinical trial design. The range of designs supported by the application is broader than other currently available software solutions. Its primary limitation, particularly from a regulatory agency point of view, is its lack of validation. However, we present an approach to efficiently confirming its results via simulation.
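As a sketch of the kind of calculation such an application performs, here is the per-arm sample size for K experimental arms compared against a shared control, using the simplest multiple comparison correction (Bonferroni; the application described also supports more refined corrections) and controlling the marginal power of each comparison:

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sigma, k_arms, alpha=0.05, beta=0.2):
    """Per-arm sample size for k_arms experimental arms vs a shared control:
    two-sided z-test with Bonferroni-adjusted significance level."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / (2 * k_arms))
    z_b = z.inv_cdf(1 - beta)
    return math.ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

n1 = n_per_arm(delta=0.5, sigma=1.0, k_arms=1)   # standard two-arm trial
n3 = n_per_arm(delta=0.5, sigma=1.0, k_arms=3)   # three experimental arms
```

The shared control is where the efficiency comes from: the total here is 4 × 84 = 336 participants, versus 6 × 63 = 378 for three separate two-arm trials.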
- Published
- 2020
- Full Text
- View/download PDF
29. Sample size re-estimation in crossover trials: application to the AIM HY-INFORM study.
- Author
-
Wych J, Grayling MJ, and Mander AP
- Subjects
- Cross-Over Studies, Humans, Prospective Studies, Antihypertensive Agents therapeutic use, Clinical Trials as Topic, Hypertension drug therapy, Sample Size
- Abstract
Background: Crossover designs are commonly utilised in randomised controlled trials investigating treatments for long-term chronic illnesses. One problem with this design is that its inherent repeated measures necessitate the availability of an estimate of the within-person standard deviation (SD) to perform a sample size calculation, which may rarely be available at the design stage of a trial. Interim sample size re-estimation designs can be used to help alleviate this issue by adapting the sample size mid-way through the trial, using accrued information in a statistically robust way., Methods: The AIM HY-INFORM study is part of the Informative Markers in Hypertension (AIM HY) Programme and comprises two crossover trials, each with a planned recruitment of 600 participants. The objective of the study is to test whether blood pressure response to first line antihypertensive treatment depends on ethnicity. An interim analysis is planned to reassess the assumptions of the planned sample size for the study. The aims of this paper are: (1) to provide a formula for sample size re-estimation in both crossover trials; and (2) to present a simulation study of the planned interim analysis to investigate alternative within-person SDs to that assumed., Results: The AIM HY-INFORM protocol sample size calculation fixes the within-person SD to be 8 mmHg, giving > 90% power for a primary treatment effect of 4 mmHg. Using the method developed here and simulating the interim sample size reassessment, if we were to see a larger within-person SD of 9 mmHg at interim, the three-period three-treatment design would require 640 participants to achieve 90% power 90% of the time. Similarly, the four-period four-treatment crossover design would require 602 participants., Conclusions: The formulas presented here provide a method for re-estimating the sample size in crossover trials. In the context of the AIM HY-INFORM study, simulating the interim analysis allows us to explore the results of a possible increase in the within-person SD from that assumed. Simulations show that without increasing the planned sample size of 600 participants, we can reasonably still expect to achieve 80% power with a small increase in the within-person SD from that assumed., Trial Registration: ClinicalTrials.gov, NCT02847338. Registered on 28 July 2016.
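The flavour of the re-estimation calculation can be sketched for the simplest case, a two-treatment AB/BA crossover (the AIM HY-INFORM designs have three and four periods, so this is illustrative only): the required sample size scales with the square of the within-person SD, and the interim step simply recomputes it using the interim SD estimate.

```python
import math
from statistics import NormalDist

def crossover_n(delta, sd_within, alpha=0.05, power=0.9):
    """Total subjects for a 2x2 (AB/BA) crossover: each subject contributes a
    within-person treatment difference with variance 2 * sd_within**2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    return math.ceil(2 * (sd_within * (z_a + z_b) / delta) ** 2)

n_planned = crossover_n(delta=4.0, sd_within=8.0)   # design-stage assumption
n_revised = crossover_n(delta=4.0, sd_within=9.0)   # after an interim re-estimate
```

Moving the assumed within-person SD from 8 to 9 mmHg inflates the requirement by roughly the factor (9/8)² ≈ 1.27, mirroring the increases reported above for the multi-period designs.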
- Published
- 2019
- Full Text
- View/download PDF
30. A Review of Perspectives on the Use of Randomization in Phase II Oncology Trials.
- Author
-
Grayling MJ, Dimairo M, Mander AP, and Jaki TF
- Subjects
- Benchmarking, Biomarkers, Tumor, Clinical Trials, Phase II as Topic standards, Clinical Trials, Phase II as Topic statistics & numerical data, Consensus, Humans, Progression-Free Survival, Random Allocation, Randomized Controlled Trials as Topic standards, Randomized Controlled Trials as Topic statistics & numerical data, Clinical Trials, Phase II as Topic methods, Neoplasms therapy, Randomized Controlled Trials as Topic methods, Research Design statistics & numerical data
- Abstract
Historically, phase II oncology trials assessed a treatment's efficacy by examining its tumor response rate in a single-arm trial. Then, approximately 25 years ago, certain statistical and pharmacological considerations ignited a debate around whether randomized designs should be used instead. Here, based on an extensive literature review, we review the arguments on either side of this debate. In particular, we describe the numerous factors that relate to the reliance of single-arm trials on historical control data and detail the trial scenarios in which there was general agreement on preferential utilization of single-arm or randomized design frameworks, such as the use of single-arm designs when investigating treatments for rare cancers. We then summarize the latest figures on phase II oncology trial design, contrasting current design choices against historical recommendations on best practice. Ultimately, we find several ways in which the design of recently completed phase II trials does not appear to align with said recommendations. For example, despite advice to the contrary, only 66.2% of the assessed trials that employed progression-free survival as a primary or coprimary outcome used a randomized comparative design. In addition, we identify that just 28.2% of the considered randomized comparative trials came to a positive conclusion as opposed to 72.7% of the single-arm trials. We conclude by describing a selection of important issues influencing contemporary design, framing this discourse in light of current trends in phase II, such as the increased use of biomarkers and recent interest in novel adaptive designs., (© The Author(s) 2019. Published by Oxford University Press.)
- Published
- 2019
- Full Text
- View/download PDF
31. Admissible multiarm stepped-wedge cluster randomized trial designs.
- Author
-
Grayling MJ, Mander AP, and Wason JMS
- Subjects
- Computer Simulation, Hip Fractures, Humans, Research Design, Cluster Analysis, Linear Models, Randomized Controlled Trials as Topic methods, Sample Size
- Abstract
Numerous publications have now addressed the principles of designing, analyzing, and reporting the results of stepped-wedge cluster randomized trials. In contrast, there is little research available pertaining to the design and analysis of multiarm stepped-wedge cluster randomized trials, utilized to evaluate the effectiveness of multiple experimental interventions. In this paper, we address this by explaining how the required sample size in these multiarm trials can be ascertained when data are to be analyzed using a linear mixed model. We then go on to describe how the design of such trials can be optimized to balance between minimizing the cost of the trial and minimizing some function of the covariance matrix of the treatment effect estimates. Using a recently commenced trial that will evaluate the effectiveness of sensor monitoring in an occupational therapy rehabilitation program for older persons after hip fracture as an example, we demonstrate that our designs could reduce the number of observations required for a fixed power level by up to 58%. Consequently, when logistical constraints permit the utilization of any one of a range of possible multiarm stepped-wedge cluster randomized trial designs, researchers should consider employing our approach to optimize their trial's efficiency., (© 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.)
- Published
- 2019
- Full Text
- View/download PDF
32. Re-formulating Gehan's design as a flexible two-stage single-arm trial.
- Author
-
Grayling MJ and Mander AP
- Subjects
- Humans, Outcome Assessment, Health Care methods, Outcome Assessment, Health Care statistics & numerical data, Algorithms, Clinical Trials, Phase II as Topic methods, Neoplasms therapy, Research Design
- Abstract
Background: Gehan's two-stage design was historically the design of choice for phase II oncology trials. One of the reasons it is less frequently used today is that it does not allow for a formal test of treatment efficacy, and therefore does not control conventional type-I and type-II error-rates., Methods: We describe how recently developed methodology for flexible two-stage single-arm trials can be used to incorporate the hypothesis test commonly associated with phase II trials into Gehan's design. We additionally detail how this hypothesis test can be optimised in order to maximise its power, and describe how the second stage sample sizes can be chosen to more readily provide the operating characteristics that were originally envisioned by Gehan. Finally, we contrast our modified Gehan designs to Simon's designs, based on two examples motivated by real clinical trials., Results: Gehan's original designs are often greatly under- or over-powered when compared to type-II error-rates typically used in phase II. However, we demonstrate that the control parameters of his design can be chosen to resolve this problem. With this, though, the modified Gehan designs have operating characteristics similar to the more familiar Simon designs., Conclusions: The trial design settings in which Gehan's design will be preferable over Simon's designs are likely limited. Provided the second stage sample sizes are chosen carefully, however, one scenario of potential utility is when the trial's primary goal is to ascertain the treatment response rate to a certain precision.
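Gehan's first stage has a simple closed form: enrol the smallest n1 such that observing zero responses is improbable (probability at most β) when the true response rate is p1, i.e. the smallest n1 with (1 − p1)^n1 ≤ β. A sketch:

```python
import math

def gehan_stage1_n(p1, beta=0.05):
    """Smallest n1 with P(0 responses in n1 patients | rate p1) <= beta,
    i.e. the smallest n1 satisfying (1 - p1)**n1 <= beta."""
    return math.ceil(math.log(beta) / math.log(1.0 - p1))

n1 = gehan_stage1_n(p1=0.2, beta=0.05)
```

With p1 = 0.2 and β = 0.05 this recovers Gehan's classic stage-1 size of 14 patients; a trial with no responses among them stops for futility.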
- Published
- 2019
- Full Text
- View/download PDF
33. Two-Stage Adaptive Designs for Three-Treatment Bioequivalence Studies.
- Author
-
Grayling MJ, Mander AP, and Wason JMS
- Abstract
Bioequivalence (BE) studies are most often conducted as crossover trials, and therefore establishing their required sample size necessitates specification of the within-person variance. Given that this specification is often difficult in practice, there has been great interest in recent years in the use of adaptive designs for BE trials. However, while numerous methods for this have now been presented, their focus has been solely on two-treatment BE studies. In some instances, it will be desired to incorporate more than a single test and reference formulation into a BE trial. It would therefore be useful to establish methodology for the design of adaptive multi-treatment BE trials, to acquire the benefits in the two-treatment setting in this more complex situation. Here, we achieve this for three-treatment studies by extending previously proposed designs for two-treatment trials. First, we discuss the additional design considerations that arise when multiple comparisons are made. Next, an extensive simulation study is employed to compare the performance of the proposed procedures. With this, we demonstrate that two-stage designs with desirable statistical operating characteristics can be readily identified for three-treatment BE trials. Supplementary materials for this article are available online.
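For context, the acceptance criterion underlying BE assessment (which any two-stage procedure must ultimately control) is the two one-sided tests (TOST) procedure: conclude equivalence if the 90% confidence interval for the geometric mean ratio of test to reference lies within the regulatory limits [0.80, 1.25]. A sketch on the log scale, using a normal quantile for simplicity where a real analysis would use a t quantile:

```python
import math
from statistics import NormalDist

def tost_be(log_ratio_est, se, lo=math.log(0.8), hi=math.log(1.25), alpha=0.05):
    """Two one-sided tests on the log scale: declare bioequivalence if the
    (1 - 2*alpha) confidence interval for the log geometric mean ratio
    lies strictly inside (lo, hi)."""
    z = NormalDist().inv_cdf(1 - alpha)
    ci_lo, ci_hi = log_ratio_est - z * se, log_ratio_est + z * se
    return lo < ci_lo and ci_hi < hi

ok = tost_be(log_ratio_est=math.log(1.02), se=0.05)    # precise, near-unity ratio
fail = tost_be(log_ratio_est=math.log(1.15), se=0.08)  # imprecise, off-centre ratio
```

In a multi-treatment trial of the kind the abstract describes, this assessment is repeated for each test-reference comparison, which is the source of the additional design considerations discussed.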
- Published
- 2019
- Full Text
- View/download PDF
34. Calculations involving the multivariate normal and multivariate t distributions with and without truncation.
- Author
-
Grayling MJ and Mander AP
- Abstract
In this article, we present a set of commands and Mata functions to evaluate different distributional quantities of the multivariate normal distribution and a particular type of noncentral multivariate t distribution. Specifically, their densities, distribution functions, equicoordinate quantiles, and pseudo-random vectors can be computed efficiently, in either the absence or the presence of variable truncation.
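A stdlib-only Monte Carlo sketch of one such quantity, the equicoordinate probability P(X1 ≤ c, …, Xk ≤ c) for a standard multivariate normal with common non-negative correlation ρ, using the one-factor representation X_i = √ρ·Z + √(1 − ρ)·E_i:

```python
import math
import random

def equicoordinate_prob(c, k, rho, draws=40000, rng=random.Random(42)):
    """Monte Carlo estimate of P(X_1 <= c, ..., X_k <= c) for a standard
    multivariate normal with common correlation rho >= 0, simulated via
    the one-factor representation X_i = sqrt(rho)*Z + sqrt(1-rho)*E_i."""
    a, b = math.sqrt(rho), math.sqrt(1.0 - rho)
    hits = 0
    for _ in range(draws):
        z = rng.gauss(0.0, 1.0)                     # shared factor
        if all(a * z + b * rng.gauss(0.0, 1.0) <= c for _ in range(k)):
            hits += 1
    return hits / draws

p_indep = equicoordinate_prob(c=0.0, k=3, rho=0.0)  # independence: ~0.5**3
p_corr = equicoordinate_prob(c=0.0, k=3, rho=0.5)   # correlation raises it
```

The commands described in the article evaluate such quantities (and the corresponding densities, quantiles, and truncated versions) by numerical integration rather than simulation.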
- Published
- 2018
- Full Text
- View/download PDF
35. Group sequential crossover trial designs with strong control of the familywise error rate.
- Author
-
Grayling MJ, Wason JMS, and Mander AP
- Abstract
Crossover designs are an extremely useful tool to investigators, and group sequential methods have proven highly proficient at improving the efficiency of parallel group trials. Yet, group sequential methods and crossover designs have rarely been paired together. One possible explanation for this could be the absence of a formal proof of how to strongly control the familywise error rate in the case when multiple comparisons will be made. Here, we provide this proof, valid for any number of initial experimental treatments and any number of stages, when results are analyzed using a linear mixed model. We then establish formulae for the expected sample size and expected number of observations of such a trial, given any choice of stopping boundaries. Finally, utilizing the four-treatment, four-period TOMADO trial as an example, we demonstrate that group sequential methods in this setting could have reduced the trial's expected number of observations under the global null hypothesis by over 33%.
- Published
- 2018
- Full Text
- View/download PDF
36. Blinded and unblinded sample size reestimation procedures for stepped-wedge cluster randomized trials.
- Author
-
Grayling MJ, Mander AP, and Wason JMS
- Subjects
- Cross-Over Studies, Cross-Sectional Studies, Humans, Uncertainty, Biometry methods, Randomized Controlled Trials as Topic
- Abstract
The ability to accurately estimate the sample size required by a stepped-wedge (SW) cluster randomized trial (CRT) routinely depends upon the specification of several nuisance parameters. If these parameters are misspecified, the trial could be overpowered, leading to increased cost, or underpowered, enhancing the likelihood of a false negative. We address this issue here for cross-sectional SW-CRTs, analyzed with a particular linear mixed model, by proposing methods for blinded and unblinded sample size reestimation (SSRE). First, blinded estimators for the variance parameters of a SW-CRT analyzed using the Hussey and Hughes model are derived. Following this, procedures for blinded and unblinded SSRE after any time period in a SW-CRT are detailed. The performance of these procedures is then examined and contrasted using two example trial design scenarios. We find that if the two key variance parameters were underspecified by 50%, the SSRE procedures were able to increase power over the conventional SW-CRT design by up to 41%, resulting in an empirical power above the desired level. Thus, though there are practical issues to consider, the performance of the procedures means researchers should consider incorporating SSRE into future SW-CRTs., (© 2018 The Authors. Biometrical Journal Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.)
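The variance that drives these sample size calculations has a closed form. Below is a sketch of power under the Hussey and Hughes model for a cross-sectional SW-CRT, with the variance expression as given by Hussey and Hughes (2007); the design values are illustrative, and SSRE amounts to re-evaluating this power with blinded or unblinded interim estimates of sigma2_e and tau2:

```python
import math
from statistics import NormalDist

def hh_variance(X, sigma2_e, tau2, n_per_period):
    """Hussey & Hughes (2007) variance of the treatment effect estimator for a
    cross-sectional stepped-wedge CRT. X[i][t] = 1 if cluster i is treated in
    period t; sigma2_e = residual variance, tau2 = between-cluster variance."""
    I, T = len(X), len(X[0])
    s2 = sigma2_e / n_per_period              # variance of a cluster-period mean
    U = sum(sum(row) for row in X)
    W = sum(sum(X[i][t] for i in range(I)) ** 2 for t in range(T))
    V = sum(sum(row) ** 2 for row in X)
    num = I * s2 * (s2 + T * tau2)
    den = (I * U - W) * s2 + (U ** 2 + I * T * U - T * W - I * V) * tau2
    return num / den

def sw_power(X, delta, sigma2_e, tau2, n_per_period, alpha=0.05):
    var = hh_variance(X, sigma2_e, tau2, n_per_period)
    z = NormalDist()
    return z.cdf(abs(delta) / math.sqrt(var) - z.inv_cdf(1 - alpha / 2))

# standard stepped wedge: I clusters, T = I + 1 periods, one cluster steps per period
I = 6
X = [[1 if t > i else 0 for t in range(I + 1)] for i in range(I)]
pw = sw_power(X, delta=0.3, sigma2_e=1.0, tau2=0.1, n_per_period=10)
```

If the interim estimates of the variance parameters exceed those assumed at the design stage, this power falls below its target, and the SSRE procedures respond by increasing the number of observations per cluster-period or the number of clusters.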
- Published
- 2018
- Full Text
- View/download PDF
37. Blinded and unblinded sample size reestimation in crossover trials balanced for period.
- Author
- Grayling MJ, Mander AP, and Wason JMS
- Subjects
- Cross-Over Studies, Heart Transplantation, Humans, Kaplan-Meier Estimate, Models, Statistical, Regression Analysis, Sample Size, Statistics, Nonparametric, Biometry methods, Clinical Trials as Topic
- Abstract
The determination of the sample size required by a crossover trial typically depends on the specification of one or more variance components. Uncertainty about the value of these parameters at the design stage means that there is often a risk a trial may be under- or overpowered. For many study designs, this problem has been addressed by considering adaptive design methodology that allows for the reestimation of the required sample size during a trial. Here, we propose and compare several approaches for this in multitreatment crossover trials. Notably, regulators favor reestimation procedures that maintain the blinding of the treatment allocations. We therefore develop blinded estimators for the within- and between-person variances, following simple or block randomization. We demonstrate that, provided an equal number of patients are allocated to sequences that are balanced for period, the proposed estimators following block randomization are unbiased. We further provide a formula for the bias of the estimators following simple randomization. The performance of these procedures, along with that of an unblinded approach, is then examined utilizing three motivating examples, including one based on a recently completed four-treatment, four-period crossover trial. Simulation results show that the performance of the proposed blinded procedures is in many cases similar to that of the unblinded approach, and thus they are an attractive alternative. (© 2018 The Authors. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.)
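The role of blinding can be sketched with the simplest balanced case, an AB/BA design (a simplified illustration with hypothetical numbers, not the paper's block-randomization estimators): pooling period differences without treatment labels yields a variance estimate inflated by the squared treatment effect, and accounting for exactly this kind of bias is what blinded procedures must do.

```python
import random
import statistics

random.seed(2)

delta, sigma_e = 0.5, 1.0   # hypothetical treatment effect and within-person SD
n_pairs = 2000              # AB/BA pairs under block randomization

diffs = []
for _ in range(n_pairs):
    for sign in (+1, -1):   # AB then BA: treatment effect enters with opposite sign
        noise = random.gauss(0, sigma_e) - random.gauss(0, sigma_e)
        diffs.append(sign * delta + noise)

# Blinded analyst: pools period differences without knowing the sequences.
v_blind = statistics.variance(diffs)
# E[v_blind] = 2*sigma_e^2 + delta^2 = 2.25, inflated by the treatment effect.
print(round(v_blind, 2))
```

An unblinded analyst could subtract the treatment contribution directly; the blinded estimators in the paper achieve unbiasedness instead through the balance of the randomization scheme.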
- Published
- 2018
- Full Text
- View/download PDF
38. Group sequential clinical trial designs for normally distributed outcome variables.
- Author
- Grayling MJ, Wason JMS, and Mander AP
- Abstract
In a group sequential clinical trial, accumulated data are analyzed at numerous time points to allow early decisions about a hypothesis of interest. These designs have historically been recommended for their ethical, administrative, and economic benefits. In this article, we first discuss a collection of new commands for computing the stopping boundaries and required group size of various classical group sequential designs, assuming a normally distributed outcome variable. Then, we demonstrate how the performance of several designs can be compared graphically.
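The kind of computation such commands perform can be sketched directly. The snippet below builds O'Brien-Fleming-type boundaries from the widely tabulated constant for three analyses at two-sided α = 0.05, then checks the attained type I error by Monte Carlo simulation (a sketch of the underlying calculation, not the commands discussed in the article):

```python
import math
import random

random.seed(3)

def obf_bounds(K, C):
    """O'Brien-Fleming-type two-sided bounds c_k = C * sqrt(K / k)."""
    return [C * math.sqrt(K / k) for k in range(1, K + 1)]

def simulate_type1(K, bounds, reps=20000):
    """Monte Carlo type I error under H0: standardized statistics are
    built from independent N(0, 1) increments, stopping at the first
    boundary crossing."""
    rejections = 0
    for _ in range(reps):
        s = 0.0
        for k in range(1, K + 1):
            s += random.gauss(0, 1)
            if abs(s / math.sqrt(k)) >= bounds[k - 1]:
                rejections += 1
                break
    return rejections / reps

bounds = obf_bounds(3, 2.004)  # 2.004: tabulated constant for K = 3, alpha = 0.05
fwer = simulate_type1(3, bounds)
print([round(b, 3) for b in bounds], round(fwer, 3))
```

The early boundaries (about 3.47 and 2.45) are deliberately stringent, so most of the 0.05 error is spent at the final analysis, which is the defining feature of the O'Brien-Fleming shape.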
- Published
- 2018
- Full Text
- View/download PDF
39. An optimised multi-arm multi-stage clinical trial design for unknown variance.
- Author
- Grayling MJ, Wason JMS, and Mander AP
- Subjects
- Data Interpretation, Statistical, Endpoint Determination methods, Humans, Research Design, Sample Size, Analysis of Variance, Clinical Trials as Topic methods, Clinical Trials as Topic statistics & numerical data, Monte Carlo Method
- Abstract
Multi-arm multi-stage trial designs can bring notable gains in efficiency to the drug development process. However, for normally distributed endpoints, the determination of a design typically depends on the assumption that the patient variance in response is known. In practice, this will not usually be the case. To allow for unknown variance, previous research explored the performance of t-test statistics, coupled with a quantile substitution procedure for modifying the stopping boundaries, at controlling the familywise error rate to the nominal level. Here, we discuss an alternative method based on Monte Carlo simulation that allows the group size and stopping boundaries of a multi-arm multi-stage t-test to be optimised according to some nominated optimality criteria. We consider several examples, provide R code for general implementation, and show that our designs confer a familywise error rate and power close to the desired level. Consequently, this methodology will provide utility in future multi-arm multi-stage trials. (Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.)
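A stripped-down, single-stage analogue of the Monte Carlo idea (hypothetical group size, and a plain two-sample t-test rather than the paper's multi-arm multi-stage design) is to search for the critical value at which the simulated type I error hits the nominal level:

```python
import random
import statistics

random.seed(4)

def t_stat(x, y):
    """Pooled two-sample t statistic."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * statistics.variance(x)
           + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / (sp2 * (1 / nx + 1 / ny)) ** 0.5

def type1(c, n, reps=4000):
    """Monte Carlo type I error of a two-sided t-test at critical value c."""
    hits = 0
    for _ in range(reps):
        x = [random.gauss(0, 1) for _ in range(n)]
        y = [random.gauss(0, 1) for _ in range(n)]
        if abs(t_stat(x, y)) >= c:
            hits += 1
    return hits / reps

# Bisect on the critical value so the estimated type I error is ~0.05;
# the exact answer for n = 10 per arm is t(0.975, 18), about 2.10.
lo, hi = 1.5, 3.0
for _ in range(12):
    mid = (lo + hi) / 2
    if type1(mid, n=10) > 0.05:
        lo = mid
    else:
        hi = mid
c_hat = (lo + hi) / 2
print(round(c_hat, 2))
```

The multi-stage version of this search optimises group size and a whole vector of stage-wise boundaries jointly, but the principle is the same: simulate the design's operating characteristics and adjust until the error rate constraint is met.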
- Published
- 2018
- Full Text
- View/download PDF
40. Group sequential designs for stepped-wedge cluster randomised trials.
- Author
- Grayling MJ, Wason JM, and Mander AP
- Subjects
- Humans, Linear Models, Medical Futility, Patient Selection, Sample Size, Treatment Outcome, Endpoint Determination standards, Randomized Controlled Trials as Topic, Research Design standards
- Abstract
Background/aims: The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs., Methods: Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored., Results: We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate., Conclusion: The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials according to the needs of the particular trial.
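The error spending approach mentioned in the Methods can be sketched with a standard spending family, f(t) = αt^ρ, where t is the information fraction at each analysis (a common illustrative choice, not necessarily the family used in the paper): the function fixes the cumulative type I error spent by each analysis, and the per-analysis allowance is the successive increment.

```python
def error_spend(alpha, info_fracs, rho=2.0):
    """Per-analysis type I error spend under the spending function
    f(t) = alpha * t**rho, for cumulative information fractions."""
    spent, out = 0.0, []
    for t in info_fracs:
        f = alpha * t ** rho
        out.append(f - spent)  # increment spent at this analysis
        spent = f
    return out

# Four equally spaced analyses, overall alpha = 0.05.
spend = error_spend(0.05, [0.25, 0.5, 0.75, 1.0])
print(spend)
```

With ρ = 2 the increments grow across analyses, so early looks are conservative; the increments always telescope back to the overall α, which is how strong error rate control is preserved regardless of the interim schedule.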
- Published
- 2017
- Full Text
- View/download PDF
41. Stepped wedge cluster randomized controlled trial designs: a review of reporting quality and design features.
- Author
- Grayling MJ, Wason JM, and Mander AP
- Subjects
- Data Accuracy, Guidelines as Topic, Humans, Quality Control, Randomized Controlled Trials as Topic standards, Randomized Controlled Trials as Topic methods, Research Design standards
- Abstract
Background: The stepped wedge (SW) cluster randomized controlled trial (CRCT) design is being used with increasing frequency. However, there is limited published research on the quality of reporting of SW-CRCTs. We address this issue by conducting a literature review., Methods: Medline, Ovid, Web of Knowledge, the Cochrane Library, PsycINFO, the ISRCTN registry, and ClinicalTrials.gov were searched to identify investigations employing the SW-CRCT design up to February 2015. For each included completed study, information was extracted on a selection of criteria, based on the CONSORT extension to CRCTs, to assess the quality of reporting., Results: A total of 123 studies were included in our review, of which 39 were completed trial reports. The quality of reporting of SW-CRCTs varied considerably: the percentage of trials reporting each criterion was as low as 15.4%, with a median of 66.7%., Conclusions: There is much room for improvement in the quality of reporting of SW-CRCTs. This is consistent with recent findings for CRCTs. A CONSORT extension for SW-CRCTs is warranted to standardize the reporting of SW-CRCTs.
- Published
- 2017
- Full Text
- View/download PDF
42. Do single-arm trials have a role in drug development plans incorporating randomised trials?
- Author
- Grayling MJ and Mander AP
- Subjects
- Carcinoma, Non-Small-Cell Lung drug therapy, Humans, Lung Neoplasms drug therapy, Antineoplastic Protocols, Drug Discovery methods, Randomized Controlled Trials as Topic methods
- Abstract
Often, single-arm trials are used in phase II to gather the first evidence of an oncological drug's efficacy, with drug activity determined through tumour response using the RECIST criteria. Provided the null hypothesis of 'insufficient drug activity' is rejected, the next step could be a randomised two-arm trial. However, single-arm trials may provide a biased treatment effect because of patient selection, and thus, this development plan may not be an efficient use of resources. Therefore, we compare the performance of development plans consisting of single-arm trials followed by randomised two-arm trials with stand-alone single-stage or group sequential randomised two-arm trials. Through this, we are able to investigate the utility of single-arm trials and determine the most efficient drug development plans, setting our work in the context of a published single-arm non-small-cell lung cancer trial. Reference priors, reflecting the opinions of 'sceptical' and 'enthusiastic' investigators, are used to quantify and guide the suitability of single-arm trials in this setting. We observe that the explored development plans incorporating single-arm trials are often non-optimal. Moreover, even the most pessimistic reference priors have a considerable probability in favour of alternative plans. Analysis suggests expected sample size savings of up to 25% could have been made, and the issues associated with single-arm trials avoided, for the non-small-cell lung cancer treatment through direct progression to a group sequential randomised two-arm trial. Careful consideration should thus be given to the use of single-arm trials in oncological drug development when a randomised trial will follow. (Copyright © 2015 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.)
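Prior-based assessments of this kind ultimately reduce to posterior tail probabilities for a response rate. Below is a stdlib-only sketch with hypothetical numbers (9 responses in 30 patients, a flat Beta(1, 1) prior, and activity threshold p0 = 0.2; these are not the data or the reference priors of the published trial):

```python
import math

def beta_tail(a, b, p0, steps=20000):
    """P(p > p0) for p ~ Beta(a, b), by trapezoidal integration of the
    density (stdlib only; a library routine such as scipy's betainc
    would normally be preferred)."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)

    def dens(p):
        return math.exp(log_norm + (a - 1) * math.log(p) + (b - 1) * math.log(1 - p))

    h = (1.0 - p0) / steps
    total = 0.5 * (dens(p0) + dens(1.0 - 1e-12))
    for i in range(1, steps):
        total += dens(p0 + i * h)
    return total * h

# Hypothetical phase II read-out: 9/30 responses under a flat prior gives
# posterior Beta(1 + 9, 1 + 21); ask how likely the response rate exceeds 0.2.
post = beta_tail(1 + 9, 1 + 21, p0=0.2)
print(round(post, 3))
```

A sceptical or enthusiastic prior would simply replace the flat Beta(1, 1) with a Beta distribution concentrated below or above the threshold, shifting this posterior probability accordingly.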
- Published
- 2016
- Full Text
- View/download PDF