37 results for "Friede, T"
Search Results
2. A conditional error function approach for subgroup selection in adaptive clinical trials
- Author
-
Friede, T., Parsons, N., and Stallard, N.
- Published
- 2012
- Full Text
- View/download PDF
3. Designing a seamless phase II/III clinical trial using early outcomes for treatment selection: An application in multiple sclerosis
- Author
-
Friede, T., Parsons, N., Stallard, N., Todd, S., Valdes Marquez, E., Chataway, J., and Nicholas, R.
- Published
- 2011
- Full Text
- View/download PDF
4. Correction
- Author
-
Stallard, N., Friede, T., Posch, M., Koenig, F., and Brannath, W.
- Published
- 2009
- Full Text
- View/download PDF
5. A comparison of methods for adaptive sample size adjustment.
- Author
-
Friede, Tim and Kieser, Meinhard
- Published
- 2001
- Full Text
- View/download PDF
6. Re-calculating the sample size in internal pilot study designs with control of the type I error rate.
- Author
-
Kieser, Meinhard and Friede, Tim
- Published
- 2000
- Full Text
- View/download PDF
7. Correction.
- Author
-
Stallard, N., Friede, T., Posch, M., Koenig, F., and Brannath, W.
- Abstract
We regret that there was an error in the computer program used to perform the calculations reported in 'Optimal choice of the number of treatments to be included in a clinical trial' by Stallard et al. (Statist. Med. 2009; 28:1321-1338). Although the general message of the paper does not change, we would like to present a correction of some of the numerical details. Corrected versions of the affected figures are given. Figure 3 is correct as given in the original paper, but shows the total sample size for the optimal design rather than the sample size per arm as stated in the legend. In the numerical example given in Section 4, when the two-point prior distribution is considered, the optimal design controlling the assurance is that with 85 patients in each of two experimental groups plus a control group. The optimal design controlling the conditional power also includes both of the experimental treatment arms and the control, but with a sample size of 50 per arm. For the bivariate normal prior, the total sample size for the optimal design is 81, so that the required sample size per group is 27.
- Published
- 2010
- Full Text
- View/download PDF
8. Summarizing empirical information on between-study heterogeneity for Bayesian random-effects meta-analysis.
- Author
-
Röver C, Sturtz S, Lilienthal J, Bender R, and Friede T
- Subjects
- Humans, Bayes Theorem, Data Interpretation, Statistical, Referral and Consultation
- Abstract
In Bayesian meta-analysis, the specification of prior probabilities for the between-study heterogeneity is commonly required, and is of particular benefit in situations where only a few studies are included. Among the considerations in the set-up of such prior distributions, the consultation of available empirical data on a set of relevant past analyses sometimes plays a role. How exactly to summarize historical data sensibly is not immediately obvious; in particular, the investigation of an empirical collection of heterogeneity estimates will not target the actual problem and will usually only be of limited use. The commonly used normal-normal hierarchical model for random-effects meta-analysis is extended to infer a heterogeneity prior. Using an example data set, we demonstrate how to fit a distribution to empirically observed heterogeneity data from a set of meta-analyses. Considerations also include the choice of a parametric distribution family. Here, we focus on simple and readily applicable approaches for translating these into (prior) probability distributions., (© 2023 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.)
- Published
- 2023
- Full Text
- View/download PDF
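Entry 8 concerns turning empirically observed heterogeneity values into a prior. A purely illustrative R sketch follows, with invented tau values; note that the paper argues that naively fitting a distribution to point estimates, as done here, ignores their estimation uncertainty, and extends the normal-normal hierarchical model instead.

```r
# Hypothetical heterogeneity (tau) estimates from a set of past meta-analyses
tau_hat <- c(0.08, 0.15, 0.22, 0.05, 0.31, 0.12, 0.19, 0.27, 0.10, 0.17)

# Fit a log-normal distribution by maximum likelihood
library(MASS)
fit <- fitdistr(tau_hat, densfun = "lognormal")
fit$estimate  # meanlog and sdlog of the fitted distribution

# The fitted distribution can then serve as a heterogeneity prior,
# e.g. evaluated on a grid of tau values:
tau_grid <- seq(0.01, 1, by = 0.01)
prior <- dlnorm(tau_grid, meanlog = fit$estimate["meanlog"],
                sdlog = fit$estimate["sdlog"])
```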
9. A straightforward meta-analysis approach for oncology phase I dose-finding studies.
- Author
-
Röver C, Ursino M, Friede T, and Zohar S
- Subjects
- Bayes Theorem, Computer Simulation, Dose-Response Relationship, Drug, Humans, Logistic Models, Maximum Tolerated Dose, Monte Carlo Method, Medical Oncology, Research Design
- Abstract
Early-phase (phase I) clinical studies aim at investigating the safety and the underlying dose-toxicity relationship of a drug or combination. While little may still be known about the compound's properties, it is crucial to consider quantitative information available from any studies that may have been conducted previously on the same drug. A meta-analytic approach has the advantages of being able to properly account for between-study heterogeneity, and it may be readily extended to prediction or shrinkage applications. Here we propose a simple and robust two-stage approach for the estimation of maximum tolerated dose(s) utilizing penalized logistic regression and Bayesian random-effects meta-analysis methodology. Implementation is facilitated using standard R packages. The properties of the proposed methods are investigated in Monte Carlo simulations. The investigations are motivated and illustrated by two examples from oncology., (© 2022 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.)
- Published
- 2022
- Full Text
- View/download PDF
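A hedged sketch of a generic two-stage approach in the spirit of entry 9, with invented toy data: stage 1 fits per-study logistic dose-toxicity curves, stage 2 pools the dose slopes meta-analytically. For brevity this uses plain rather than penalized logistic regression and frequentist rather than Bayesian pooling.

```r
library(metafor)

# Invented per-study dose-toxicity data: dose, number treated, number of DLTs
studies <- list(
  data.frame(dose = c(1, 2, 4, 8), n = c(3, 3, 6, 3), dlt = c(0, 0, 1, 2)),
  data.frame(dose = c(1, 2, 4),    n = c(3, 6, 6),    dlt = c(0, 1, 2))
)

# Stage 1: logistic regression of DLT probability on log-dose within each study
fits <- lapply(studies, function(d) {
  glm(cbind(dlt, n - dlt) ~ log(dose), family = binomial, data = d)
})
yi <- sapply(fits, function(f) coef(f)["log(dose)"])               # slope estimates
vi <- sapply(fits, function(f) vcov(f)["log(dose)", "log(dose)"])  # their variances

# Stage 2: random-effects meta-analysis of the dose-toxicity slopes
rma(yi = yi, vi = vi)
```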
10. Sample size calculation for the augmented logrank test in randomized clinical trials.
- Author
-
Hattori S, Komukai S, and Friede T
- Subjects
- Humans, Proportional Hazards Models, Randomized Controlled Trials as Topic, Sample Size, Survival Analysis, Research Design
- Abstract
In randomized clinical trials, incorporating baseline covariates can improve the power of hypothesis tests for treatment effects. For survival endpoints, the Cox proportional hazards model with baseline covariates as explanatory variables can improve on the standard logrank test in power. Although this has long been recognized, the adjustment is not commonly used as the primary analysis; instead, the logrank test followed by estimation of the hazard ratio between treatment groups is often used. By projecting the score function for the Cox proportional hazards model onto a space of covariates, the logrank test can be made more powerful. We derive a power formula for this augmented logrank test under the same setting as the widely used power formula for the logrank test and propose a simple strategy for sizing randomized clinical trials utilizing historical data on the control treatment. Through numerical studies, the proposed procedure was found to have the potential to reduce the sample size substantially as compared to the standard logrank test. A concern with utilizing historical data is that they might not reflect the data structure of the study being designed, so that the calculated sample size might not be accurate. Since our power formula is applicable to datasets pooled across the treatment arms, the validity of the power calculation at the design stage can be checked in blinded reviews., (© 2022 John Wiley & Sons Ltd.)
- Published
- 2022
- Full Text
- View/download PDF
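Entry 10's power formula is not reproduced in the abstract. As a reference point, the standard Schoenfeld formula for the number of events required by the unadjusted logrank test can be sketched as follows; the augmented test in the paper is designed to reduce the required size relative to this baseline.

```r
# Schoenfeld's formula: required number of events for the (unadjusted) logrank test
logrank_events <- function(hr, alpha = 0.05, power = 0.8, p_alloc = 0.5) {
  za <- qnorm(1 - alpha / 2)
  zb <- qnorm(power)
  (za + zb)^2 / (p_alloc * (1 - p_alloc) * log(hr)^2)
}

# e.g. hazard ratio 0.7, 1:1 allocation, 80% power at two-sided 5%
ceiling(logrank_events(hr = 0.7))  # about 247 events
```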
11. A Bayesian time-to-event pharmacokinetic model for phase I dose-escalation trials with multiple schedules.
- Author
-
Günhan BK, Weber S, and Friede T
- Subjects
- Bayes Theorem, Computer Simulation, Dose-Response Relationship, Drug, Humans, Maximum Tolerated Dose, Research Design
- Abstract
Phase I dose-escalation trials must be guided by a safety model in order to avoid exposing patients to unacceptably high risk of toxicities. Traditionally, these trials are based on one type of schedule. In more recent practice, however, there is often a need to consider more than one schedule, which means that in addition to the dose itself, the schedule needs to be varied in the trial. Hence, the aim is finding an acceptable dose-schedule combination. However, most established methods for dose-escalation trials are designed to escalate the dose only, and ad hoc choices must be made to adapt these to the more complicated setting of finding an acceptable dose-schedule combination. In this article, we introduce a Bayesian time-to-event model which explicitly takes the dose amount and schedule into account through the use of pharmacokinetic principles. The model uses a time-varying exposure measure to account for the risk of a dose-limiting toxicity over time. The dose-schedule decisions are informed by an escalation with overdose control criterion. The model is formulated using interpretable parameters which facilitates the specification of priors. In a simulation study, we compared the proposed method with an existing method. The simulation study demonstrates that the proposed method yields similar or better results compared with an existing method in terms of recommending acceptable dose-schedule combinations, yet reduces the number of patients enrolled in most scenarios. The R and Stan code to implement the proposed method is publicly available from GitHub ( https://github.com/gunhanb/TITEPK_code)., (© 2020 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.)
- Published
- 2020
- Full Text
- View/download PDF
12. Blinded continuous information monitoring of recurrent event endpoints with time trends in clinical trials.
- Author
-
Mütze T, Salem S, Benda N, Schmidli H, and Friede T
- Subjects
- Binomial Distribution, Child, Humans, Models, Statistical, Sample Size, Time, Multiple Sclerosis drug therapy, Research Design
- Abstract
Blinded sample size re-estimation and information monitoring based on blinded data have been suggested to mitigate risks due to planning uncertainties regarding nuisance parameters. Motivated by a randomized controlled trial in pediatric multiple sclerosis (MS), a continuous monitoring procedure for overdispersed count data was proposed recently. However, this procedure assumed constant event rates, an assumption often not met in practice. Here we extend the procedure to accommodate time trends in the event rates, considering two blinded approaches: (a) the mixture approach, modeling the number of events by a mixture of two negative binomial distributions, and (b) the lumping approach, approximating the marginal distribution of the event counts by a negative binomial distribution. Through simulations, the operating characteristics of the proposed procedures are investigated under decreasing event rates. We find that the type I error rate is not inflated relevantly by either of the monitoring procedures, with the exception of strong time dependencies, where the procedure assuming constant rates exhibits some inflation. Furthermore, the procedure accommodating time trends has generally favorable power properties compared with the procedure based on constant rates, which often stops too late. The proposed method is illustrated by the clinical trial in pediatric MS., (© 2020 John Wiley & Sons Ltd.)
- Published
- 2020
- Full Text
- View/download PDF
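A minimal sketch of the mixture approach described in entry 12: under 1:1 randomization, the blinded pooled counts are modeled as a 50:50 mixture of two negative binomials with a common shape, and the nuisance parameters are recovered by maximizing the blinded likelihood. Counts and starting values below are invented.

```r
# Blinded log-likelihood under a 50:50 mixture of two negative binomials
# x: pooled (blinded) event counts; mu1, mu2: group means; size: NB shape
mix_nb_loglik <- function(par, x) {
  mu1 <- exp(par[1]); mu2 <- exp(par[2]); size <- exp(par[3])
  sum(log(0.5 * dnbinom(x, size = size, mu = mu1) +
          0.5 * dnbinom(x, size = size, mu = mu2)))
}

# Maximize over the blinded data to recover the nuisance parameters
x <- c(0, 2, 1, 0, 3, 5, 1, 0, 2, 4, 0, 1, 2, 6, 1)  # invented pooled counts
optim(par = log(c(1, 2, 1)), fn = mix_nb_loglik, x = x,
      control = list(fnscale = -1))$par  # log(mu1), log(mu2), log(size)
```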
13. Blinded sample size reestimation for negative binomial regression with baseline adjustment.
- Author
-
Zapf A, Asendorf T, Anten C, Mütze T, and Friede T
- Subjects
- Humans, Likelihood Functions, Recurrence, Sample Size, Models, Statistical, Research Design
- Abstract
In randomized clinical trials, it is standard to include baseline variables in the primary analysis as covariates, as recommended by international guidelines. For the study design to be consistent with the analysis, these variables should also be taken into account when calculating the sample size to appropriately power the trial. Because assumptions made in the sample size calculation are always subject to some degree of uncertainty, a blinded sample size reestimation (BSSR) is recommended to adjust the sample size when necessary. In this article, we introduce a BSSR approach for count data outcomes with baseline covariates. Count outcomes are common in clinical trials; examples include the number of exacerbations in asthma and chronic obstructive pulmonary disease, relapses and scan lesions in multiple sclerosis, and seizures in epilepsy. The introduced methods are based on Wald and likelihood ratio test statistics. The approaches are illustrated by a clinical trial in epilepsy. The BSSR procedures proposed are compared in a Monte Carlo simulation study and shown to yield power values close to the target while not inflating the type I error rate., (© 2020 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.)
- Published
- 2020
- Full Text
- View/download PDF
14. Adaptive trial designs in diagnostic accuracy research.
- Author
-
Zapf A, Stark M, Gerke O, Ehret C, Benda N, Bossuyt P, Deeks J, Reitsma J, Alonzo T, and Friede T
- Subjects
- Humans, Sample Size, Adaptive Clinical Trials as Topic, Medical Futility, Research Design
- Abstract
The aim of diagnostic accuracy studies is to evaluate how accurately a diagnostic test can distinguish diseased from nondiseased individuals. Depending on the research question, different study designs and accuracy measures are appropriate. As the prior knowledge in the planning phase is often very limited, modifications of design aspects such as the sample size during the ongoing trial could increase the efficiency of diagnostic trials. In intervention studies, group sequential and adaptive designs are well established. Such designs are characterized by preplanned interim analyses, giving the opportunity to stop early for efficacy or futility or to modify elements of the study design. In contrast, in diagnostic accuracy studies, such flexible designs are less common, even though they are just as important as in intervention studies. However, diagnostic accuracy studies have specific features, which may require adaptations of the statistical methods or may lead to specific advantages or limitations of sequential and adaptive designs. In this article, we summarize the current status of methodological research and applications of flexible designs in diagnostic accuracy research. Furthermore, we indicate and advocate future development of adaptive design methodology and its use in diagnostic accuracy trials from an interdisciplinary viewpoint. Here, 'interdisciplinary viewpoint' refers to the collaboration of experts from academic and nonacademic research., (© 2019 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.)
- Published
- 2020
- Full Text
- View/download PDF
15. A conditional error function approach for adaptive enrichment designs with continuous endpoints.
- Author
-
Placzek M and Friede T
- Subjects
- Biomarkers, Computer Simulation, Endpoint Determination, Humans, Hypertension, Pulmonary therapy, Statistical Distributions, Clinical Trials as Topic, Models, Statistical, Research Design
- Abstract
Adaptive enrichment designs offer an efficient and flexible way to demonstrate the efficacy of a treatment in a clinically defined full population or in, eg, biomarker-defined subpopulations while controlling the family-wise Type I error rate in the strong sense. Frequently used testing strategies in designs with two or more stages include the combination test and the conditional error function approach. Here, we focus on the latter and present some extensions. In contrast to previous work, we allow for multiple subgroups rather than one subgroup only. For nested as well as nonoverlapping subgroups with normally distributed endpoints, we explore the effect of estimating the variances in the subpopulations. Instead of using a normal approximation, we derive new t-distribution-based methods for two different scenarios. First, in the case of equal variances across the subpopulations, we present exact results using a multivariate t-distribution. Second, in the case of potentially varying variances across subgroups, we provide some improved approximations compared to the normal approximation. The performance of the proposed conditional error function approaches is assessed and compared to the combination test in a simulation study. The proposed methods are motivated by an example in pulmonary arterial hypertension., (© 2019 John Wiley & Sons, Ltd.)
- Published
- 2019
- Full Text
- View/download PDF
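Entry 15 relies on multivariate t probabilities for exact critical values when variances are equal across subpopulations. As an illustration of the computational building block (not the paper's full procedure), mvtnorm::pmvt evaluates such probabilities; for a nested subgroup comprising a fraction tau of the sample, the correlation between the two test statistics is sqrt(tau) under standard assumptions. All numbers below are invented.

```r
library(mvtnorm)

# Joint tail probability for test statistics of the full population
# and a nested subgroup
tau  <- 0.4                         # subgroup prevalence (fraction of the sample)
corr <- matrix(c(1, sqrt(tau),
                 sqrt(tau), 1), nrow = 2)
df   <- 48                          # residual degrees of freedom (invented)
crit <- 2.24                        # some candidate critical value

# P(T_full > crit or T_sub > crit) = 1 - P(both <= crit)
1 - pmvt(upper = c(crit, crit), corr = corr, df = df)[1]
```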
16. Sample size re-estimation for clinical trials with longitudinal negative binomial counts including time trends.
- Author
-
Asendorf T, Henderson R, Schmidli H, and Friede T
- Subjects
- Computer Simulation, Data Interpretation, Statistical, Humans, Magnetic Resonance Imaging, Multiple Sclerosis diagnostic imaging, Time, Binomial Distribution, Clinical Trials as Topic methods, Sample Size
- Abstract
In some diseases, such as multiple sclerosis, lesion counts obtained from magnetic resonance imaging (MRI) are used as markers of disease progression. This leads to longitudinal, and typically overdispersed, count data outcomes in clinical trials. Models for such data invariably include a number of nuisance parameters, which can be difficult to specify at the planning stage, leading to considerable uncertainty in sample size specification. Consequently, blinded sample size re-estimation procedures are used, allowing for an adjustment of the sample size within an ongoing trial by estimating relevant nuisance parameters at an interim point, without compromising trial integrity. To date, the methods available for re-estimation have required an assumption that the mean count is time-constant within patients. We propose a new modeling approach that maintains the advantages of established procedures but allows for general underlying and treatment-specific time trends in the mean response. A simulation study is conducted to assess the effectiveness of blinded sample size re-estimation methods over fixed designs. Sample sizes attained through blinded sample size re-estimation procedures are shown to maintain the desired study power without inflating the Type I error rate and the procedure is demonstrated on MRI data from a recent study in multiple sclerosis., (© 2018 John Wiley & Sons, Ltd.)
- Published
- 2019
- Full Text
- View/download PDF
17. Model averaging for robust extrapolation in evidence synthesis.
- Author
-
Röver C, Wandel S, and Friede T
- Subjects
- Adolescent, Child, Graft Rejection prevention & control, Humans, Interleukin-2 Receptor alpha Subunit antagonists & inhibitors, Liver Transplantation methods, Meta-Analysis as Topic, Migraine Disorders drug therapy, Treatment Outcome, Data Interpretation, Statistical, Models, Statistical
- Abstract
Extrapolation from a source to a target, eg, from adults to children, is a promising approach to utilizing external information when data are sparse. In the context of meta-analyses, one is commonly faced with a small number of studies, whereas potentially relevant additional information may also be available. Here, we describe a simple extrapolation strategy using heavy-tailed mixture priors for effect estimation in meta-analysis, which effectively results in a model-averaging technique. The described method is robust in the sense that a potential prior-data conflict, ie, a discrepancy between source and target data, is explicitly anticipated. The aim of this paper is to develop a solution for this particular application, to showcase the ease of implementation by providing R code, and to demonstrate the robustness of the general approach in simulations., (© 2018 John Wiley & Sons, Ltd.)
- Published
- 2019
- Full Text
- View/download PDF
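A grid-based numeric sketch of the robust mixture-prior idea of entry 17 (not the authors' published R code; all numbers invented): the posterior for an effect under a mixture of an informative source-based prior and a vague heavy-tailed component.

```r
# Target data: one estimated effect with its standard error (invented)
y <- 0.9; se <- 0.35

# Mixture prior: informative source-based component + heavy-tailed component
w <- 0.8  # prior weight on the informative component
prior <- function(theta) {
  w * dnorm(theta, mean = 0.3, sd = 0.2) +   # source-based component
  (1 - w) * dt(theta / 2, df = 3) / 2        # vague scaled-t component
}

# Grid-based posterior: prior times likelihood, normalized
theta <- seq(-2, 3, by = 0.001)
post  <- prior(theta) * dnorm(y, mean = theta, sd = se)
post  <- post / sum(post * 0.001)

# The heavy tail lets the posterior mean follow the data under conflict
sum(theta * post * 0.001)
```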
18. Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
- Author
-
Mütze T and Friede T
- Subjects
- Computer Simulation, Humans, Monte Carlo Method, Pilot Projects, Placebos, Reproducibility of Results, Research Design, Clinical Trials as Topic methods, Models, Statistical, Sample Size
- Abstract
In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate., (Copyright © 2017 John Wiley & Sons, Ltd.)
- Published
- 2017
- Full Text
- View/download PDF
19. A studentized permutation test for three-arm trials in the 'gold standard' design.
- Author
-
Mütze T, Konietschke F, Munk A, and Friede T
- Subjects
- Clinical Trials as Topic standards, Equivalence Trials as Topic, Humans, Monte Carlo Method, Poisson Distribution, Sample Size, Statistical Distributions, Statistics as Topic, Clinical Trials as Topic methods
- Abstract
The 'gold standard' design for three-arm trials refers to trials with an active control and a placebo control in addition to the experimental treatment group. This trial design is recommended when ethically justifiable, as it allows the simultaneous comparison of experimental treatment, active control, and placebo. Parametric testing methods have been studied extensively in recent years. However, these methods often tend to be liberal or conservative when distributional assumptions are not met, particularly with small sample sizes. In this article, we introduce a studentized permutation test for testing non-inferiority and superiority of the experimental treatment compared with the active control in three-arm trials in the 'gold standard' design. The performance of the studentized permutation test for finite sample sizes is assessed in a Monte Carlo simulation study under various parameter constellations. Emphasis is put on whether the studentized permutation test meets the target significance level. For comparison purposes, commonly used Wald-type tests, which do not make any distributional assumptions, are included in the simulation study. The simulation study shows that the presented studentized permutation test for assessing non-inferiority in three-arm trials in the 'gold standard' design outperforms its competitors, for instance the test based on a quasi-Poisson model, for count data. The methods discussed in this paper are implemented in the R package ThreeArmedTrials, which is available on the Comprehensive R Archive Network (CRAN)., (Copyright © 2016 John Wiley & Sons, Ltd.)
- Published
- 2017
- Full Text
- View/download PDF
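The core mechanics of a studentized permutation test as in entry 19 can be sketched in a few lines of R: permute the group labels and compare the observed studentized statistic with its permutation distribution. Two groups are shown for brevity; the paper treats the full three-arm non-inferiority setting.

```r
# Studentized (Welch-type) two-sample statistic
stud_stat <- function(x, y) {
  (mean(x) - mean(y)) / sqrt(var(x) / length(x) + var(y) / length(y))
}

studentized_perm_test <- function(x, y, B = 10000) {
  t_obs <- stud_stat(x, y)
  z <- c(x, y); n <- length(x)
  t_perm <- replicate(B, {
    idx <- sample(length(z), n)    # random relabeling of the pooled sample
    stud_stat(z[idx], z[-idx])
  })
  mean(abs(t_perm) >= abs(t_obs))  # two-sided permutation p-value
}

set.seed(1)
studentized_perm_test(rpois(20, 3), rpois(20, 4))
```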
20. Design and analysis of three-arm trials with negative binomially distributed endpoints.
- Author
-
Mütze T, Munk A, and Friede T
- Subjects
- Computer Simulation, Dimethyl Fumarate therapeutic use, Humans, Immunosuppressive Agents therapeutic use, Magnetic Resonance Imaging, Monte Carlo Method, Multiple Sclerosis drug therapy, Multiple Sclerosis pathology, Placebos, Sample Size, Clinical Trials as Topic, Endpoint Determination, Models, Statistical, Research Design
- Abstract
A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations will be derived. The performance of the proposed test will be assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN., (Copyright © 2015 John Wiley & Sons, Ltd.)
- Published
- 2016
- Full Text
- View/download PDF
21. Flexible selection of a single treatment incorporating short-term endpoint information in a phase II/III clinical trial.
- Author
-
Stallard N, Kunz CU, Todd S, Parsons N, and Friede T
- Subjects
- Alzheimer Disease, Clinical Trials, Phase II as Topic methods, Clinical Trials, Phase III as Topic methods, Computer Simulation, Endpoint Determination methods, Humans, Research Design, Clinical Trials, Phase II as Topic statistics & numerical data, Clinical Trials, Phase III as Topic statistics & numerical data, Endpoint Determination statistics & numerical data
- Abstract
Seamless phase II/III clinical trials in which an experimental treatment is selected at an interim analysis have been the focus of much recent research interest. Many of the methods proposed are based on the group sequential approach. This paper considers designs of this type in which the treatment selection can be based on short-term endpoint information for more patients than have primary endpoint data available. We show that in such a case, the familywise type I error rate may be inflated if previously proposed group sequential methods are used and the treatment selection rule is not specified in advance. A method is proposed to avoid this inflation by considering the treatment selection that maximises the conditional error given the data available at the interim analysis. A simulation study is reported that illustrates the type I error rate inflation and compares the power of the new approach with two other methods: a combination testing approach and a group sequential method that does not use the short-term endpoint data, both of which also strongly control the type I error rate. The new method is also illustrated through application to a study in Alzheimer's disease., (© 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.)
- Published
- 2015
- Full Text
- View/download PDF
22. Spline-based procedures for dose-finding studies with active control.
- Author
-
Helms HJ, Benda N, Zinserling J, Kneib T, and Friede T
- Subjects
- Bias, Computer Simulation, Confidence Intervals, Endpoint Determination methods, Humans, Likelihood Functions, Logistic Models, Regression Analysis, Clinical Trials, Phase II as Topic methods, Dose-Response Relationship, Drug, Research Design
- Abstract
In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose-response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose-response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose-response function. Furthermore, the construction of a spline-based bootstrap CI is described. The estimator and CI are compared with other flexible and parametric methods, such as linear spline interpolation as well as maximum likelihood regression, in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with a focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias-minimal designs., (© 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.)
- Published
- 2015
- Full Text
- View/download PDF
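The target dose estimation of entry 22 amounts to intersecting a fitted dose-response curve with the active control's expected efficacy. A hedged base-R sketch with invented means (the paper additionally covers bootstrap confidence intervals and bias-minimizing designs):

```r
# Invented mean responses at the studied doses, and the active control's mean
doses     <- c(0, 10, 25, 50, 100)
mean_resp <- c(0.1, 0.35, 0.6, 0.8, 0.9)
control   <- 0.7

# Cubic spline through the observed means
f <- splinefun(doses, mean_resp, method = "natural")

# Dose d* with f(d*) = control efficacy (unique here since f is monotone)
dstar <- uniroot(function(d) f(d) - control, interval = c(0, 100))$root
dstar
```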
23. Guest editors' introduction. Preface.
- Author
-
Friede T, Henderson R, and Hougaard P
- Subjects
- Epidemiologic Methods, Genes, Germany, Humans, Models, Statistical, Biostatistics, Societies, Scientific
- Published
- 2014
- Full Text
- View/download PDF
24. Robustness of methods for blinded sample size re-estimation with overdispersed count data.
- Author
-
Schneider S, Schmidli H, and Friede T
- Subjects
- Computer Simulation, Humans, Pilot Projects, Algorithms, Models, Statistical, Randomized Controlled Trials as Topic methods, Sample Size
- Abstract
Counts of events are increasingly common as primary endpoints in randomized clinical trials. With between-patient heterogeneity leading to variances in excess of the mean (referred to as overdispersion), statistical models reflecting this heterogeneity by mixtures of Poisson distributions are frequently employed. Sample size calculation in the planning of such trials requires knowledge of the nuisance parameters, that is, the control (or overall) event rate and the overdispersion parameter. Usually, there is only little prior knowledge regarding these parameters in the design phase, resulting in considerable uncertainty regarding the sample size. In this situation internal pilot studies have been found very useful, and recently several blinded procedures for sample size re-estimation have been proposed for overdispersed count data, one of which is based on an EM algorithm. In this paper we investigate the EM-algorithm-based procedure with respect to aspects of its implementation by studying the algorithm's dependence on the choice of convergence criterion, and find that the procedure is sensitive to the choice of the stopping criterion in scenarios relevant to clinical practice. We also compare the EM-based procedure to other competing procedures regarding their operating characteristics, such as sample size distribution and power. Furthermore, the robustness of these procedures to deviations from the model assumptions is explored. We find that some of the procedures are robust to at least moderate deviations. The results are illustrated using data from the US National Heart, Lung and Blood Institute sponsored Asymptomatic Cardiac Ischemia Pilot study., (Copyright © 2013 John Wiley & Sons, Ltd.)
- Published
- 2013
- Full Text
- View/download PDF
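For orientation, a compact EM algorithm for a 50:50 two-component mixture fitted to blinded counts, the kind of procedure whose stopping-rule sensitivity entry 24 investigates (plain Poisson components rather than overdispersed ones, for brevity; data invented):

```r
# EM for a 50:50 mixture of two Poissons fitted to blinded pooled counts
em_blinded <- function(x, lambda = c(1, 2), tol = 1e-8, maxit = 1000) {
  for (it in seq_len(maxit)) {
    # E-step: posterior probability of group 1 membership for each count
    d1 <- 0.5 * dpois(x, lambda[1]); d2 <- 0.5 * dpois(x, lambda[2])
    w  <- d1 / (d1 + d2)
    # M-step: weighted rate updates
    new <- c(sum(w * x) / sum(w), sum((1 - w) * x) / sum(1 - w))
    if (max(abs(new - lambda)) < tol) break  # stopping rule; its choice matters
    lambda <- new
  }
  lambda
}

set.seed(4)
x <- c(rpois(50, 1.2), rpois(50, 2.0))  # invented blinded counts
em_blinded(x)
```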
25. Design and semiparametric analysis of non-inferiority trials with active and placebo control for censored time-to-event data.
- Author
-
Kombrink K, Munk A, and Friede T
- Subjects
- Computer Simulation, Depressive Disorder, Major drug therapy, Humans, Research Design, Sample Size, Algorithms, Data Interpretation, Statistical, Models, Statistical, Randomized Controlled Trials as Topic methods
- Abstract
The clinical trial design including a test treatment, an active control, and a placebo is called the gold standard design. In this paper, we develop a statistical method for planning and evaluating non-inferiority trials with the gold standard design for right-censored time-to-event data. We consider both loss to follow-up and administrative censoring. We present a semiparametric approach that only assumes the proportionality of the hazard functions. In particular, we develop an algorithm for calculating the minimal total sample size and its optimal allocation to treatment groups such that a desired power can be attained for a specific parameter constellation under the alternative. For the purpose of sample size calculation, we assume the endpoints to be Weibull distributed. By means of simulations, we investigate the actual type I error rate, power, and the accuracy of the calculated sample sizes. Finally, we compare our procedure with a previously proposed procedure assuming exponentially distributed event times. To illustrate our method, we consider a double-blinded, randomized, active- and placebo-controlled trial in major depression., (Copyright © 2013 John Wiley & Sons, Ltd.)
- Published
- 2013
- Full Text
- View/download PDF
26. Assessment of statistical significance and clinical relevance.
- Author
-
Kieser M, Friede T, and Gondan M
- Subjects
- Biosurveillance, Drug Discovery, Humans, Models, Statistical, Multiple Sclerosis drug therapy, Multiple Sclerosis physiopathology, Sample Size, Clinical Trials as Topic statistics & numerical data
- Abstract
In drug development, it is well accepted that a successful study will demonstrate not only a statistically significant result but also a clinically relevant effect size. Whereas standard hypothesis tests are used to demonstrate the former, it is less clear how the latter should be established. In the first part of this paper, we consider the responder analysis approach and study the performance of locally optimal rank tests when the outcome distribution is a mixture of responder and non-responder distributions. We find that these tests are quite sensitive to their planning assumptions and therefore have no real advantage over standard tests such as the t-test and the Wilcoxon-Mann-Whitney test, which perform well overall and can be recommended for applications. In the second part, we present a new approach to the assessment of clinical relevance based on the so-called relative effect (or probabilistic index) and derive appropriate sample size formulae for the design of studies aiming at demonstrating both a statistically significant and clinically relevant effect. Referring to recent studies in multiple sclerosis, we discuss potential issues in the application of this approach., (Copyright © 2012 John Wiley & Sons, Ltd.)
- Published
- 2013
- Full Text
- View/download PDF
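The relative effect (probabilistic index) in entry 26 is P(X < Y) + 0.5 P(X = Y), the probability that a random observation under treatment exceeds one under control. A direct estimator, with invented data:

```r
# Estimated relative effect: P(X < Y) + 0.5 * P(X = Y)
relative_effect <- function(x, y) {
  mean(outer(x, y, "<")) + 0.5 * mean(outer(x, y, "=="))
}

set.seed(2)
relative_effect(x = rnorm(30, 0), y = rnorm(30, 0.5))  # > 0.5 favors y
```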
27. Considerations on what constitutes a 'qualified statistician' in regulatory guidelines.
- Author
-
Gerlinger C, Edler L, Friede T, Kieser M, Nakas CT, Schumacher M, Seldrup J, and Victor N
- Subjects
- Biostatistics, Humans, Licensure standards, Licensure statistics & numerical data, Practice Guidelines as Topic, Statistics as Topic education, Clinical Trials as Topic legislation & jurisprudence, Clinical Trials as Topic statistics & numerical data, Research Personnel
- Abstract
International regulatory guidelines require that a 'qualified statistician' takes responsibility for the statistical aspects of a clinical trial used for drug licensing. No consensus on what constitutes a 'qualified statistician' appears to have been developed so far. The International Society for Clinical Biostatistics is issuing this reflection paper in order to stimulate a discussion on the concept., (Copyright © 2011 John Wiley & Sons, Ltd.)
- Published
- 2012
- Full Text
- View/download PDF
28. Blinded sample size reestimation with count data: methods and applications in multiple sclerosis.
- Author
-
Friede T and Schmidli H
- Subjects
- Data Interpretation, Statistical, Double-Blind Method, Epidemiologic Research Design, Humans, Multiple Sclerosis, Relapsing-Remitting pathology, Pilot Projects, Poisson Distribution, Single-Blind Method, Time Factors, Binomial Distribution, Clinical Trials as Topic methods, Endpoint Determination methods, Multiple Sclerosis, Relapsing-Remitting drug therapy, Sample Size
- Abstract
Sample size estimation in clinical trials depends critically on nuisance parameters, such as variances or overall event rates, which have to be guessed or estimated from previous studies in the planning phase of a trial. Blinded sample size reestimation estimates these nuisance parameters based on blinded data from the ongoing trial, and allows the sample size to be adjusted based on the acquired information. In the present paper, this methodology is developed for clinical trials with count data as the primary endpoint. In multiple sclerosis such endpoints are commonly used in phase 2 trials (lesion counts in magnetic resonance imaging (MRI)) and phase 3 trials (relapse counts). Sample size adjustment formulas are presented both for Poisson-distributed data and for overdispersed Poisson-distributed data. The latter arise from sometimes considerable between-patient heterogeneity, which can be observed in particular in MRI lesion counts. The operating characteristics of the procedure are evaluated by simulations, and recommendations on how to choose the size of the internal pilot study are given. The results suggest that blinded sample size reestimation for count data maintains the required power without an increase in the type I error.
- Published
- 2010
- Full Text
- View/download PDF
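For Poisson-distributed counts as in entry 28, blinded re-estimation needs only the overall event rate. A hedged sketch using the standard two-sample Poisson rate formula (not necessarily the paper's exact adjustment formula; interim counts invented):

```r
# Blinded sample size re-estimation for Poisson counts (1:1 allocation)
# lambda_bar: blinded overall rate estimate; theta: assumed rate ratio; t: follow-up
n_per_group <- function(lambda_bar, theta, t = 1, alpha = 0.05, power = 0.8) {
  za <- qnorm(1 - alpha / 2); zb <- qnorm(power)
  lambda1 <- 2 * lambda_bar / (1 + theta)  # rates consistent with the blinded rate
  lambda2 <- theta * lambda1
  ceiling((za + zb)^2 * (lambda1 + lambda2) / (t * (lambda1 - lambda2)^2))
}

# Interim: pooled counts from the blinded internal pilot (invented)
pooled <- c(1, 0, 2, 1, 3, 0, 1, 2, 0, 1)
n_per_group(lambda_bar = mean(pooled), theta = 0.7)
```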
29. Blinded assessment of treatment effects utilizing information about the randomization block length.
- Author
-
Miller F, Friede T, and Kieser M
- Subjects
- Bias, Data Interpretation, Statistical, Depressive Disorder, Major drug therapy, Double-Blind Method, Humans, Hypericum, Likelihood Functions, Models, Statistical, Phytotherapy, Randomized Controlled Trials as Topic methods, Treatment Outcome, Biometry methods, Random Allocation, Randomized Controlled Trials as Topic statistics & numerical data
- Abstract
It is essential for the integrity of double-blind clinical trials that during the study course the individual treatment allocations of the patients as well as the treatment effect remain unknown to any involved person. Recently, methods have been proposed for which it was claimed that they would allow reliable estimation of the treatment effect based on blinded data by using information about the block length of the randomization procedure. If this were true, it would be difficult to preserve blindness without taking further measures. The suggested procedures apply to continuous data. We investigate the properties of these methods thoroughly by repeated simulations per scenario. Furthermore, a method for blinded treatment effect estimation in case of binary data is proposed, and blinded tests for treatment group differences are developed both for continuous and binary data. We report results of comprehensive simulation studies that investigate the features of these procedures. It is shown that for sample sizes and treatment effects which are typical in clinical trials, no reliable inference can be made on the treatment group difference, owing to the bias and imprecision of the blinded estimates., (© 2009 John Wiley & Sons, Ltd.)
- Published
- 2009
- Full Text
- View/download PDF
30. Optimal choice of the number of treatments to be included in a clinical trial.
- Author
-
Stallard N, Posch M, Friede T, Koenig F, and Brannath W
- Subjects
- Algorithms, Alzheimer Disease drug therapy, Bayes Theorem, Biometry, Decision Theory, Humans, Models, Statistical, Sample Size, Clinical Trials, Phase III as Topic statistics & numerical data
- Abstract
It is common for a number of potentially effective treatments to be available for clinical evaluation. Limitations on resources mean that this inevitably leads to a decision as to how many, and which, treatments should be considered for inclusion in a clinical trial. This paper considers the problem of selection of possible treatments for inclusion in a phase III clinical trial. We assume that treatments will be compared using a standard frequentist hypothesis test, and propose a Bayesian decision-theoretic approach that leads to minimization of the total sample size of the trial subject to controlling the familywise type I error rate and the expected probability of rejecting at least one null hypothesis. The method is illustrated in the simplest situation, in which two experimental treatments could be included in the clinical trial, exploring the levels of evidence that are required to lead to an optimal trial that includes one or both of these treatments., (John Wiley & Sons, Ltd)
- Published
- 2009
- Full Text
- View/download PDF
31. A group-sequential design for clinical trials with treatment selection.
- Author
-
Stallard N and Friede T
- Subjects
- Clinical Trials, Phase II as Topic statistics & numerical data, Clinical Trials, Phase III as Topic statistics & numerical data, Controlled Clinical Trials as Topic statistics & numerical data, Humans, Models, Statistical, Biometry methods, Clinical Trials as Topic statistics & numerical data
- Abstract
A group-sequential design for clinical trials that involve treatment selection was proposed by Stallard and Todd (Statist. Med. 2003; 22:689-703). In this design, the best among a number of experimental treatments is selected on the basis of data observed at the first of a series of interim analyses. This experimental treatment then continues together with the control treatment to be assessed in one or more further analyses. The method was extended by Kelly et al. (J. Biopharm. Statist. 2005; 15:641-658) to allow more than one experimental treatment to continue beyond the first interim analysis. This design controls the familywise type I error rate under the global null hypothesis, that is in the weak sense, but may not strongly control the error rate, particularly if the treatments selected are not the best-performing ones. In some cases, for example when additional safety data are available, the restriction that the best-performing treatments continue may be unreasonable. This paper describes an extension of the approach of Stallard and Todd that enables construction of a group-sequential design for comparison of several experimental treatments with a control treatment. The new method controls the type I error rate in the strong sense if the number of treatments included at each stage is specified in advance, and is indicated by simulation studies to be conservative when the number of treatments is chosen based on the observed data in a practically relevant way.
- Published
- 2008
- Full Text
- View/download PDF
32. Planning and analysis of three-arm non-inferiority trials with binary endpoints.
- Author
-
Kieser M and Friede T
- Subjects
- Data Interpretation, Statistical, Depression drug therapy, Duloxetine Hydrochloride, Humans, Paroxetine administration & dosage, Sample Size, Selective Serotonin Reuptake Inhibitors administration & dosage, Therapeutic Equivalency, Thiophenes administration & dosage, Controlled Clinical Trials as Topic methods, Research Design, Statistics as Topic methods
- Abstract
Three-arm trials including an experimental treatment, an active control and a placebo group are frequently preferred for the assessment of non-inferiority. In contrast to two-arm non-inferiority studies, these designs allow a direct proof of efficacy of a new treatment by comparison with placebo. As a further advantage, the test problem for establishing non-inferiority can be formulated in such a way that rejection of the null hypothesis assures that a pre-defined portion of the (unknown) effect the reference shows versus placebo is preserved by the treatment under investigation. We present statistical methods for this study design and the situation of a binary outcome variable. Asymptotic test procedures are given and their actual type I error rates are calculated. Approximate sample size formulae are derived and their accuracy is discussed. Furthermore, the question of optimal allocation of the total sample size is considered. Power properties of the testing strategy including a pre-test for assay sensitivity are presented. The derived methods are illustrated by application to a clinical trial in depression., (Copyright (c) 2006 John Wiley & Sons, Ltd.)
- Published
- 2007
- Full Text
- View/download PDF
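The retention-of-effect test problem in entry 32 can be written as H0: pE - pP <= theta * (pR - pP), i.e. under the null the experimental treatment preserves at most a fraction theta of the reference effect over placebo. A hedged Wald-type sketch with a delta-method variance (invented responder counts; the paper's asymptotic procedures may differ in detail):

```r
# Wald-type test of H0: pE - pP <= theta * (pR - pP)
# i.e. reject when pE - theta*pR - (1 - theta)*pP is significantly positive
retention_test <- function(xE, nE, xR, nR, xP, nP, theta = 0.5, alpha = 0.025) {
  pE <- xE / nE; pR <- xR / nR; pP <- xP / nP
  est <- pE - theta * pR - (1 - theta) * pP
  se  <- sqrt(pE * (1 - pE) / nE +
              theta^2 * pR * (1 - pR) / nR +
              (1 - theta)^2 * pP * (1 - pP) / nP)
  c(statistic = est / se,
    reject = (est / se > qnorm(1 - alpha)))  # 1 = reject H0, 0 = do not reject
}

# Invented responder counts: experimental, reference, placebo
retention_test(xE = 60, nE = 100, xR = 65, nR = 100, xP = 15, nP = 50, theta = 0.5)
```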
33. Power and sample size determination when assessing the clinical relevance of trial results by 'responder analyses'.
- Author
-
Kieser M, Röhmel J, and Friede T
- Subjects
- Activities of Daily Living, Alzheimer Disease drug therapy, Alzheimer Disease pathology, Cognition drug effects, Cysteine therapeutic use, Drug Combinations, Humans, Pantothenic Acid therapeutic use, Risk Assessment, Clinical Trials as Topic methods, Sample Size, Treatment Outcome
- Abstract
A fundamental issue in regulatory decision making is the assessment of the benefit/risk profile of a compound. In order to do this, establishing the existence of a treatment effect by a significance test is not sufficient, but the clinical relevance of a potential benefit must also be taken into account. A number of regulatory guidelines propose that clinical relevance should be assessed by considering the rate of responders, i.e. the proportion of patients who are observed to achieve an apparently meaningful benefit. In this paper, we present methods for planning clinical trials that aim at demonstrating both statistical and clinical significance in superiority trials. Procedures based on analytical calculations are derived for normally distributed data and the case of a single endpoint as well as multiple primary outcomes. A bootstrap procedure is proposed that can be applied to non-normal data. Application is illustrated by a clinical trial in Alzheimer's disease., (2004 John Wiley & Sons, Ltd.)
- Published
- 2004
- Full Text
- View/download PDF
34. Intervention effects in observational survival studies with an application in total hip replacements.
- Author
-
Friede T and Henderson R
- Subjects
- Adult, Age Factors, Female, Humans, Male, Observation, Proportional Hazards Models, Treatment Outcome, United Kingdom, Arthroplasty, Replacement, Hip, Prosthesis Failure, Survival Analysis
- Abstract
Time to revision is a common and clinically relevant endpoint for studies of patients with total hip replacement. Because failures occur rarely within the first years after replacement, new surgical techniques and materials are often implemented without evidence of their effectiveness from randomized trials. Observational data may be available but this relies on the use of historical controls which has been heavily criticized. Instead the use of changepoint methods has been suggested to detect changes caused by successfully implemented interventions. In the setting of a proportional hazards model we develop a semi-parametric changepoint method to detect changes in baseline hazard. The procedure is motivated by and applied to a clinical study in patients with total hip replacements, where the effect of a new cement type is of interest. Power properties of the proposed method are investigated., (Copyright 2003 John Wiley & Sons, Ltd.)
- Published
- 2003
- Full Text
- View/download PDF
35. Simple procedures for blinded sample size adjustment that do not affect the type I error rate.
- Author
-
Kieser M and Friede T
- Subjects
- Anxiety Disorders drug therapy, Humans, Kava chemistry, Plant Extracts pharmacology, Clinical Trials as Topic methods, Research Design standards, Sample Size
- Abstract
For normally distributed data, determination of the appropriate sample size requires knowledge of the variance. Because of the uncertainty in the planning phase, two-stage procedures are attractive, where the variance is re-estimated from a subsample and the sample size is adjusted if necessary. From a regulatory viewpoint, preserving blindness and maintaining the ability to calculate or control the type I error rate are essential. Recently, a number of proposals have been made for sample size adjustment procedures in the t-test situation. Unfortunately, none of these methods satisfy both of these requirements. We show through analytical computations that the type I error rate of the t-test is not affected if simple blind variance estimators are used for sample size recalculation. Furthermore, the results for the expected power of the procedures demonstrate that the methods are effective in ensuring the desired power even under initial misspecification of the variance. A method is discussed that can be applied in a more general setting and that assumes analysis with a permutation test. This procedure maintains the significance level for any design situation and arbitrary blind sample size recalculation strategy., (Copyright 2003 John Wiley & Sons, Ltd.)
- Published
- 2003
- Full Text
- View/download PDF
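A "simple blind variance estimator" in the sense of entry 35 is the lumped sample variance computed ignoring treatment labels; plugging it into the usual two-sample formula gives the recalculated per-group size. A minimal sketch with a normal approximation and invented interim data:

```r
# Blinded sample size recalculation with the lumped one-sample variance estimator
recalc_n <- function(pooled, delta, alpha = 0.05, power = 0.8) {
  s2 <- var(pooled)  # blind variance estimate, ignoring treatment labels
  za <- qnorm(1 - alpha / 2); zb <- qnorm(power)
  ceiling(2 * s2 * (za + zb)^2 / delta^2)  # per-group size, two-sample z-test
}

set.seed(3)
interim <- rnorm(40, mean = 10, sd = 4)  # blinded internal pilot data
recalc_n(interim, delta = 2)
```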
36. Blinded sample size reassessment in non-inferiority and equivalence trials.
- Author
-
Friede T and Kieser M
- Subjects
- Asthma drug therapy, Bronchodilator Agents administration & dosage, Bronchodilator Agents therapeutic use, Budesonide administration & dosage, Budesonide therapeutic use, Double-Blind Method, Forced Expiratory Volume drug effects, Humans, Metered Dose Inhalers, Sample Size, Data Interpretation, Statistical, Randomized Controlled Trials as Topic methods, Research Design, Therapeutic Equivalency
- Abstract
Even in situations where the design and conduct of clinical trials are highly standardized, there may be considerable between-study variation in the observed variability of the primary outcome variable. As a consequence, performing a study in a fixed sample size design implies a considerable risk that the sample size will turn out too high or too low. This difficulty can be alleviated by applying a design with an internal pilot study. After a provisional sample size calculation in the planning stage, a portion of the planned sample is recruited and the sample size is recalculated on the basis of the observed variability. To comply with the requirement of some regulatory guidelines, only blinded data should be used for the reassessment procedure. Furthermore, the effect on the type I error rate should be quantified. The current literature presents analytical results on the actual level in the t-test situation only for superiority trials. In these situations, blinded sample size recalculation does not lead to an inflation of the type I error rate. We extended the methodology to non-inferiority and equivalence trials with normally distributed outcome variable and hypotheses formulated in terms of the ratio and difference of means. Surprisingly, in contrast to the case of testing superiority, we observed actual type I error rates above the nominal level. The extent of inflation depends on the required sample size, the sample size of the internal pilot study, and the standardized equivalence or non-inferiority margin. It turned out that the elevation of the significance level is negligible for most practical situations. Nevertheless, the consequences of sample size reassessment have to be discussed case by case, and regulatory concerns with respect to the actual size of the procedure cannot generally be refuted by referring to the fact that only blinded data were used., (Copyright 2003 John Wiley & Sons, Ltd.)
- Published
- 2003
- Full Text
- View/download PDF
37. On the inappropriateness of an EM algorithm based procedure for blinded sample size re-estimation.
- Author
-
Friede T and Kieser M
- Subjects
- Administration, Inhalation, Anti-Asthmatic Agents administration & dosage, Anti-Asthmatic Agents therapeutic use, Asthma drug therapy, Beclomethasone administration & dosage, Beclomethasone therapeutic use, Computer Simulation, Humans, Nebulizers and Vaporizers standards, Sample Size, Algorithms, Clinical Trials as Topic methods, Models, Statistical
- Abstract
When planning a clinical trial, the sample size calculation is commonly based on an a priori estimate of the variance of the outcome variable. Misspecification of the variance can have substantial impact on the power of the trial. It is therefore attractive to update the planning assumptions during the ongoing trial using an internal estimate of the variance. For this purpose, an EM-algorithm-based procedure for blinded variance estimation was proposed for normally distributed data. Various simulation studies suggest a number of appealing properties of this procedure. In contrast, we show that (i) the estimates provided by this procedure depend on the initialization, (ii) the stopping rule used is inadequate to guarantee that the algorithm converges to the maximum likelihood estimator, and (iii) the procedure corresponds to the special case of simple randomization, which, however, is rarely applied in clinical trials. Further, we show that maximum likelihood estimation does not lead to reasonable results for blinded sample size re-estimation, owing to bias and high variability. The problem is illustrated by a clinical trial in asthma., (Copyright 2002 John Wiley & Sons, Ltd.)
- Published
- 2002
- Full Text
- View/download PDF