396 results
Search Results
2. Confidence distributions for treatment effects in clinical trials: Posteriors without priors.
- Author
- Marschner, Ian C.
- Subjects
- CLINICAL trials, TREATMENT effectiveness, DISTRIBUTION (Probability theory), FREQUENTIST statistics, BAYESIAN analysis
- Abstract
An attractive feature of using a Bayesian analysis for a clinical trial is that knowledge and uncertainty about the treatment effect are summarized in a posterior probability distribution. Researchers often find probability statements about treatment effects highly intuitive, and the fact that these are not accommodated in frequentist inference is a disadvantage. At the same time, the requirement to specify a prior distribution in order to obtain a posterior distribution is sometimes an artificial process that may introduce subjectivity or complexity into the analysis. This paper considers a compromise involving confidence distributions, which are probability distributions that summarize uncertainty about the treatment effect without the need for a prior distribution and in a way that is fully compatible with frequentist inference. The concept of a confidence distribution provides a posterior-like probability distribution that is distinct from, but exists in tandem with, the relative frequency interpretation of probability used in frequentist inference. Although they have been discussed for decades, confidence distributions are not well known among clinical trial statisticians, and the goal of this paper is to discuss their use in analyzing treatment effects from randomized trials. As well as providing an introduction to confidence distributions, some illustrative examples relevant to clinical trials are presented, along with various case studies based on real clinical trials. It is recommended that trial statisticians consider presenting confidence distributions for treatment effects when reporting analyses of clinical trials. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
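As a concrete illustration of the idea in this abstract: for a normally distributed effect estimate (say a log hazard ratio) with standard error se, the normal-based confidence distribution is H(theta) = Phi((theta − theta_hat)/se). The sketch below uses hypothetical numbers, not data from the paper; it shows how the quantiles of a confidence distribution reproduce the usual confidence limits while also supporting posterior-like probability statements without a prior.

```python
from scipy.stats import norm

# Illustrative (hypothetical) trial result: estimated log hazard
# ratio and its standard error from a proportional-hazards fit.
theta_hat, se = -0.25, 0.12

def cd_cdf(theta):
    """Normal-based confidence distribution H(theta) = Phi((theta - theta_hat)/se)."""
    return norm.cdf((theta - theta_hat) / se)

def cd_quantile(p):
    """Quantiles of the confidence distribution reproduce Wald confidence limits."""
    return theta_hat + se * norm.ppf(p)

# Posterior-like statement without a prior: "confidence" that the
# treatment is beneficial (log hazard ratio < 0).
p_benefit = cd_cdf(0.0)

# The central 95% interval from the CD equals the usual 95% Wald interval.
lo, hi = cd_quantile(0.025), cd_quantile(0.975)
```

The whole curve theta ↦ H(theta) can be plotted and reported alongside the point estimate, which is essentially what the paper recommends trial statisticians present.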
3. Inference on tree‐structured subgroups with subgroup size and subgroup effect relationship in clinical trials.
- Author
- Luo, Yuanhui and Guo, Xinzhou
- Subjects
- CLINICAL trials, PANITUMUMAB, INFERENTIAL statistics
- Abstract
When multiple candidate subgroups are considered in clinical trials, we often need to make statistical inference on the subgroups simultaneously. Classical multiple testing procedures might not lead to an interpretable and efficient inference on the subgroups, as they often fail to take the subgroup size and subgroup effect relationship into account. In this paper, building on the selective traversed accumulation rules (STAR), we propose a data‐adaptive and interactive multiple testing procedure for subgroups which can take the subgroup size and subgroup effect relationship into account under a prespecified tree structure. The proposed method is easy to implement and can lead to a more interpretable and efficient inference on prespecified tree‐structured subgroups. Possible accommodations to post hoc identified tree‐structured subgroups are also discussed in the paper. We demonstrate the merit of the proposed method by re‐analyzing the panitumumab trial. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. Estimation of conditional power in the presence of auxiliary data.
- Author
- Li, Xin, Yung, Godwin, Lin, Jianchang, and Zhu, Jian
- Subjects
- TREATMENT effectiveness, CLINICAL trials
- Abstract
Conditional power (CP) is a commonly used tool to inform interim decision‐making in clinical trials, but the conventional approach using only primary endpoint data to calculate CP may not perform well when the primary endpoint requires a long follow‐up period or the treatment effect size changes during the trial. Several methods have been proposed to use additional short‐term auxiliary data observed at the interim analysis to improve the CP estimation in these situations; however, they may rely on strong assumptions, have limited applications, or use ad hoc choices of information fraction. In this paper we propose a general framework in which the true CP formula is first derived in the presence of auxiliary data, and CP estimation is obtained by substituting the unknown parameters with consistent estimators. We conducted extensive simulations to examine the performance of both the proposed and conventional approaches using the true CP as the benchmark. As the proposed approach is based on the true underlying CP, the simulations confirmed its superiority over the conventional approach in terms of efficiency and accuracy, especially when the observed auxiliary data reflect the change of treatment effect size. The simulations also indicate that the magnitude of improvement in CP estimation is associated with the correlation between auxiliary and primary endpoints and/or the magnitude of the effect size change during the trial. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
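The conventional primary-endpoint-only CP that this abstract takes as its baseline can be sketched with the standard Brownian-motion formulation of a group-sequential trial (the paper's auxiliary-data estimator is not reproduced here, and the numbers below are purely illustrative).

```python
from scipy.stats import norm

def conditional_power(z_interim, t, drift, alpha=0.025):
    """Conventional conditional power for a one-sided test.

    Model: the z-statistic process behaves like Brownian motion, so
    Z_final = Z_interim*sqrt(t) + increment, where the increment has
    mean drift*(1 - t) and variance (1 - t).

    z_interim : interim z-statistic at information fraction t
    t         : information fraction, 0 < t < 1
    drift     : assumed E[Z] at full information under the working effect size
    """
    z_alpha = norm.ppf(1 - alpha)
    num = z_interim * t ** 0.5 + drift * (1 - t) - z_alpha
    return norm.cdf(num / (1 - t) ** 0.5)

# "Current trend" assumption: estimate the drift from the interim data.
z_t, t = 1.2, 0.5
cp_trend = conditional_power(z_t, t, drift=z_t / t ** 0.5)
```

The paper's point is precisely that plugging the interim trend (or the design effect) into `drift` can be a poor estimate of the true CP; correlated auxiliary endpoints can sharpen it.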
5. Single‐world intervention graphs for defining, identifying, and communicating estimands in clinical trials.
- Author
- Ocampo, Alex and Bather, Jemar R.
- Subjects
- CLINICAL trials, CHRONIC pain, TREATMENT effectiveness, CAUSAL inference
- Abstract
Confusion often arises when attempting to articulate the target estimand(s) of a clinical trial in plain language. We aim to rectify this confusion by using a type of causal graph called the Single‐World Intervention Graph (SWIG) to provide a visual representation of the estimand that can be effectively communicated to interdisciplinary stakeholders. These graphs not only display estimands, but also illustrate the assumptions under which a causal estimand is identifiable by presenting the graphical relationships between the treatment, intercurrent events, and clinical outcomes. To demonstrate their usefulness in pharmaceutical research, we present examples of SWIGs for various intercurrent event strategies specified in the ICH E9(R1) addendum, as well as an example from a real‐world clinical trial for chronic pain. Code to generate all the SWIGs shown in this paper is made available. We advocate that clinical trialists adopt SWIGs in their estimand discussions during the planning stages of their studies. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. Understanding an impact of patient enrollment pattern on predictability of central (unstratified) randomization in a multi‐center clinical trial.
- Author
- Krisam, Johannes, Ryeznik, Yevgen, Carter, Kerstine, Kuznetsova, Olga, and Sverdlov, Oleksandr
- Subjects
- CLINICAL trials, RANDOMIZED controlled trials, BLOCK designs, NUMBER theory
- Abstract
In a multi‐center randomized controlled trial (RCT) with competitive recruitment, eligible patients are enrolled sequentially by different study centers and are randomized to treatment groups using the chosen randomization method. Given the stochastic nature of the recruitment process, some centers may enroll more patients than others, and in some instances, a center may enroll multiple patients in a row, for example, on a given day. If the study is open‐label, the investigators might be able to make intelligent guesses on upcoming treatment assignments in the randomization sequence, even if the trial is centrally randomized and not stratified by center. In this paper, we use enrollment data inspired by a real multi‐center RCT to quantify the susceptibility of two restricted randomization procedures, the permuted block design and the big stick design, to selection bias under the convergence strategy of Blackwell and Hodges (1957) applied at the center level. We provide simulation evidence that the expected proportion of correct guesses may be greater than 50% (i.e., an increased risk of selection bias) and depends on the chosen randomization method and the number of study patients recruited by a given center that takes consecutive positions on the central allocation schedule. We propose some strategies for ensuring stronger encryption of the randomization sequence to mitigate the risk of selection bias. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
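The Blackwell-Hodges convergence strategy discussed in this abstract is easy to simulate. The sketch below uses illustrative parameters (1:1 allocation, blocks of four, big stick with imbalance limit 2), not the enrollment data from the paper; it reproduces the qualitative finding that expected correct-guess proportions exceed 50%, with the permuted block design the more predictable of the two procedures.

```python
import numpy as np

rng = np.random.default_rng(2024)

def permuted_block_sequence(n_blocks, block_size=4):
    """1:1 permuted block randomization; 1 = treatment, 0 = control."""
    out = []
    for _ in range(n_blocks):
        b = np.array([0] * (block_size // 2) + [1] * (block_size // 2))
        rng.shuffle(b)
        out.append(b)
    return np.concatenate(out)

def big_stick_sequence(n, limit=2):
    """Big stick design: fair coin unless the imbalance hits the limit."""
    seq, imb = [], 0  # imb = treatment count minus control count
    for _ in range(n):
        if imb >= limit:
            a = 0
        elif imb <= -limit:
            a = 1
        else:
            a = int(rng.integers(0, 2))
        seq.append(a)
        imb += 1 if a else -1
    return np.array(seq)

def convergence_correct_rate(seq):
    """Blackwell-Hodges convergence strategy: always guess the arm with
    fewer assignments so far; guess at random on ties."""
    n_t = n_c = correct = 0
    for actual in seq:
        if n_t != n_c:
            guess = 1 if n_t < n_c else 0
        else:
            guess = int(rng.integers(0, 2))
        correct += guess == actual
        n_t, n_c = n_t + actual, n_c + (1 - actual)
    return correct / len(seq)

reps = 200
prop_pbd = np.mean([convergence_correct_rate(permuted_block_sequence(50)) for _ in range(reps)])
prop_bsd = np.mean([convergence_correct_rate(big_stick_sequence(200)) for _ in range(reps)])
# Blocks of 4 yield roughly 70.8% correct guesses in expectation; the
# big stick design with limit 2 is less predictable, roughly 62.5%.
```

For blocks of size 4 the expected proportion can be computed exactly as (1/2 + 2/3 + 2/3 + 1)/4 ≈ 0.708, which the simulation recovers.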
7. Modern approaches for evaluating treatment effect heterogeneity from clinical trials and observational data.
- Author
- Lipkovich, Ilya, Svensson, David, Ratitch, Bohdana, and Dmitrienko, Alex
- Subjects
- TREATMENT effect heterogeneity, CLINICAL trials, EVALUATION methodology
- Abstract
In this paper, we review recent advances in statistical methods for the evaluation of the heterogeneity of treatment effects (HTE), including subgroup identification and estimation of individualized treatment regimens, from randomized clinical trials and observational studies. We identify several types of approaches using the features introduced in Lipkovich et al (Stat Med 2017;36: 136‐196) that distinguish the recommended principled methods from basic methods for HTE evaluation that typically rely on rules of thumb and general guidelines (the methods are often referred to as common practices). We discuss the advantages and disadvantages of various principled methods as well as common measures for evaluating their performance. We use simulated data and a case study based on a historical clinical trial to illustrate several new approaches to HTE evaluation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Simultaneous hypothesis testing for multiple competing risks in comparative clinical trials.
- Author
- Wen, Jiyang, Wang, Mei‐Cheng, and Hu, Chen
- Subjects
- COMPETING risks, CLINICAL trials, COVID-19 treatment, HOSPITAL admission & discharge, MONTE Carlo method
- Abstract
Competing risks data are commonly encountered in randomized clinical trials or observational studies. Ignoring competing risks in survival analysis leads to biased risk estimates and improper conclusions. Often, one of the competing events is of primary interest and the remaining competing events are handled as nuisances. These approaches can be inadequate when multiple competing events have important clinical interpretations and are thus of equal interest. For example, in COVID‐19 in‐patient treatment trials, the outcomes of COVID‐19 related hospitalization are either death or discharge from hospital, which have completely different clinical implications and are of equal interest, especially during the pandemic. In this paper we develop nonparametric estimation and simultaneous inferential methods for multiple cumulative incidence functions (CIFs) and corresponding restricted mean times. Based on Monte Carlo simulations and a data analysis of a COVID‐19 in‐patient treatment clinical trial, we demonstrate that the proposed method provides global insights into the treatment effects across multiple endpoints. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
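The nonparametric CIF estimation underlying this abstract can be illustrated with the standard Aalen-Johansen estimator. This is a minimal sketch with made-up data (event 1 = death, event 2 = discharge, 0 = censored); the paper's simultaneous inference procedures are not reproduced.

```python
import numpy as np

def cumulative_incidence(time, event, cause, t_eval):
    """Aalen-Johansen estimator of the cumulative incidence function
    for one cause. Event codes: 0 = censored, 1, 2, ... = event types."""
    time, event = np.asarray(time, float), np.asarray(event)
    surv, cif, steps = 1.0, 0.0, []
    for t in np.unique(time[event > 0]):
        at_risk = np.sum(time >= t)
        d_cause = np.sum((time == t) & (event == cause))
        d_all = np.sum((time == t) & (event > 0))
        cif += surv * d_cause / at_risk   # S(t-) * (cause-specific hazard increment)
        surv *= 1.0 - d_all / at_risk     # update overall event-free survival
        steps.append((t, cif))
    # evaluate the right-continuous step function on the requested grid
    out = np.zeros(len(t_eval))
    for i, g in enumerate(t_eval):
        past = [c for (t, c) in steps if t <= g]
        out[i] = past[-1] if past else 0.0
    return out

# Hypothetical hospitalized-patient data (days): 1 = death, 2 = discharge.
time = [1, 2, 3, 4, 5, 5]
event = [1, 2, 1, 2, 0, 2]
cif_death = cumulative_incidence(time, event, 1, [5])
cif_disch = cumulative_incidence(time, event, 2, [5])
```

The two cause-specific CIFs partition the overall event probability, so their sum never exceeds one; treating discharge as censoring instead would bias the death CIF upward.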
9. Jerome Cornfield's contributions to early large randomized clinical trials and some reminiscences from the years of the slippery doorknobs.
- Author
- Wittes, Janet
- Abstract
This paper briefly describes Jerome Cornfield's approach to Bayesian statistics, his discomfort with frequentist inference, and his contribution to two major clinical trials, the University Group Diabetes Program and the Coronary Drug Project. I mention the role of Bayesian statistics in current randomized clinical trials and conjecture why Cornfield's contributions to Bayesian methods are not more widely cited today. I then provide some personal recollections of Jerry as a role model and mentor and conclude with a recommendation that biostatisticians read his seminal papers because of their thoughtfulness, insight, wit, and clarity. Copyright © 2012 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
10. On assessing survival benefit of immunotherapy using long‐term restricted mean survival time.
- Author
- Horiguchi, Miki, Tian, Lu, and Uno, Hajime
- Subjects
- SURVIVAL rate, LOG-rank test, CLINICAL trials, TREATMENT delay (Medicine), IMMUNOTHERAPY
- Abstract
The pattern of the difference between two survival curves we often observe in randomized clinical trials for evaluating immunotherapy is not proportional hazards; the treatment effect typically appears several months after the initiation of treatment (ie, a delayed difference pattern). The commonly used logrank test and hazard ratio estimation approach will be suboptimal for testing and estimation in such trials. The long‐term restricted mean survival time (LT‐RMST) approach is a promising alternative for detecting a treatment effect that potentially appears later in the study. A challenge in employing the LT‐RMST approach is that one must specify a lower end of the time window in addition to the truncation time point that the RMST requires. There are several investigations and suggestions regarding the choice of the truncation time point for the RMST. However, little has been done to address the choice of the lower end of the time window. In this paper, we propose a flexible LT‐RMST‐based testing and estimation approach that does not require users to specify a lower end of the time window. Numerical studies demonstrated that the potential power loss from adopting this flexibility was minimal compared to the standard LT‐RMST approach using a prespecified lower end of the time window. The proposed method is flexible and can offer higher power than the RMST‐based approach when a delayed treatment effect is expected. Also, it provides a robust estimate of the magnitude of the treatment effect and a confidence interval that corresponds to the test result. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
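The LT-RMST estimand itself is straightforward: the area under the Kaplan-Meier curve over a window [tau0, tau1], with the ordinary RMST as the special case tau0 = 0. A minimal sketch with a fixed, prespecified lower end follows (the paper's contribution is precisely a procedure that avoids having to prespecify tau0, which is not reproduced here).

```python
import numpy as np

def km_curve(time, status):
    """Kaplan-Meier estimate; returns event times and S(t) just after each."""
    time, status = np.asarray(time, float), np.asarray(status)
    times_u = np.unique(time[status == 1])
    surv, s = [], 1.0
    for t in times_u:
        at_risk = np.sum(time >= t)
        d = np.sum((time == t) & (status == 1))
        s *= 1 - d / at_risk
        surv.append(s)
    return times_u, np.array(surv)

def window_rmst(time, status, tau0, tau1):
    """Integral of the KM curve over [tau0, tau1]: long-term RMST with a
    prespecified lower end tau0 (tau0 = 0 gives the ordinary RMST)."""
    times_u, surv = km_curve(time, status)
    # S(t) is a step function: equal to steps[k] on [knots[k], knots[k+1])
    knots = np.concatenate(([0.0], times_u, [np.inf]))
    steps = np.concatenate(([1.0], surv))
    total = 0.0
    for k in range(len(steps)):
        a, b = max(knots[k], tau0), min(knots[k + 1], tau1)
        if b > a:
            total += steps[k] * (b - a)
    return total
```

The between-arm treatment effect is then the difference of `window_rmst` values for the two arms, interpretable as extra event-free time accrued within the window.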
11. Estimation and visualization of heterogeneous treatment effects for multiple outcomes.
- Author
- Yuki, Shintaro, Tanioka, Kensuke, and Yadohisa, Hiroshi
- Subjects
- TREATMENT effectiveness, LATENT variables, VISUALIZATION, CLINICAL trials
- Abstract
We consider two‐arm comparisons in clinical trials. The objective is to identify a population with characteristics that make the treatment effective. Such a population is called a subgroup. This identification can be made by estimating the treatment effect and identifying the interactions between treatments and covariates. For a single outcome, several methods are available to identify the subgroups. Methods also exist for multiple outcomes, but they are difficult to interpret and cannot be applied to outcomes other than continuous ones. In this paper, we thus propose a new method that allows for a straightforward interpretation of subgroups and deals with both continuous and binary outcomes. The proposed method introduces latent variables and adds Lasso sparsity constraints to the estimated loadings to facilitate the interpretation of the relationship between outcomes and covariates. The interpretation of the subgroups is made by visualizing treatment effects and latent variables. Since we are performing sparse estimation, we can interpret the covariates related to the treatment effects and subgroups. Finally, simulation and real data examples demonstrate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Flexible evaluation of surrogate markers with Bayesian model averaging.
- Author
- Duan, Yunshan and Parast, Layla
- Subjects
- BIOMARKERS, CLINICAL trials, DECISION making, PARAMETRIC modeling
- Abstract
When long‐term follow up is required for a primary endpoint in a randomized clinical trial, a valid surrogate marker can help to estimate the treatment effect and accelerate the decision process. Several model‐based methods have been developed to evaluate the proportion of the treatment effect that is explained by the treatment effect on the surrogate marker. More recently, a nonparametric approach has been proposed allowing for more flexibility by avoiding the restrictive parametric model assumptions required in the model‐based methods. While the model‐based approaches suffer from potential mis‐specification of the models, the nonparametric method fails to give desirable estimates when the sample size is small, or when the range of the data does not follow certain conditions. In this paper, we propose a Bayesian model averaging approach to estimate the proportion of treatment effect explained by the surrogate marker. Our procedure offers a compromise between the model‐based approach and the nonparametric approach by introducing model flexibility via averaging over several candidate models and maintains the strength of parametric models with respect to inference. We compare our approach with previous model‐based methods and the nonparametric method. Simulation studies demonstrate the advantage of our method when surrogate supports are inconsistent and sample sizes are small. We illustrate our method using data from the Diabetes Prevention Program study to examine hemoglobin A1c as a surrogate marker for fasting glucose. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
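For orientation, the model-based "proportion of treatment effect explained" (PTE) that this line of work generalizes can be illustrated with the classic Freedman-style estimator, PTE = 1 − (surrogate-adjusted effect)/(unadjusted effect), from two linear regressions. This is a simplified sketch on simulated data, not the paper's Bayesian model averaging procedure, and the data-generating numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated trial: treatment A shifts the surrogate S, and the primary
# outcome Y depends mostly on S (so S captures most of the effect).
n = 2000
a = rng.integers(0, 2, n)                    # randomized treatment arm
s = 1.0 * a + rng.normal(size=n)             # surrogate marker
y = 0.1 * a + 0.8 * s + rng.normal(size=n)   # primary outcome

def ols_coef(X, y):
    """Least-squares coefficients for design matrix X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
beta_unadj = ols_coef(np.column_stack([ones, a]), y)[1]      # total effect of A
beta_adj = ols_coef(np.column_stack([ones, a, s]), y)[1]     # effect of A given S

pte = 1 - beta_adj / beta_unadj  # Freedman-style proportion explained
```

Here the true total effect is 0.1 + 0.8 × 1.0 = 0.9, of which the surrogate carries 0.8, so PTE should land near 0.89; the instability of this ratio in small samples is one motivation for the paper's model-averaging approach.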
13. Correction to "Propensity score weighting for covariate adjustment in randomized clinical trials".
- Author
- Zeng, Shuxi, Li, Fan, and Wang, Rui
- Subjects
- CLINICAL trials, ASYMPTOTIC efficiencies, ASYMPTOTIC expansions
- Abstract
The article titled "Correction to 'Propensity score weighting for covariate adjustment in randomized clinical trials'" by Zeng et al. provides a correction to a previously published paper. The correction clarifies that certain propositions in the paper only hold under equal randomization with a specific condition, which was mistakenly omitted in the original publication. The article discusses the properties of the OW estimator in randomized trials under equal randomization and unequal randomization, when the true conditional outcome surface is linear. It also acknowledges a mistake in the original paper's asymptotic expansion, but assures that it did not affect the numerical results in the simulation studies and data application. The updated supplementary materials are provided to support the corrected proposition. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
14. Semiparametric normal transformation joint model of multivariate longitudinal and bivariate time‐to‐event data.
- Author
- Tang, An‐Ming, Peng, Cheng, and Tang, Niansheng
- Subjects
- GIBBS sampling, HAZARD function (Statistics), RANDOM variables, BREAST cancer, CLINICAL trials
- Abstract
Joint models for longitudinal and survival data (JMLSs) have been widely used in recent years to investigate the relationship between longitudinal and survival data in clinical trials. However, existing studies mainly focus on independent survival data. In many clinical trials, survival data may be bivariately correlated. To this end, this paper proposes a novel JMLS accommodating multivariate longitudinal and bivariate correlated time‐to‐event data. Nonparametric marginal survival hazard functions are transformed to bivariate normal random variables. Bayesian penalized splines are employed to approximate unknown baseline hazard functions. Incorporating the Metropolis‐Hastings algorithm into the Gibbs sampler, we develop a Bayesian adaptive Lasso method to simultaneously estimate parameters and baseline hazard functions, and to select important predictors in the considered JMLS. Simulation studies and an example taken from the International Breast Cancer Study Group are used to illustrate the proposed methodologies. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. On becoming a Bayesian: Early correspondences between J. Cornfield and L. J. Savage.
- Author
- Greenhouse, Joel B.
- Abstract
Jerome Cornfield was arguably the leading proponent for the use of Bayesian methods in biostatistics during the 1960s. Prior to 1963, however, Cornfield had no publications in the area of Bayesian statistics. At a time when frequentist methods were the dominant influence on statistical practice, Cornfield went against the mainstream and embraced Bayes. The goals of this paper are as follows: (i) to explore how and why this transformation came about and (ii) to provide some sense as to who Cornfield was and the context in which he worked. Copyright © 2012 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
16. Information content of stepped-wedge designs when treatment effect heterogeneity and/or implementation periods are present.
- Author
- Kasza, Jessica, Taljaard, Monica, and Forbes, Andrew B.
- Subjects
- CLUSTER randomized controlled trials, HETEROGENEITY, ATRIAL fibrillation treatment, EXPERIMENTAL design, RESEARCH, CLINICAL trials, TREATMENT effect heterogeneity, RESEARCH methodology, EVALUATION research, MEDICAL protocols, COMPARATIVE studies, RESEARCH funding, STATISTICAL models
- Abstract
Stepped-wedge cluster randomized trials, which randomize clusters of subjects to treatment sequences in which clusters switch from control to intervention conditions, are being conducted with increasing frequency. Due to the real-world nature of this design, methodological and implementation challenges are ubiquitous. To account for such challenges, more complex statistical models to plan studies and analyze data are required. In this paper, we consider stepped-wedge trials that accommodate treatment effect heterogeneity across clusters, implementation periods during which no data are collected, or both treatment effect heterogeneity and implementation periods. Previous work has shown that the sequence-period cells of a stepped-wedge design contribute unequal amounts of information to the estimation of the treatment effect. In this paper, we extend that work by considering the amount of information available for the estimation of the treatment effect in each sequence-period cell, sequence, and period of stepped-wedge trials with more complex designs and outcome models. When either treatment effect heterogeneity and/or implementation periods are present, the pattern of information content of sequence-period cells tends to be clustered around the times of the switch from control to intervention condition, similarly to when these complexities are absent. However, the presence and degree of treatment effect heterogeneity and the number of implementation periods can influence the information content of periods and sequences markedly. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
17. Prospective individual patient data meta‐analysis: Evaluating convalescent plasma for COVID‐19.
- Author
- Goldfeld, Keith S., Wu, Danni, Tarpey, Thaddeus, Liu, Mengling, Wu, Yinxiang, Troxel, Andrea B., and Petkova, Eva
- Subjects
- CONVALESCENT plasma, COVID-19, COVID-19 pandemic, COVID-19 treatment, CLINICAL trials
- Abstract
As the world faced the devastation of the COVID‐19 pandemic in late 2019 and early 2020, numerous clinical trials were initiated in many locations in an effort to establish the efficacy (or lack thereof) of potential treatments. As the pandemic has been shifting locations rapidly, individual studies have been at risk of failing to meet recruitment targets because of declining numbers of eligible patients with COVID‐19 encountered at participating sites. It has become clear that it might take several more COVID‐19 surges at the same location to achieve full enrollment and to find answers about what treatments are effective for this disease. This paper proposes an innovative approach for pooling patient‐level data from multiple ongoing randomized clinical trials (RCTs) that have not been configured as a network of sites. We present the statistical analysis plan of a prospective individual patient data (IPD) meta‐analysis (MA) from ongoing RCTs of convalescent plasma (CP). We employ an adaptive Bayesian approach for continuously monitoring the accumulating pooled data via posterior probabilities for safety, efficacy, and harm. Although we focus on RCTs for CP and address specific challenges related to CP treatment for COVID‐19, the proposed framework is generally applicable to pooling data from RCTs for other therapies and disease settings in order to find answers in weeks or months, rather than years. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
18. Improving likelihood-based inference in control rate regression.
- Author
- Guolo, Annamaria
- Subjects
- CLINICAL trials, COMPUTER simulation, DEATH, HYPERTENSION, META-analysis, PROBABILITY theory, REGRESSION analysis, STATISTICS, STATISTICAL models
- Abstract
Control rate regression is a widely used approach to account for heterogeneity among studies in meta-analysis by including information about the outcome risk of patients in the control condition. Correcting for the presence of measurement error affecting risk information in the treated and in the control group has been recognized as a necessary step to derive reliable inferential conclusions. Within this framework, the paper considers the problem of small sample size as an additional source of misleading inference about the slope of the control rate regression. Likelihood procedures relying on first-order approximations are shown to be substantially inaccurate, especially when dealing with increasing heterogeneity and correlated measurement errors. We suggest addressing the problem by relying on higher-order asymptotics. In particular, we derive Skovgaard's statistic as an instrument to improve the accuracy of the approximation of the signed profile log-likelihood ratio statistic to the standard normal distribution. The proposal is shown to provide much more accurate results than standard likelihood solutions, with no appreciable computational effort. The advantages of Skovgaard's statistic in control rate regression are shown in a series of simulation experiments and illustrated in a real data example. R code for applying the first- and second-order statistics for inference on the slope of the control rate regression is provided. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
19. Sequential knockoffs for continuous and categorical predictors: With application to a large psoriatic arthritis clinical trial pool.
- Author
- Kormaksson, Matthias, Kelly, Luke J., Zhu, Xuan, Haemmerle, Sibylle, Pricop, Luminita, and Ohlssen, David
- Subjects
- PSORIATIC arthritis, FORECASTING, FALSE discovery rate, CLINICAL trials, PROGNOSIS
- Abstract
Knockoffs provide a general framework for controlling the false discovery rate when performing variable selection. Much of the knockoffs literature focuses on theoretical challenges, and we recognize a need for bringing some of the current ideas into practice. In this paper we propose a sequential algorithm for generating knockoffs when the underlying data consist of both continuous and categorical (factor) variables. Further, we present a heuristic multiple knockoffs approach that offers a practical assessment of how robust the knockoff selection process is for a given dataset. We conduct extensive simulations to validate performance of the proposed methodology. Finally, we demonstrate the utility of the methods on a large clinical data pool of more than 2000 patients with psoriatic arthritis evaluated in four clinical trials with an IL-17A inhibitor, secukinumab (Cosentyx), where we determine prognostic factors of a well established clinical outcome. The analyses presented in this paper could provide a wide range of applications to commonly encountered datasets in medical practice and other fields where variable selection is of particular interest. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
20. Comparative poisson clinical trials of multiple experimental treatments vs a single control using the negative multinomial distribution.
- Author
- Chiarappa, Joseph A. and Hoover, Donald R.
- Subjects
- MULTINOMIAL distribution, INVESTIGATIONAL therapies, FALSE positive error, CLINICAL trials, DISTRIBUTION (Probability theory)
- Abstract
This paper introduces a method which conditions on the number of events that occur in the control group to determine rejection regions and power for comparative Poisson trials with multiple experimental treatment arms that are each compared to one control arm. This leads to the negative multinomial as the statistical distribution used for testing. For one experimental treatment and one control with curtailed sampling, this is equivalent to Gail's (1974) approach. We provide formulas to calculate exact one-sided overall Type I error and pointwise power for tests of treatment superiority and inferiority (vs the control). Tables of trial design parameters for combinations of one-sided overall Type I error = 0.05, 0.01 and pointwise power = 0.90, 0.80 are provided. Curtailment approaches are presented to stop follow-up of experimental treatment arms or to stop the study entirely once the final outcomes for each arm are known. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
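The conditioning argument in this abstract can be sketched for the simplest case of one experimental arm versus one control with equal follow-up: under equal Poisson rates, each event in the pooled process is a control event with probability 1/2, so the number of treatment events observed before the control arm reaches its c-th event is negative binomial. The numbers below are illustrative; the paper's multi-arm design tables and exact Type I error calculations are not reproduced.

```python
from scipy.stats import nbinom

c = 20      # control-arm events at which sampling is curtailed
x_obs = 8   # treatment-arm events observed by that point

# Under H0 of equal Poisson rates (and equal follow-up), each event in
# the pooled process is a control event with probability p = 1/2, so the
# number of treatment events before the c-th control event follows a
# negative binomial distribution with parameters (c, 1/2). SciPy's
# nbinom counts failures (here: treatment events) before the c-th
# success (here: control events).
p_superiority = nbinom.cdf(x_obs, c, 0.5)  # one-sided evidence of fewer events

# Largest rejection threshold keeping one-sided Type I error <= 0.05:
crit = 0
while nbinom.cdf(crit + 1, c, 0.5) <= 0.05:
    crit += 1
```

With c = 20 and 8 treatment events, the one-sided p-value is about 0.018, so superiority would be declared at the 0.05 level; the multi-arm case replaces the negative binomial with the negative multinomial the title refers to.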
21. Sharp nonparametric bounds and randomization inference for treatment effects on an ordinal outcome.
- Author
- Chiba, Yasutaka
- Subjects
- ATTRIBUTION (Social psychology), CLINICAL trials, COMPUTER simulation, CONFIDENCE intervals, NONPARAMETRIC statistics, STATISTICAL sampling, STATISTICS, DATA analysis, TREATMENT effectiveness
- Abstract
In clinical research, investigators are interested in inferring the average causal effect of a treatment. However, the causal parameter that can be used to derive the average causal effect is not well defined for ordinal outcomes. Although some definitions have been proposed, they are limited in that they are not identical to the well-defined causal risk for a binary outcome, which is the simplest ordinal outcome. In this paper, we propose the use of a causal parameter for an ordinal outcome, defined as the probability that a potential outcome under one treatment condition would not be smaller than that under the other condition. For a binary outcome, this probability is identical to the causal risk. Unfortunately, the proposed causal parameter cannot be identified, even under randomization. Therefore, we present a numerical method to calculate the sharp nonparametric bounds within a sample, reflecting the impact of confounding. When the assumption of independent potential outcomes is added, the causal parameter can be identified under randomization. Then, we present exact tests and the associated confidence intervals for the relative treatment effect using the randomization-based approach, which are an extension of the existing methods for a binary outcome. Our methodologies are illustrated using data from an emetic prevention clinical trial. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
22. On responder analyses in the framework of within subject comparisons - considerations and two case studies.
- Author
- Kunz, Michael
- Abstract
A responder analysis is a common tool when clinical data are reported. In this paper, we extend the definition of responders to within subject comparisons and present a rigorous definition of the corresponding statistical functional. Via simulation studies, we gain further insight into the conditions under which these analyses can even result in higher power compared with an analysis based on the arithmetic mean. We report two case studies where these analyses contributed to a better understanding of the clinical data, especially as some large observations were present that had a notable impact on the observed standard deviation. Copyright © 2014 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
23. Early phase dose-finding trials in virology.
- Author
- Dehbi, Hakim-Moulay, Lowe, David M., and O'Quigley, John
- Subjects
- DRUG side effects, VIROLOGY, NOROVIRUS diseases, VIRAL load, ANTIVIRAL agents, VIRUS disease drug therapy, EXPERIMENTAL design, CLINICAL trials, DRUG dosage, DOSE-effect relationship in pharmacology, DRUG toxicity
- Abstract
Little has been published in terms of dose-finding methodology in virology. Aside from a few papers focusing on HIV, the considerable progress in dose-finding methodology of the last 25 years has focused almost entirely on oncology. While adverse reactions to cytotoxic drugs may be life threatening, for anti-viral agents we anticipate something different: side effects that provoke the cessation of treatment. This would correspond to treatment failure. On the other hand, success would not be yes/no but would correspond to a range of responses, from a small reduction in viral load (say, no more than 20%) to complete elimination of the virus. Less than total success matters since this may allow the patient to achieve immune-mediated clearance. The motivation for this article is an upcoming dose-finding trial in chronic norovirus infection. We propose a novel methodology whose goal is twofold: first, to identify the dose that provides the most favorable distribution of treatment outcomes, and, second, to do this in a way that maximizes the treatment benefit for the patients included in the study. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
24. Estimands in clinical trials - broadening the perspective.
- Author
-
Akacha, Mouna, Bretz, Frank, and Ruberg, Stephen
- Subjects
INDUSTRIES ,CLINICAL trials ,EXPERIMENTAL design ,STATISTICS ,DRUG development ,DATA analysis ,STATISTICAL models ,STANDARDS - Abstract
Defining the scientific questions of interest in a clinical trial is crucial to align its planning, design, conduct, analysis, and interpretation. However, practical experience shows that oftentimes specific choices in the statistical analysis blur the scientific question either in part or even completely, resulting in misalignment between trial objectives, conduct, and analysis, and in confusion in interpretation. The need for more clarity was highlighted by the Steering Committee of the International Council for Harmonization (ICH) in 2014, which endorsed a Concept Paper with the goal of developing a new regulatory guidance, suggested to be an addendum to ICH guideline E9. Triggered by these developments, we elaborate in this paper what the relevant questions in drug development are and how they fit with the current practice of intention-to-treat analyses. To this end, we consider the perspectives of patients, physicians, regulators, and payers. We argue that despite the different backgrounds and motivations of the various stakeholders, they all have similar interests in what the clinical trial estimands should be. Broadly, these can be classified into estimands addressing (a) lack of adherence to treatment due to different reasons and (b) efficacy and safety profiles when patients, in fact, are able to adhere to the treatment for its intended duration. We conclude that disentangling adherence to treatment and the efficacy and safety of treatment in patients who adhere leads to a transparent and clinically meaningful assessment of treatment risks and benefits. We touch upon statistical considerations and offer a discussion of additional implications. Copyright © 2016 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
25. Comparing a stratified treatment strategy with the standard treatment in randomized clinical trials.
- Author
-
Sun, Hong, Bretz, Frank, Gerke, Oke, and Vach, Werner
- Subjects
CLINICAL trials ,EXPERIMENTAL design - Abstract
The increasing emergence of predictive markers for different treatments in the same patient population allows us to define stratified treatment strategies. We consider randomized clinical trials that compare a standard treatment with a new stratified treatment strategy that divides the study population into subgroups receiving different treatments. Because the new strategy may not be beneficial in all subgroups, we consider in this paper an intermediate approach that establishes a treatment effect in a subset of patients built by joining several subgroups. The approach is based on the simple idea of selecting the subset with minimal p-value when testing the subset-specific treatment effects. We present a framework to compare this approach with other approaches to select subsets by introducing three performance measures. The results of a comprehensive simulation study are presented, and the relative merits of the various approaches are discussed. Copyright © 2016 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
26. Joint modeling of binary response and survival for clustered data in clinical trials.
- Author
-
Chen, Bingshu E. and Wang, Jia
- Subjects
CLINICAL trials ,INFERENTIAL statistics ,RANDOM variables ,RANDOM effects model - Abstract
In clinical trials, it is often desirable to evaluate the effect of a prognostic factor such as a marker response on a survival outcome. However, the marker response and survival outcome are usually associated with some potentially unobservable factors. In this case, the conventional statistical methods that model these two outcomes separately may not be appropriate. In this paper, we propose a joint model for marker response and survival outcomes for clustered data, providing efficient statistical inference by considering these two outcomes simultaneously. We focus on a special type of marker response: a binary outcome, which is investigated together with survival data using a cluster-specific multivariate random effect variable. A multivariate penalized likelihood method is developed to make statistical inference for the joint model. However, the standard errors obtained from the penalized likelihood method are usually underestimated. This issue is addressed using a jackknife resampling method to obtain a consistent estimate of standard errors. We conduct extensive simulation studies to assess the finite sample performance of the proposed joint model and inference methods in different scenarios. The simulation studies show that the proposed joint model has excellent finite sample properties compared to the separate models when there exists an underlying association between the marker response and survival data. Finally, we apply the proposed method to a symptom control study conducted by Canadian Cancer Trials Group to explore the prognostic effect of covariates on pain control and overall survival. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
27. Incorporating patient-reported outcomes in dose-finding clinical trials.
- Author
-
Lee, Shing M., Lu, Xiaoqi, and Cheng, Bin
- Subjects
CLINICAL trials ,MEASURING instruments - Abstract
Oncology dose-finding clinical trials determine the maximum tolerated dose (MTD) based on toxicity outcomes captured by clinicians. With the availability of more rigorous instruments for measuring toxicity directly from patients, there is a growing interest to incorporate patient-reported outcomes (PRO) in clinical trials to inform patient tolerability. This is particularly important for dose-finding trials to ensure the identification of a well-tolerated dose. In this paper, we propose three extensions of the continual reassessment method (CRM), termed PRO-CRMs, that incorporate both clinician and patient outcomes. The first method is a marginal modeling approach whereby clinician and patient toxicity outcomes are modeled separately. The other two methods impose a constraint using a joint outcome defined based on both clinician and patient toxicities and model them either jointly or marginally. Simulation studies show that while all three PRO-CRMs select well-tolerated doses based on clinician's and patient's perspectives, the methods using a joint outcome perform better and have similar performance. We also show that the proposed PRO-CRMs are consistent under robust model assumptions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
28. A new Bayesian joint model for longitudinal count data with many zeros, intermittent missingness, and dropout with applications to HIV prevention trials.
- Author
-
Wu, Jing, Chen, Ming-Hui, Schifano, Elizabeth D., Ibrahim, Joseph G., and Fisher, Jeffrey D.
- Subjects
HIV prevention ,GIBBS sampling ,POISSON regression ,TREATMENT effectiveness ,PATIENT dropouts ,HIV infection transmission ,HIV infections & psychology ,STATISTICS ,COMPUTER simulation ,CLINICAL trials ,HUMAN sexuality ,REGRESSION analysis ,RESEARCH funding ,STATISTICAL models ,DATA analysis ,PROBABILITY theory ,POISSON distribution ,LONGITUDINAL method - Abstract
In longitudinal clinical trials, it is common that subjects may permanently withdraw from the study (dropout), or return to the study after missing one or more visits (intermittent missingness). It is also routinely encountered in HIV prevention clinical trials that there is a large proportion of zeros in count response data. In this paper, a sequential multinomial model is adopted for dropout and subsequently a conditional model is constructed for intermittent missingness. The new model captures the complex structure of missingness and incorporates dropout and intermittent missingness simultaneously. The model also allows us to easily compute the predictive probabilities of different missing data patterns. A zero-inflated Poisson mixed-effects regression model is assumed for the longitudinal count response data. We also propose an approach to assess the overall treatment effects under the zero-inflated Poisson model. We further show that the joint posterior distribution is improper if uniform priors are specified for the regression coefficients under the proposed model. Variations of the g-prior, Jeffreys prior, and maximally dispersed normal prior are thus established as remedies for the improper posterior distribution. An efficient Gibbs sampling algorithm is developed using a hierarchical centering technique. A modified logarithm of the pseudomarginal likelihood and a concordance based area under the curve criterion are used to compare the models under different missing data mechanisms. We then conduct an extensive simulation study to investigate the empirical performance of the proposed methods and further illustrate the methods using real data from an HIV prevention clinical trial. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
29. Two-stage enrichment clinical trial design with adjustment for misclassification in predictive biomarkers.
- Author
-
Lin, Yong, Shih, Weichung J., and Lu, Shou-En
- Subjects
FALSE positive error ,NON-small-cell lung carcinoma ,CLINICAL trials ,ERROR rates ,TYPE design - Abstract
A two-stage enrichment design is a type of adaptive design, which extends a stratified design with a futility analysis on the marker negative cohort at the first stage, and the second stage can be either a targeted design with only the marker positive stratum, or still the stratified design with both marker strata, depending on the result of the interim futility analysis. In this paper, we consider the situation where the marker assay and the classification rule are possibly subject to error. We derive the sequential tests for the global hypothesis as well as the component tests for the overall cohort and the marker-positive cohort. We discuss the power analysis with the control of the type I error rate and show the adverse impact of the misclassification on the powers. We also show the enhanced power of the two-stage enrichment over the one-stage design and illustrate with examples of the recent successful development of immunotherapy in non-small-cell lung cancer. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
30. Symmetric graphs for equally weighted tests, with application to the Hochberg procedure.
- Author
-
Xi, Dong and Bretz, Frank
- Subjects
WEIGHTED graphs ,NULL hypothesis ,ERROR rates ,CLINICAL trials ,STATISTICS ,STATISTICAL models ,DATA analysis - Abstract
The graphical approach to multiple testing provides a convenient tool for designing, visualizing, and performing multiplicity adjustments in confirmatory clinical trials while controlling the familywise error rate. It assigns a set of weights to each intersection null hypothesis within the closed test framework. These weights form the basis for intersection tests using weighted individual p-values, such as the weighted Bonferroni test. In this paper, we extend the graphical approach to intersection tests that assume equal weights for the elementary null hypotheses associated with any intersection hypothesis, including the Hochberg procedure as well as omnibus tests such as Fisher's combination, O'Brien's, and F tests. More specifically, we introduce symmetric graphs that generate sets of equal weights so that the aforementioned tests can be applied with the graphical approach. In addition, we visualize the Hochberg and the truncated Hochberg procedures in serial and parallel gatekeeping settings using symmetric component graphs. We illustrate the method with two clinical trial examples. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
31. Baseline patient characteristics and mortality associated with longitudinal intervention compliance.
- Author
-
Lin, Julia Y., Ten Have, Thomas R., Bogner, Hillary R., and Elliott, Michael R.
- Subjects
THERAPEUTICS ,SUICIDE prevention ,CLINICAL trials ,MENTAL depression ,LONGITUDINAL method ,MORTALITY ,PROBABILITY theory ,RESEARCH funding ,TIME ,PROPORTIONAL hazards models ,PATIENT selection ,PATIENT dropouts ,STATISTICAL models - Abstract
- Published
- 2007
- Full Text
- View/download PDF
32. Discussion of "target estimands for population-adjusted indirect comparisons" by Antonio Remiro-Azocar.
- Author
-
Russek-Cohen, Estelle
- Subjects
CLINICAL trials ,NATIONAL health insurance - Abstract
Using covariates or omitting them with other models may alter the definition of the treatment effect, so careful consideration should be given to covariates at the planning stages of a study, but this does not preclude defining a marginal estimand. Covariates can improve the precision of estimates of the treatment effect, but that impacts the estimator and estimate. I also agree that ignoring important covariates in a network meta-analysis is likely to generate less meaningful estimates of treatment benefit, even if the precision of the estimates is not necessarily tighter when covariates are included. In many cases individual patient data are not available and the journal articles are too terse to determine the estimand for each study; different studies capture the covariates differently (eg, lumping and splitting subjects differently) and the endpoints may not be identical. [Extracted from the article]
- Published
- 2022
- Full Text
- View/download PDF
33. Bayesian evidence synthesis for exploring generalizability of treatment effects: a case study of combining randomized and non-randomized results in diabetes.
- Author
-
Verde, Pablo E., Ohmann, Christian, Morbach, Stephan, and Icks, Andrea
- Subjects
DIABETIC foot prevention ,CLINICAL trials ,COMPARATIVE studies ,EXPERIMENTAL design ,RESEARCH methodology ,MEDICAL cooperation ,PROBABILITY theory ,RESEARCH ,EVALUATION research ,TREATMENT effectiveness - Abstract
In this paper, we present a unified modeling framework to combine aggregated data from randomized controlled trials (RCTs) with individual participant data (IPD) from observational studies. Rather than simply pooling the available evidence into an overall treatment effect, adjusted for potential confounding, the intention of this work is to explore treatment effects in specific patient populations reflected by the IPD. In this way, by collecting IPD, we can potentially gain new insights from RCTs' results, which cannot be seen using only a meta-analysis of RCTs. We present a new Bayesian hierarchical meta-regression model, which combines submodels, representing different types of data into a coherent analysis. Predictors of baseline risk are estimated from the individual data. Simultaneously, a bivariate random effects distribution of baseline risk and treatment effects is estimated from the combined individual and aggregate data. Therefore, given a subgroup of interest, the estimated treatment effect can be calculated through its correlation with baseline risk. We highlight different types of model parameters: those that are the focus of inference (e.g., treatment effect in a subgroup of patients) and those that are used to adjust for biases introduced by data collection processes (e.g., internal or external validity). The model is applied to a case study where RCTs' results, investigating efficacy in the treatment of diabetic foot problems, are extrapolated to groups of patients treated in medical routine and who were enrolled in a prospective cohort study. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
34. Design and analysis of three-arm trials with negative binomially distributed endpoints.
- Author
-
Mütze, Tobias, Munk, Axel, and Friede, Tim
- Subjects
IMMUNOSUPPRESSIVE agents ,BIOLOGICAL assay ,CLINICAL trials ,COMPUTER simulation ,EXPERIMENTAL design ,MAGNETIC resonance imaging ,MULTIPLE sclerosis ,PLACEBOS ,SYSTEM analysis ,SAMPLE size (Statistics) ,STATISTICAL models - Abstract
A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations will be derived. The performance of the proposed test will be assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
35. Simultaneous estimation of parameters in the bivariate Emax model.
- Author
-
Magnusdottir, Bergrun T. and Nyquist, Hans
- Subjects
CHAOS theory ,CLINICAL trials ,DOSE-effect relationship in pharmacology ,TYPE 2 diabetes ,RESEARCH bias ,STATISTICAL models - Abstract
In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
36. Analysis of linear transformation models with covariate measurement error and interval censoring.
- Author
-
Mandal, Soutrik, Wang, Suojin, and Sinha, Samiran
- Subjects
ERRORS-in-variables models ,LINEAR statistical models ,PROPORTIONAL hazards models ,INTERVAL measurement ,MEASUREMENT errors ,COMPUTER simulation ,HIV infections ,ANTI-HIV agents ,RESEARCH ,CLINICAL trials ,RESEARCH methodology ,EVALUATION research ,MEDICAL cooperation ,COMPARATIVE studies ,BLIND experiment ,RESEARCH funding ,PROBABILITY theory - Abstract
Among several semiparametric models, the Cox proportional hazard model is widely used to assess the association between covariates and the time-to-event when the observed time-to-event is interval-censored. Often, covariates are measured with error. To handle this covariate uncertainty in the Cox proportional hazard model with the interval-censored data, flexible approaches have been proposed. To fill a gap and broaden the scope of statistical applications to analyze time-to-event data with different models, in this paper, a general approach is proposed for fitting the semiparametric linear transformation model to interval-censored data when a covariate is measured with error. The semiparametric linear transformation model is a broad class of models that includes the proportional hazard model and the proportional odds model as special cases. The proposed method relies on a set of estimating equations to estimate the regression parameters and the infinite-dimensional parameter. For handling interval censoring and covariate measurement error, a flexible imputation technique is used. Finite sample performance of the proposed method is judged via simulation studies. Finally, the suggested method is applied to analyze a real data set from an AIDS clinical trial. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
37. Bayesian consensus-based sample size criteria for binomial proportions.
- Author
-
Joseph, Lawrence and Bélisle, Patrick
- Subjects
DECISION theory ,ACCOUNTING methods ,TECHNICAL specifications ,CONFIDENCE intervals ,SAMPLE size (Statistics) ,CLINICAL trials ,PROBABILITY theory ,STATISTICS - Abstract
Many sample size criteria exist. These include power calculations and methods based on confidence interval widths from a frequentist viewpoint, and Bayesian methods based on credible interval widths or decision theory. Bayesian methods account for the inherent uncertainty of inputs to sample size calculations through the use of prior information rather than the point estimates typically used by frequentist methods. However, the choice of prior density can be problematic because there will almost always be different appreciations of the past evidence. Such differences can be accommodated a priori by robust methods for Bayesian design, for example, using mixtures or ϵ-contaminated priors. This would then ensure that the prior class includes divergent opinions. However, one may prefer to report several posterior densities arising from a "community of priors," which cover the range of plausible prior densities, rather than forming a single class of priors. To date, however, there are no corresponding sample size methods that specifically account for a community of prior densities in the sense of ensuring a large-enough sample size for the data to sufficiently overwhelm the priors to ensure consensus across widely divergent prior views. In this paper, we develop methods that account for the variability in prior opinions by providing the sample size required to induce posterior agreement to a prespecified degree. Prototypic examples for one- and two-sample binomial outcomes are included. We compare sample sizes from criteria that consider a family of priors to those that would result from previous interval-based Bayesian criteria. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
38. Controlling false discovery proportion in identification of drug-related adverse events from multiple system organ classes.
- Author
-
Tan, Xianming, Liu, Guanghan F., Zeng, Donglin, Wang, William, Diao, Guoqing, Heyse, Joseph F., and Ibrahim, Joseph G.
- Subjects
CLINICAL drug trials ,ADVERSE health care events ,FALSE discovery rate ,SIGNAL detection ,RANDOM variables ,SAFETY ,STATISTICS ,COMPUTER simulation ,RESEARCH ,CLINICAL trials ,RESEARCH methodology ,EVALUATION research ,MEDICAL cooperation ,COMPARATIVE studies ,RESEARCH funding ,DRUG side effects ,STATISTICAL models ,DIAGNOSTIC errors - Abstract
Analyzing safety data from clinical trials to detect safety signals worth further examination involves testing multiple hypotheses, one for each observed adverse event (AE) type. There exists a certain hierarchical structure for these hypotheses due to the classification of the AEs into system organ classes, and these AEs are also likely correlated. Many approaches have been proposed to identify safety signals under the multiple testing framework and aim to control the false discovery rate (FDR). The FDR control concerns the expectation of the false discovery proportion (FDP). In practice, the control of the actual random variable FDP could be more relevant and has recently drawn much attention. In this paper, we propose a two-stage procedure for safety signal detection with direct control of the FDP, through a permutation-based approach for screening groups of AEs and a permutation-based approach for constructing simultaneous upper bounds on the false discovery proportion. Our simulation studies showed that this new approach controls the FDP. We demonstrate our approach using data sets derived from a drug clinical trial. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
39. Joint modeling of progression-free and overall survival and computation of correlation measures.
- Author
-
Meller, Matthias, Beyersmann, Jan, and Rufibach, Kaspar
- Subjects
PROGRESSION-free survival ,MATHEMATICAL statistics ,PEARSON correlation (Statistics) ,CLINICAL trials ,PARSIMONIOUS models ,COMPUTER simulation ,RESEARCH ,RESEARCH methodology ,PROGNOSIS ,EVALUATION research ,MEDICAL cooperation ,COMPARATIVE studies ,SURVIVAL analysis (Biometry) ,RESEARCH funding ,STATISTICAL models ,PROBABILITY theory - Abstract
In this paper, we derive the joint distribution of progression-free and overall survival as a function of transition probabilities in a multistate model. No assumptions on copulae or latent event times are needed and the model is allowed to be non-Markov. From the joint distribution, statistics of interest can then be computed. As an example, we provide closed formulas and statistical inference for Pearson's correlation coefficient between progression-free and overall survival in a parametric framework. The example is inspired by recent approaches to quantify the dependence between progression-free survival, a common primary outcome in Phase 3 trials in oncology and overall survival. We complement these approaches by providing methods of statistical inference while at the same time working within a much more parsimonious modeling framework. Our approach is completely general and can be applied to other measures of dependence. We also discuss extensions to nonparametric inference. Our analytical results are illustrated using a large randomized clinical trial in breast cancer. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
40. Combined criteria for dose optimisation in early phase clinical trials.
- Author
-
Alam, M. Iftakhar, Coad, D. Stephen, and Bogacka, Barbara
- Subjects
CLINICAL trials ,TECHNICAL specifications - Abstract
This paper aims to investigate whether any bridge is possible between so-called best intention and D-optimum designs. It introduces combined criteria for dose optimisation in seamless phase I/II adaptive clinical trials. Each of the optimality criteria considers efficacy and toxicity as endpoints and is based on the probability of a successful outcome and on the determinant of the Fisher information matrix for estimation of the dose-response parameters. In addition, one of the criteria incorporates penalties for choosing a toxic or inefficacious dose. Starting with the lowest dose, the adaptive design selects the dose for each subsequent cohort that maximises the respective defined criterion. The methodology is illustrated with a dose-response model that assumes trinomial responses. Simulation studies show that the method is capable of identifying the optimal dose accurately without exposing many patients to toxic doses. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
41. Frequentist operating characteristics of Bayesian optimal designs via simulation.
- Author
-
Zhang, Yifan, Trippa, Lorenzo, and Parmigiani, Giovanni
- Subjects
DESIGN ,DYNAMIC programming ,COMPUTER simulation ,EXPERIMENTAL design ,RESEARCH ,CLINICAL trials ,RESEARCH methodology ,EVALUATION research ,MEDICAL cooperation ,COMPARATIVE studies ,RESEARCH funding ,PROBABILITY theory - Abstract
Bayesian adaptive designs have become popular because of the possibility of increasing the number of patients treated with more beneficial treatments, while still providing sufficient evidence for treatment efficacy comparisons. It can be essential, for regulatory and other purposes, to conduct frequentist analyses both before and after a Bayesian adaptive trial, and these remain challenging. In this paper, we propose a general simulation-based approach to compare frequentist designs with Bayesian adaptive designs based on frequentist criteria such as power and to compute valid frequentist p-values. We illustrate our approach by comparing the power of an equal randomization (ER) design with that of an optimal Bayesian adaptive (OBA) design. The Bayesian design considered here is the dynamic programming solution of the optimization of a specific utility function defined by the number of successes in a patient horizon, including patients whose treatment will be affected by the trial's results after the end of the trial. While the power of an ER design depends on treatment efficacy and the sample size, the power of the OBA design also depends on the patient horizon size. Our results quantify the trade-off between power and the optimal assignment of patients to treatments within the trial. We show that, for large patient horizons, the two criteria are in agreement, while for small horizons, differences can be substantial. This has implications for precision medicine, where patient horizons are decreasing as a result of increasing stratification of patients into subpopulations defined by molecular markers. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
42. Bayesian hierarchical EMAX model for dose-response in early phase efficacy clinical trials.
- Author
-
Gajewski, Byron J., Meinzer, Caitlyn, Berry, Scott M., Rockswold, Gaylan L., Barsan, William G., Korley, Frederick K., and Martin, Renee' H.
- Subjects
CLINICAL trials ,DOSE-response relationship in biochemistry ,BRAIN injuries ,EXPERIMENTAL design ,RESEARCH ,HYPERBARIC oxygenation ,MEDICAL cooperation ,DOSE-effect relationship in pharmacology ,RESEARCH funding ,STATISTICAL models ,LONGITUDINAL method ,PROBABILITY theory - Abstract
A primary goal of a phase II dose-ranging trial is to identify a correct dose before moving forward to a phase III confirmatory trial. A correct dose is one that is actually better than control. A popular model in phase II is an independent model that puts no structure on the dose-response relationship. Unfortunately, the independent model does not efficiently use information from related doses. One very successful alternate model improves power using a pre-specified dose-response structure. Past research indicates that EMAX models are broadly successful and therefore attractive for designing dose-response trials. However, there may be a slight risk of nonmonotone trends that needs to be addressed when planning a clinical trial design. We propose to add hierarchical parameters to the EMAX model. The added layer allows information about the treatment effect in one dose to be "borrowed" when estimating the treatment effect in another dose. This is referred to as the hierarchical EMAX model. Our paper compares three different models (independent, EMAX, and hierarchical EMAX) and two different design strategies. The first design considered is Bayesian with a fixed trial design, and it has a fixed schedule for randomization. The second design is Bayesian but adaptive, and it uses response adaptive randomization. In this article, a randomized trial of patients with severe traumatic brain injury is provided as a motivating example. [ABSTRACT FROM AUTHOR]
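The EMAX dose-response structure the abstract builds on is the standard three-parameter curve. A minimal sketch of that curve (the hierarchical extension, which lets dose-level effects vary around it, is not reproduced here):

```python
import numpy as np

def emax_response(dose, e0, emax, ed50):
    """Three-parameter EMAX dose-response curve:
        E(d) = E0 + Emax * d / (ED50 + d)
    E0 is the placebo response, Emax the maximal effect over placebo,
    and ED50 the dose yielding half of Emax. The paper's hierarchical
    EMAX model adds dose-level parameters around this curve so that
    information is borrowed across doses."""
    dose = np.asarray(dose, dtype=float)
    return e0 + emax * dose / (ed50 + dose)
```

The curve is monotone in dose, which is why a hierarchical layer is useful when a slight risk of nonmonotone trends must be accommodated.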
- Published
- 2019
- Full Text
- View/download PDF
43. Implementing unequal randomization in clinical trials with heterogeneous treatment costs.
- Author
-
Sverdlov, Oleksandr and Ryeznik, Yevgen
- Subjects
CLINICAL trials, THERAPEUTICS, INVESTIGATIONAL therapies, INFERENTIAL statistics, TRIAL practice - Abstract
Equal randomization has been a popular choice in clinical trial practice. However, in trials with heterogeneous variances and/or variable treatment costs, as well as in settings where maximization of every trial participant's benefit is an important design consideration, optimal allocation proportions may be unequal across study treatment arms. In this paper, we investigate optimal allocation designs minimizing study cost under statistical efficiency constraints for parallel group clinical trials comparing several investigational treatments against the control. We show theoretically that equal allocation designs may be suboptimal, and unequal allocation designs can provide higher statistical power for the same budget or result in a smaller cost for the same level of power. We also show how optimal allocation can be implemented in practice by means of restricted randomization procedures and how to perform statistical inference following these procedures, using invoked population-based or randomization-based approaches. Our results provide further support to some previous findings in the literature that unequal randomization designs can be cost efficient and can be successfully implemented in practice. We conclude that the choice of the target allocation, the randomization procedure, and the statistical methodology for data analysis is an essential component in ensuring valid, powerful, and robust clinical trial results. [ABSTRACT FROM AUTHOR]
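The flavor of the cost-efficiency result can be shown with the classical two-ingredient Neyman-type rule: allocate each arm in proportion to its outcome standard deviation divided by the square root of its per-patient cost. This is a textbook simplification offered as a hedged sketch, not the paper's multi-arm optimization under efficiency constraints.

```python
import numpy as np

def cost_optimal_proportions(sigmas, costs):
    """Allocation proportions rho_k proportional to sigma_k / sqrt(c_k),
    the classical rule minimizing total cost for a fixed precision when
    per-patient costs c_k and outcome standard deviations sigma_k differ
    across arms. Simplified illustration of why unequal allocation can
    beat equal allocation for the same budget."""
    sigmas = np.asarray(sigmas, dtype=float)
    costs = np.asarray(costs, dtype=float)
    w = sigmas / np.sqrt(costs)
    return w / w.sum()
```

With equal variances and equal costs the rule recovers equal randomization; a cheaper or more variable arm receives more patients.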
- Published
- 2019
- Full Text
- View/download PDF
44. Quantile regression and empirical likelihood for the analysis of longitudinal data with monotone missing responses due to dropout, with applications to quality of life measurements from clinical trials.
- Author
-
Lv, Yang, Qin, Guoyou, Zhu, Zhongyi, and Tu, Dongsheng
- Subjects
QUANTILE regression, QUALITY of life measurement, DATA analysis, CLINICAL trials, REGRESSION analysis - Abstract
The analysis of quality of life (QoL) data can be challenging due to the skewness of responses and the presence of missing data. In this paper, we propose a new weighted quantile regression method for estimating the conditional quantiles of QoL data with responses missing at random. The proposed method makes use of the correlation information within the same subject from an auxiliary mean regression model to enhance the estimation efficiency and takes the missing data mechanism into account. The asymptotic properties of the proposed estimator are studied, and simulations are conducted to evaluate its performance. The proposed method has also been applied to the analysis of the QoL data from a clinical trial on early breast cancer, which motivated this study. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
45. A Bayesian adaptive marker-stratified design for molecularly targeted agents with customized hierarchical modeling.
- Author
-
Zang, Yong, Guo, Beibei, Han, Yan, Cao, Sha, and Zhang, Chi
- Subjects
THERAPEUTICS, MEDICAL care surveys, CLINICAL trials - Abstract
It is well known that the treatment effect of a molecularly targeted agent (MTA) may vary dramatically, depending on each patient's biomarker profile. Therefore, for a clinical trial evaluating an MTA, it is more reasonable to evaluate its treatment effect within different marker subgroups rather than evaluating the average treatment effect for the overall population. The marker-stratified design (MSD) provides a useful tool to evaluate the subgroup treatment effects of MTAs. Under the Bayesian framework, the beta-binomial model is conventionally used under the MSD to estimate the response rate and test the hypothesis. However, this conventional model ignores the fact that the biomarker used in the MSD is, in general, predictive only for the MTA. The response rates for the standard treatment can be approximately consistent across different subgroups stratified by the biomarker. In this paper, we propose a Bayesian hierarchical model that takes this biomarker information into consideration. The proposed model uses a hierarchical prior to borrow strength across different subgroups of patients receiving the standard treatment and, therefore, improve the efficiency of the design. Prior informativeness is determined by solving a "customized" equation reflecting the physician's professional opinion. We developed a Bayesian adaptive design based on the proposed hierarchical model to guide the treatment allocation and test the subgroup treatment effect as well as the predictive marker effect. Simulation studies and a real trial application demonstrate that the proposed design yields desirable operating characteristics and outperforms the existing designs. [ABSTRACT FROM AUTHOR]
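The conventional beta-binomial analysis the abstract contrasts with is a one-line conjugate update applied independently within each marker subgroup. A minimal sketch of that baseline (the paper's hierarchical prior, which shares strength across standard-treatment subgroups, is not reproduced):

```python
def beta_binomial_posterior(successes, n, a=1.0, b=1.0):
    """Conjugate beta-binomial update: a Beta(a, b) prior on a response
    rate becomes Beta(a + successes, b + n - successes) after observing
    `successes` responders out of `n` patients. Under the conventional
    marker-stratified analysis this is applied independently in each
    marker subgroup."""
    return a + successes, b + n - successes

def posterior_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)
```

Because the standard-treatment response rates are approximately consistent across subgroups, updating them independently wastes information, which is the inefficiency the hierarchical prior addresses.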
- Published
- 2019
- Full Text
- View/download PDF
46. Randomised trials with provision for early stopping for benefit (or harm): The impact on the estimated treatment effect.
- Author
-
Walter, S.D., Guyatt, G.H., Bassler, D., Briel, M., Ramsay, T., and Han, H.D.
- Subjects
THERAPEUTICS, FALSE positive error, ERROR rates - Abstract
Stopping rules for clinical trials are primarily intended to control Type I error rates if interim analyses are planned, but less is known about the impact that potential stopping has on estimating treatment benefit. In this paper, we derive analytic expressions for (1) the over-estimation of benefit in studies that stop early, (2) the under-estimation of benefit in completed studies, and (3) the overall bias in studies with a stopping rule. We also examine the probability of stopping early and the situation in meta-analyses. Numerical evaluations show that the greatest concern is with over-estimation of benefit in stopped studies, especially if the probability of stopping early is small. The overall bias is usually less than 10% of the true benefit, and under-estimation in completed studies is also typically small. The probability of stopping depends on the true treatment effect and sample size. The magnitude of these effects depends on the particular rule adopted, but we show that the maximum overall bias is the same for all stopping rules. We also show that an essentially unbiased meta-analysis estimate of benefit can be recovered, even if some component studies have stopping rules. We illustrate these methods using data from three clinical trials. The results confirm our earlier empirical work on clinical trials. Investigators may consult our numerical results for guidance on potential mis-estimation and bias in the treatment effect if a stopping rule is adopted. Particular concern is warranted in studies that actually stop early, where interim results may be quite misleading. [ABSTRACT FROM AUTHOR]
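The over-estimation in stopped studies and under-estimation in completed studies can be reproduced with a small Monte Carlo experiment. This is a hedged illustration under simple assumptions (unit-variance normal outcomes, a single interim look, a fixed z-boundary), not the paper's analytic expressions or its particular stopping rules.

```python
import numpy as np

def stopping_bias(true_effect, n_interim, n_final, z_stop=2.5,
                  n_sims=20_000, seed=1):
    """Monte Carlo illustration of estimation bias under early stopping
    for benefit at one interim look. Outcomes are N(true_effect, 1);
    the trial stops if the interim z-statistic exceeds z_stop.
    Returns mean estimates in stopped trials, completed trials, and
    overall (across all trials)."""
    rng = np.random.default_rng(seed)
    est_stopped, est_completed = [], []
    for _ in range(n_sims):
        data = rng.normal(true_effect, 1.0, n_final)
        interim_mean = data[:n_interim].mean()
        if interim_mean * np.sqrt(n_interim) > z_stop:
            est_stopped.append(interim_mean)   # estimate at stopping
        else:
            est_completed.append(data.mean())  # estimate at completion
    return (np.mean(est_stopped), np.mean(est_completed),
            np.mean(est_stopped + est_completed))
```

Trials that stop early condition on a large interim estimate and so over-state the benefit, while completed trials are slightly pulled down, matching the abstract's qualitative conclusions.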
- Published
- 2019
- Full Text
- View/download PDF
47. A Bayesian nonparametric causal inference model for synthesizing randomized clinical trial and real-world evidence.
- Author
-
Wang, Chenguang and Rosner, Gary L.
- Subjects
CLINICAL trials, CAUSAL models, ACE inhibitors, CONGESTIVE heart failure - Abstract
With the wide availability of various real-world data (RWD), there is an increasing interest in synthesizing information from both randomized clinical trials and RWD for health-care decision making. The task of addressing study-specific heterogeneities is one of the most difficult challenges in synthesizing data from disparate sources. Bayesian hierarchical models with nonparametric extensions provide a powerful and convenient platform that formalizes the information borrowing strength across the sources. In this paper, we propose a propensity score-based Bayesian nonparametric Dirichlet process mixture model that summarizes subject-level information from randomized and registry studies to draw inference on the causal treatment effect. Simulation studies are conducted to evaluate the model performance under different scenarios. In addition, we demonstrate the proposed method using data from a clinical study on angiotensin converting enzyme inhibitor for treating congestive heart failure. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
48. Level of evidence for promising subgroup findings: The case of trends and multiple subgroups.
- Author
-
Tanniou, Julien, Smid, Sanne C., Teerenstra, Steven, Roes, Kit C.B., and van der Tweel, Ingeborg
- Subjects
FALSE positive error, CLINICAL trials, THERAPEUTICS, EVIDENCE - Abstract
Subgroup analyses are an essential part of fully understanding the complete results from confirmatory clinical trials. However, they come with substantial methodological challenges. In case no statistically significant overall treatment effect is found in a clinical trial, this does not necessarily indicate that no patients will benefit from treatment. Subgroup analyses could be conducted to investigate whether a treatment might still be beneficial for particular subgroups of patients. Assessment of the level of evidence associated with such subgroup findings is of primary importance, as it may form the basis for performing a new clinical trial or even drawing the conclusion that a specific patient group could benefit from a new therapy. Previous research addressed the overall type I error and the power associated with a single subgroup finding for continuous outcomes and suitable replication strategies. The current study aims at investigating two scenarios as part of a nonconfirmatory strategy in a trial with dichotomous outcomes: (a) when a covariate of interest is represented by ordered subgroups, e.g., in case of biomarkers, and thus a trend can be studied that may reflect an underlying mechanism, and (b) when multiple covariates, and thus multiple subgroups, are investigated at the same time. Based on simulation studies, this paper assesses the credibility of subgroup findings in overall nonsignificant trials and provides practical recommendations for evaluating the strength of evidence of subgroup findings in these settings. [ABSTRACT FROM AUTHOR]
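For scenario (a), one standard frequentist way to study a trend in response rates across ordered subgroups is a Cochran-Armitage-type trend statistic. The sketch below is a hedged illustration under the assumption of equally spaced subgroup scores; it is not the paper's simulation framework, which targets treatment-effect trends rather than raw response rates.

```python
import numpy as np

def cochran_armitage_z(successes, totals, scores=None):
    """Cochran-Armitage trend z-statistic for proportions across ordered
    groups (e.g. biomarker-defined subgroups). Positive values indicate
    an increasing trend. Scores default to 0, 1, 2, ... (equal spacing),
    an illustrative assumption."""
    x = np.asarray(successes, dtype=float)
    n = np.asarray(totals, dtype=float)
    s = (np.arange(len(x), dtype=float) if scores is None
         else np.asarray(scores, dtype=float))
    p_bar = x.sum() / n.sum()                      # pooled response rate
    t = np.sum(s * (x - n * p_bar))                # trend numerator
    var = p_bar * (1 - p_bar) * (np.sum(n * s**2)
                                 - np.sum(n * s)**2 / n.sum())
    return t / np.sqrt(var)
```

A clearly increasing pattern of subgroup response rates yields a large positive z, while flat rates yield a statistic near zero.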
- Published
- 2019
- Full Text
- View/download PDF
49. Alpha spending for historical versus surveillance Poisson data with CMaxSPRT.
- Author
-
Silva, Ivair R., Lopes, Wilson M., Dias, Philipe, and Yih, W. Katherine
- Abstract
Sequential analysis hypothesis testing is now an important tool for postmarket drug and vaccine safety surveillance. When the number of adverse events accruing in time is assumed to follow a Poisson distribution, and if the baseline Poisson rate is assessed only with uncertainty, the conditional maximized sequential probability ratio test, CMaxSPRT, is a formal solution. CMaxSPRT is based on comparing monitored data with historical matched data, and it was primarily developed under a flat signaling threshold. This paper demonstrates that CMaxSPRT can be performed under nonflat thresholds too. We pose the discussion in the light of the alpha spending approach. In addition, we offer a rule of thumb for establishing the best shape of the signaling threshold in the sense of minimizing expected time to signal and expected sample size. An example involving surveillance for adverse events after influenza vaccination is used to illustrate the method. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
50. Half blind superiority tests for clinical trials of anti‐infective drugs.
- Author
-
Follmann, Dean, Brittain, Erica, and Lumbard, Keith
- Abstract
This paper introduces a test of superiority of new anti‐infective drug B over comparator drug A based on a randomized clinical trial. This test can be used to demonstrate assay (trial) sensitivity for noninferiority trials and rigorously tailor drug choice for individual patients. Our approach uses specialized baseline covariates XA, XB, which should predict the benefits of drug A and drug B, respectively. Using a response surface model for the treatment effect, we test for superiority at the (XA, XB) point that is most likely to show superiority. We identify this point based on estimates from a novel half‐blind pseudo likelihood, where we augment a blinded likelihood (mixed over the treatment indicator) with likelihoods for the overall success rates for drug A and drug B (mixed over XA, XB). The augmentation results in much better estimates than those based on the mixed blinded likelihood alone but, interestingly, the estimates almost behave as if they were based on fully blinded data. We also develop an analogous univariate method using XA for settings where XB has little variation. Permutation methods are used for testing. If the "half‐blind" test rejects, pointwise confidence intervals can be used to identify patients who would benefit from drug B. We compare the new tests to other methods with an example and via simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF