11 results for "Wason, James"
Search Results
2. Point estimation following a two-stage group sequential trial.
- Author
- Grayling, Michael J and Wason, James MS
- Subjects
- *FIXED-point estimation, *OPTIMAL stopping (Mathematical statistics)
- Abstract
Repeated testing in a group sequential trial can result in bias in the maximum likelihood estimate of the unknown parameter of interest. Many authors have therefore proposed adjusted point estimation procedures, which attempt to reduce such bias. Here, we describe nine possible point estimators within a common general framework for a two-stage group sequential trial. We then contrast their performance in five example trial settings, examining their conditional and marginal biases and residual mean square error. Focusing on the case of a trial with a single interim analysis, we give additional new results that aid the determination of the estimators. Our findings demonstrate that the uniform minimum variance unbiased estimator, whilst being marginally unbiased, often has large conditional bias and residual mean square error. If one is concerned solely about inference on progression to the second trial stage, the conditional uniform minimum variance unbiased estimator may be preferred. Two estimators, termed mean adjusted estimators, which attempt to reduce the marginal bias, arguably perform best in terms of the marginal residual mean square error. In all, one should choose an estimator accounting for its conditional and marginal biases and residual mean square error; the most suitable estimator will depend on relative desires to minimise each of these factors. If one cares solely about the conditional and marginal biases, the conditional maximum likelihood estimate may be preferred provided lower and upper stopping boundaries are included. If the conditional and marginal residual mean square error are also of concern, two mean adjusted estimators perform well. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
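The bias described in the abstract above is straightforward to see by simulation. Below is a minimal Python sketch, not taken from the paper, of a two-stage single-arm group sequential trial with a normally distributed outcome: the trial stops at the interim if the stage-1 z-statistic crosses an assumed efficacy boundary, and the marginal bias of the naive maximum likelihood estimate is estimated by Monte Carlo. The boundary, group size, and true effect are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): marginal bias of the
# naive MLE after a two-stage group sequential trial with early stopping.
import numpy as np

rng = np.random.default_rng(1)

n_per_stage = 50        # assumed group size per stage
sigma = 1.0             # assumed known standard deviation
mu_true = 0.2           # assumed true mean effect
upper_boundary = 2.0    # assumed upper efficacy boundary on the stage-1 z-scale
n_sims = 50_000

estimates = np.empty(n_sims)
for i in range(n_sims):
    stage1 = rng.normal(mu_true, sigma, n_per_stage)
    z1 = stage1.mean() / (sigma / np.sqrt(n_per_stage))
    if z1 >= upper_boundary:
        # trial stops at the interim: the naive MLE uses stage-1 data only
        estimates[i] = stage1.mean()
    else:
        stage2 = rng.normal(mu_true, sigma, n_per_stage)
        estimates[i] = np.concatenate([stage1, stage2]).mean()

print(f"marginal bias of naive MLE: {estimates.mean() - mu_true:+.4f}")
```

Under these assumptions the naive estimate is biased away from the true value, which is the behaviour the adjusted estimators compared in the paper are designed to reduce.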
3. Employing a latent variable framework to improve efficiency in composite endpoint analysis.
- Author
- McMenamin, Martina, Barrett, Jessica K, Berglind, Anna, and Wason, James MS
- Subjects
- LATENT variables, SYSTEMIC lupus erythematosus, LOGISTIC regression analysis, CHRONIC diseases
- Abstract
Composite endpoints that combine multiple outcomes on different scales are common in clinical trials, particularly in chronic conditions. In many of these cases, patients will have to cross a predefined responder threshold in each of the outcomes to be classed as a responder overall. One instance of this occurs in systemic lupus erythematosus, where the responder endpoint combines two continuous, one ordinal and one binary measure. The overall binary responder endpoint is typically analysed using logistic regression, resulting in a substantial loss of information. We propose a latent variable model for the systemic lupus erythematosus endpoint, which assumes that the discrete outcomes are manifestations of latent continuous measures and can proceed to jointly model the components of the composite. We perform a simulation study and find that the method offers large efficiency gains over the standard analysis, the magnitude of which is highly dependent on the components driving response. Bias is introduced when joint normality assumptions are not satisfied, which we correct for using a bootstrap procedure. The method is applied to the Phase IIb MUSE trial in patients with moderate to severe systemic lupus erythematosus. We show that it estimates the treatment effect 2.5 times more precisely, offering a 60% reduction in required sample size. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
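The efficiency argument in the abstract above rests on the information lost when continuous measures are dichotomised into responder status. The Python sketch below is not the paper's latent variable model for the four-component systemic lupus erythematosus endpoint; it is a single-outcome illustration, under assumed effect size, responder threshold, and sample size, of how much power a binary responder analysis gives up relative to analysing the continuous measure directly.

```python
# Minimal sketch (illustrative assumptions, not the paper's model): power lost
# by dichotomising a single continuous outcome into a binary responder endpoint.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

n_per_arm = 100    # assumed sample size per arm
effect = 0.3       # assumed standardised treatment effect
threshold = 0.5    # assumed responder threshold on the continuous scale
n_sims = 5_000

reject_cont = reject_bin = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(effect, 1.0, n_per_arm)

    # continuous analysis: two-sample t-test
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        reject_cont += 1

    # dichotomised analysis: normal-approximation z-test for two proportions
    p_t, p_c = (treated > threshold).mean(), (control > threshold).mean()
    p_pool = (p_t + p_c) / 2
    se = np.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
    if 2 * stats.norm.sf(abs(p_t - p_c) / se) < 0.05:
        reject_bin += 1

print(f"power, continuous analysis:   {reject_cont / n_sims:.2f}")
print(f"power, dichotomised analysis: {reject_bin / n_sims:.2f}")
```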
4. A latent variable model for improving inference in trials assessing the effect of dose on toxicity and composite efficacy endpoints.
- Author
- Wason, James MS and Seaman, Shaun R
- Subjects
- *LATENT variables, *CONFIDENCE intervals, *MONTE Carlo method, *RESEARCH, *CLINICAL trials, *HETEROCYCLIC compounds, *RESEARCH methodology, *ANTINEOPLASTIC agents, *MEDICAL cooperation, *EVALUATION research, *COMPARATIVE studies, *DOSE-effect relationship in pharmacology, *RESEARCH funding, *TUMORS, *ONCOLOGY
- Abstract
It is often of interest to explore how dose affects the toxicity and efficacy properties of a novel treatment. In oncology, efficacy is often assessed through response, which is defined by a patient having no new tumour lesions and their tumour size shrinking by 30%. Usually response and toxicity are analysed as binary outcomes in early phase trials. Methods have been proposed to improve the efficiency of analysing response by utilising the continuous tumour size information instead of dichotomising it. However, these methods do not allow for toxicity or for different doses. Motivated by a phase II trial testing multiple doses of a treatment against placebo, we propose a latent variable model that can estimate the probability of response and no toxicity (or other related outcomes) for different doses. We assess the confidence interval coverage and efficiency properties of the method, compared to methods that do not use the continuous tumour size, in a simulation study and the real study. The coverage is close to nominal when model assumptions are met, although can be below nominal when the model is misspecified. Compared to methods that treat response as binary, the method has confidence intervals with 30-50% narrower widths. The method adds considerable efficiency but care must be taken that the model assumptions are reasonable. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
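The quantity targeted in the abstract above, the probability of response and no toxicity at a given dose, has a simple closed form once latent continuous variables are assumed for tumour-size change and toxicity. The sketch below uses made-up parameter values and a plain bivariate normal latent structure; it illustrates the kind of probability such a latent variable model estimates, not the paper's dose-response model or its fitted values.

```python
# Minimal sketch (assumed parameter values, not the paper's fitted model):
# probability of "response and no toxicity" from a latent bivariate normal,
# where response means a tumour-size change of -30% or better and toxicity is
# a dichotomised latent continuous variable.
import numpy as np
from scipy.stats import multivariate_normal, norm

mu_change, sd_change = -0.15, 0.35   # assumed mean/SD of % change in tumour size
tox_threshold = 1.0                  # assumed latent toxicity threshold (N(0,1) scale)
rho = -0.3                           # assumed latent correlation (more shrinkage, more toxicity)

# standardise the response cut-off of a -30% change
z_resp = (-0.30 - mu_change) / sd_change

cov = [[1.0, rho], [rho, 1.0]]
# P(response and no toxicity) = P(Z_change <= z_resp, Z_tox <= tox_threshold)
p_resp_no_tox = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([z_resp, tox_threshold])

print(f"P(response)               = {norm.cdf(z_resp):.3f}")
print(f"P(no toxicity)            = {norm.cdf(tox_threshold):.3f}")
print(f"P(response & no toxicity) = {p_resp_no_tox:.3f}")
```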
5. Two-stage phase II oncology designs using short-term endpoints for early stopping.
- Author
- Kunz, Cornelia U., Wason, James M. S., and Kieser, Meinhard
- Subjects
- *ONCOLOGY, *TUMORS, *TUMOR treatment, *ANGIOSARCOMA, *MEDICINE, *PATIENTS, *BIOLOGICAL assay, *CLINICAL trials, *EXPERIMENTAL design, *RESEARCH funding, *SARCOMA, *TIME, *SAMPLE size (Statistics), *TREATMENT effectiveness
- Abstract
Phase II oncology trials are conducted to evaluate whether the anti-tumour activity of a new treatment is promising enough to warrant further investigation. The most commonly used approach in this context is a two-stage single-arm design with binary endpoint. As for all designs with an interim analysis, its efficiency strongly depends on the relation between the recruitment rate and the follow-up time required to measure the patients' outcomes. Usually, recruitment is paused once the first-stage sample size is reached, until the outcomes of all first-stage patients are available. This may lead to a considerable increase in trial length and, with it, a delay in the drug development process. We propose a design where an intermediate endpoint is used in the interim analysis to decide whether or not the study is continued with a second stage. Optimal and minimax versions of this design are derived. The characteristics of the proposed design in terms of type I error rate, power, maximum and expected sample size as well as trial duration are investigated. Guidance is given on how to select the most appropriate design. Application is illustrated by a phase II oncology trial in patients with advanced angiosarcoma, which motivated this research. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
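For context, the benchmark that the abstract above refers to, a two-stage single-arm design with a binary endpoint, can be evaluated with a short calculation. The Python sketch below computes the type I error rate, power, and expected sample size under the null for an assumed set of design parameters; it is illustrative only and does not implement the proposed intermediate-endpoint design or its optimal and minimax versions.

```python
# Minimal sketch (assumed design parameters, not the paper's proposed design):
# operating characteristics of a standard two-stage single-arm binary design.
from scipy.stats import binom

p0, p1 = 0.20, 0.40   # assumed uninteresting / target response rates
n1, r1 = 13, 3        # stage 1: stop for futility if <= r1 responses out of n1
n, r = 43, 12         # overall: reject H0 if total responses exceed r out of n
n2 = n - n1

def prob_reject(p):
    """P(continue past stage 1 and exceed r responses in total) under rate p."""
    return sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n2, p)
               for x1 in range(r1 + 1, n1 + 1))

pet0 = binom.cdf(r1, n1, p0)    # probability of early termination under H0
en0 = n1 + (1 - pet0) * n2      # expected sample size under H0

print(f"type I error: {prob_reject(p0):.3f}")
print(f"power:        {prob_reject(p1):.3f}")
print(f"P(early stop | p0) = {pet0:.3f},  E[N | p0] = {en0:.1f}")
```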
6. A review of statistical designs for improving the efficiency of phase II studies in oncology.
- Author
- Wason, James M. S. and Jaki, Thomas
- Subjects
- *EXPERIMENTAL design, *ONCOLOGY research, *CLINICAL drug trials, *CANCER treatment, *DRUG efficacy, *MEDICAL research, *ONCOLOGY, *TUMORS
- Abstract
Phase II oncology trials are carried out to assess whether an experimental anti-cancer treatment shows sufficient signs of effectiveness to justify being tested in a phase III trial. Traditionally such trials are conducted as single-arm studies using a binary response rate as the primary endpoint. In this article, we review and contrast alternative approaches for such studies. Each approach uses only data that are necessary for the traditional analysis. We consider two broad classes of methods: ones that aim to improve the efficiency using novel design ideas, such as multi-stage and multi-arm multi-stage designs; and ones that aim to improve the analysis, by making better use of the richness of the data that is ignored in the traditional analysis. The former class of methods provides considerable gains in efficiency but also increases the administrative and logistical issues in running the trial. The second class consists of viable alternatives to the standard analysis that come with little additional requirements and provide considerable gains in efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
7. Group sequential clinical trial designs for normally distributed outcome variables.
- Author
- Grayling, Michael J., Wason, James M. S., and Mander, Adrian P.
- Subjects
- *CLINICAL trials, *CARTOGRAPHY, *STATISTICS
- Abstract
In a group sequential clinical trial, accumulated data are analyzed at numerous time points to allow early decisions about a hypothesis of interest. These designs have historically been recommended for their ethical, administrative, and economic benefits. In this article, we first discuss a collection of new commands for computing the stopping boundaries and required group size of various classical group sequential designs, assuming a normally distributed outcome variable. Then, we demonstrate how the performance of several designs can be compared graphically. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
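The boundary calculations behind commands of this kind can be illustrated in a few lines. The sketch below is in Python rather than Stata and handles only the simplest case, a one-sided two-stage Pocock-type boundary with equal group sizes, known variance, and an assumed significance level; it shows the underlying computation rather than reproducing the article's commands.

```python
# Minimal sketch (assumptions; not the commands described in the article, which
# are for Stata): constant Pocock-type boundary for a one-sided two-stage group
# sequential test of a normally distributed outcome with equal group sizes.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import multivariate_normal, norm

alpha = 0.025                 # assumed one-sided type I error rate
rho = np.sqrt(0.5)            # corr(Z1, Z2) with equal group sizes
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def overall_type1(c):
    # P(reject at stage 1 or stage 2 | H0) = 1 - P(Z1 < c, Z2 < c)
    return 1.0 - joint.cdf([c, c])

c = brentq(lambda x: overall_type1(x) - alpha, 1.0, 4.0, xtol=1e-6)
print(f"Pocock-type boundary:        c = {c:.3f}")
print(f"fixed-sample critical value:     {norm.ppf(1 - alpha):.3f}")
```

The boundary is larger than the fixed-sample critical value, which is the familiar price of allowing an early look at the data.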
8. A multi-stage drop-the-losers design for multi-arm clinical trials.
- Author
- Wason, James, Stallard, Nigel, Bowden, Jack, and Jennison, Christopher
- Subjects
- *DRUG development, *SAMPLE size (Statistics), *MEDICAL statistics, *CLINICAL trials, *RANDOM variables, *ANTI-infective agents, *EXPERIMENTAL design, *HETEROCYCLIC compounds, *HIV infections, *INSULIN resistance, *RESEARCH funding, *TREATMENT effectiveness, *PATIENT selection
- Abstract
Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
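Two properties emphasised in the abstract above, the fixed sample size and the need for critical values that account for selection, can both be seen in a small simulation. The Python sketch below uses assumed settings (number of arms, stage sizes, unit variance) and is not the paper's design calculation; it shows the mechanics of dropping the losers at the interim and how a naive unadjusted test of the selected arm inflates the type I error under the global null.

```python
# Minimal sketch (assumed settings, not the paper's design calculations):
# mechanics of a two-stage drop-the-losers trial.
import numpy as np

rng = np.random.default_rng(3)

k_arms = 3           # assumed number of experimental arms
n1, n2 = 30, 60      # assumed per-arm sample sizes, stage 1 and stage 2
n_sims = 20_000
naive_crit = 1.96    # unadjusted two-sided 5% critical value
rejections = 0

for _ in range(n_sims):
    # stage 1: all arms and control, under the global null (all effects zero)
    stage1_means = rng.normal(0.0, 1.0 / np.sqrt(n1), k_arms + 1)  # index 0 = control
    best = 1 + np.argmax(stage1_means[1:])                         # drop the losers

    # stage 2: only the selected arm and control continue
    sel = (n1 * stage1_means[best] + n2 * rng.normal(0.0, 1.0 / np.sqrt(n2))) / (n1 + n2)
    ctl = (n1 * stage1_means[0] + n2 * rng.normal(0.0, 1.0 / np.sqrt(n2))) / (n1 + n2)
    z = (sel - ctl) / np.sqrt(2.0 / (n1 + n2))
    rejections += abs(z) > naive_crit

total_n = (k_arms + 1) * n1 + 2 * n2
print(f"fixed total sample size: {total_n}")
print(f"naive type I error under the global null: {rejections / n_sims:.3f}")
```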
9. Some recommendations for multi-arm multi-stage trials.
- Author
- Wason, James, Magirr, Dominic, Law, Martin, and Jaki, Thomas
- Subjects
- *DRUG development, *DOSE-effect relationship in pharmacology, *TREATMENT effectiveness, *ANTIRETROVIRAL agents, *COMBINATION drug therapy, *CLINICAL drug trials, *CLINICAL trials, *EXPERIMENTAL design, *RESEARCH funding
- Abstract
Multi-arm multi-stage designs can improve the efficiency of the drug-development process by evaluating multiple experimental arms against a common control within one trial. This reduces the number of patients required compared to a series of trials testing each experimental arm separately against control. By allowing for multiple stages, experimental treatments can be eliminated early from the study if they are unlikely to be significantly better than control. Using the TAILoR trial as a motivating example, we explore a broad range of statistical issues related to multi-arm multi-stage trials, including a comparison of different ways to power a multi-arm multi-stage trial; choosing the allocation ratio to the control group compared to other experimental arms; the consequences of adding additional experimental arms during a multi-arm multi-stage trial, and how one might control the type-I error rate when this is necessary; and modifying the stopping boundaries of a multi-arm multi-stage design to account for unknown variance in the treatment outcome. Multi-arm multi-stage trials represent a large financial investment, and so considering their design carefully is important to ensure efficiency and that they have a good chance of succeeding. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
10. HLA associations in South Asian multiple sclerosis.
- Author
- Pandit, Lekha, Malli, Chaithra, Singhal, Bhim, Wason, James, Malik, Omar, Sawcer, Stephen, Ban, Maria, D’Cunha, Anitha, and Mustafa, Sharik
- Subjects
- HLA histocompatibility antigens, POLYMERASE chain reaction, DNA polymerases, THERMOCYCLING, MULTIPLE sclerosis, PATIENTS
- Abstract
Background: Previous efforts to identify Human Leukocyte Antigen (HLA) gene associations with multiple sclerosis (MS) in the South Asian population have been underpowered. Aim: To identify the primary HLA class II alleles associated with MS in Indians. Methods: We typed HLA-DRB1, -DQA1 and -DQB1 in 419 patients and 451 unrelated controls by polymerase chain reaction using sequence specific oligonucleotide probes (PCR-SSOP). Results: At the gene level DRB1 showed significant evidence of association (p=0.0000012), DQA1 showed only marginal evidence of association (p=0.04) and there was no evidence for association at DQB1 (p=0.26). At the DRB1 locus association is confirmed with the *15:01 (p=0.00002) and the *03 (p=0.00005) alleles. Conclusion: Our study confirms that the risk effects attributable to the HLA-DRB1*15:01 and DRB1*03 alleles seen in Europeans are also seen in Indians. The absence of any evidence of association with DQB1 alleles reflects the lower linkage disequilibrium between DQB1 alleles and DRB1 risk alleles present in this population, and illustrates the potential value of fine mapping signals of association in different ethnic groups. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
11. The choice of test in phase II cancer trials assessing continuous tumour shrinkage when complete responses are expected.
- Author
- Wason, James M. S. and Mander, Adrian P.
- Subjects
- *CANCER treatment, *CLINICAL trials, *TUMOR grading, *T-test (Statistics), *NONPARAMETRIC estimation, *ANTINEOPLASTIC agents, *BIOLOGICAL assay, *NONPARAMETRIC statistics, *RESEARCH funding, *STATISTICS, *TUMORS, *DATA analysis, *TREATMENT effectiveness, *STATISTICAL models
- Abstract
Traditionally, phase II cancer trials test a binary endpoint formed from a dichotomisation of the continuous change in tumour size. Directly testing the continuous endpoint provides considerable gains in power, although also results in several statistical issues. One such issue is when complete responses, i.e. complete tumour removal, are observed in multiple patients; this is a problem when normality is assumed. Using simulated data and a recently published phase II trial, we investigate how the choice of test affects the operating characteristics of the trial. We propose using parametric tests based on the censored normal distribution, comparing them to the t-test and Wilcoxon non-parametric test. The censored normal distribution fits the real dataset well, but simulations indicate its type-I error rate is inflated, and its power is only slightly higher than the t-test. The Wilcoxon test has deflated type I error. For two-arm designs, the differences are much smaller. We conclude that the t-test is suitable for use when complete responses are present, although positively skewed data can result in the non-parametric test having higher power. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
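The censored normal model mentioned in the abstract above treats complete responses as left-censored observations at a 100% reduction in tumour size. The Python sketch below fits such a model by maximum likelihood to simulated data with assumed parameter values; it is an illustration of the likelihood, not the paper's simulation study or its trial data, and the t-test shown alongside it is the comparator discussed in the abstract.

```python
# Minimal sketch (assumed data-generating values, not the paper's analysis):
# maximum likelihood for a left-censored ("censored normal") model of % change
# in tumour size, where complete responses are censored at -100%.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# assumed latent change in tumour size; values below -1.0 are complete responses
latent = rng.normal(-0.6, 0.5, 60)
change = np.maximum(latent, -1.0)
censored = latent <= -1.0

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll = stats.norm.logpdf(change[~censored], mu, sigma).sum()
    ll += censored.sum() * stats.norm.logcdf((-1.0 - mu) / sigma)
    return -ll

fit = minimize(neg_log_lik, x0=[change.mean(), np.log(change.std())])
mu_hat = fit.x[0]

print(f"complete responses: {censored.sum()} / {len(change)}")
print(f"naive mean (treats -100% as exact): {change.mean():.3f}")
print(f"censored-normal MLE of the mean:    {mu_hat:.3f}")
print(f"t-test of H0: mu = -0.3  ->  p = {stats.ttest_1samp(change, -0.3).pvalue:.4f}")
```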