7 results for "Historical data"
Search Results
2. Quality Tolerance Limits: A General Guidance for Parameter Selection and Threshold Setting.
- Author
- Keller, Annett, van Borrendam, Nathalie, Benner, Patrice, Gilbert, Steven, Saino, Stefano, Jendrasek, Debra, Young, Steve, Muli, Marcus, Wang, Jim, Kozińska, Marta, and Liu, Jun
- Subjects
- MEDICAL protocols, MEDICAL quality control, PHARMACEUTICAL technology, SIMULATION methods in education, MEDICAL research
- Abstract
Recent years have sharpened the industry's understanding of the Quality by Design (QbD) approach to clinical trials. QbD encourages designing quality into a trial during the planning phase. The identification of Critical to Quality (CtQ) factors, and specifically Critical Data and Processes (CD&Ps), is key to such a risk-based monitoring approach. A variable that allows monitoring of the evolution of risk to the CD&Ps is called a Quality Tolerance Limit (QTL) parameter. These parameters are linked to the scientific question(s) of a trial and may identify issues that can jeopardize the integrity of trial endpoints. This paper focuses on defining what QTL parameters are and on providing general guidance for setting thresholds for these parameters, allowing an acceptable range of risk to be derived. [ABSTRACT FROM AUTHOR]
- Published
- 2024
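The record above is about selecting QTL parameters and setting thresholds for them. As a purely illustrative aid, not taken from the paper, the sketch below shows how one hypothetical QTL parameter, the proportion of participants missing the primary endpoint assessment, could be checked against a secondary (warning) limit and the QTL itself; the function name, parameter, and threshold values are assumptions.

```python
# Minimal sketch (not from the paper): monitoring one hypothetical QTL
# parameter -- the proportion of randomized participants missing the
# primary endpoint assessment -- against a secondary limit and a QTL.

def qtl_status(n_events: int, n_at_risk: int,
               secondary_limit: float, qtl: float) -> str:
    """Classify the current observed rate of a QTL parameter.

    n_events        : participants with the quality issue so far
    n_at_risk       : participants evaluable for the parameter so far
    secondary_limit : early-warning threshold set below the QTL
    qtl             : quality tolerance limit for the trial
    """
    if n_at_risk == 0:
        return "no data yet"
    rate = n_events / n_at_risk
    if rate >= qtl:
        return f"QTL excursion ({rate:.1%} >= {qtl:.0%}) -- investigate and document"
    if rate >= secondary_limit:
        return f"approaching QTL ({rate:.1%} >= {secondary_limit:.0%}) -- escalate review"
    return f"within tolerance ({rate:.1%})"


if __name__ == "__main__":
    # Illustrative numbers only: 14 of 220 participants missing the
    # primary endpoint, with a 5% secondary limit and a 10% QTL.
    print(qtl_status(n_events=14, n_at_risk=220,
                     secondary_limit=0.05, qtl=0.10))
```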
3. Real-World Data as External Controls: Practical Experience from Notable Marketing Applications of New Therapies.
- Author
- Izem, Rima, Buenconsejo, Joan, Davi, Ruthanna, Luan, Jingyu Julia, Tracy, LaRee, and Gamalo, Margaret
- Subjects
- DRUG approval, CLINICAL trials, PROFESSIONAL licenses, INVESTIGATIONAL drugs, MARKETING, LABELS, DRUG side effects, CANCER patient medical care
- Abstract
Introduction: Real-world data (RWD) can contextualize findings from single-arm trials when randomized comparative trials are unethical or infeasible. Findings from single-arm trials alone are difficult to interpret, and a comparison to patient-level information from RWD, when feasible and meaningful, facilitates the evaluation. As such, there have been several recent regulatory applications that include RWD or other external data to support the product's efficacy and safety. This paper summarizes some lessons learned from such contextualization from 20 notable new drug or biologic licensing applications in oncology and rare diseases. Methods: This review focuses on 20 notable new drug or biologic licensing applications that included patient-level RWD or other external data for contextualization of trial results. Publicly available regulatory documents, including clinical and statistical reviews, advisory committee briefing materials and minutes, and approved product labeling, were retrieved for each application. The authors conducted independent assessments of these documents, focusing on the regulatory evaluation in each case. Three examples are presented in detail to illustrate the salient issues and themes identified across applications. Results: Regulatory decisions were strongly influenced by the quality and usability of the RWD. Comparability of cohort attributes such as endpoints, populations, follow-up, and index and censoring criteria, as well as data completeness and accuracy of key variables, appeared to be essential to ensuring the quality and relevance of the RWD. Given an adequate sample size in the clinical trial or external control, the use of appropriate analytic methods to account for confounding, such as regression or matching, and pre-specification of these methods while blinded to patient outcomes appeared to be good strategies for addressing baseline differences. Discussion: Contextualizing single-arm trials with patient-level RWD appears to be an advance in regulatory science; however, challenges remain. Statisticians and epidemiologists have long focused on analytical methods for comparative effectiveness, but hurdles in the use of RWD have often occurred upstream of the analyses. More specifically, we noted hurdles in evaluating data quality, justifying cohort selection or the initiation of follow-up, and demonstrating comparability of cohorts and endpoints. [ABSTRACT FROM AUTHOR]
- Published
- 2022
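The abstract above notes that regression or matching, pre-specified while blinded to outcomes, can address baseline differences between a single-arm trial and a real-world external control. The following sketch, assuming numpy and scikit-learn and using simulated covariates rather than any of the reviewed applications, illustrates one such approach: 1:1 nearest-neighbour propensity-score matching with a caliper on the logit of the propensity score.

```python
# Sketch only: matching a single-arm trial cohort to a real-world external
# control on baseline covariates. All data are simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated baseline covariates: age and a prognostic biomarker.
n_trial, n_rwd = 120, 600
x_trial = np.column_stack([rng.normal(62, 8, n_trial), rng.normal(1.2, 0.4, n_trial)])
x_rwd = np.column_stack([rng.normal(66, 9, n_rwd), rng.normal(1.0, 0.5, n_rwd)])

X = np.vstack([x_trial, x_rwd])
z = np.concatenate([np.ones(n_trial), np.zeros(n_rwd)])  # 1 = trial, 0 = RWD

# Propensity score: probability of being in the trial given covariates.
ps = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]
logit = np.log(ps / (1 - ps))
caliper = 0.2 * logit.std()  # a common rule of thumb for the matching caliper

trial_idx = np.where(z == 1)[0]
rwd_idx = np.where(z == 0)[0]
available = set(rwd_idx)
pairs = []
for i in trial_idx:
    cands = np.array(sorted(available))
    if cands.size == 0:
        break
    d = np.abs(logit[cands] - logit[i])
    j = cands[np.argmin(d)]
    if np.abs(logit[j] - logit[i]) <= caliper:
        pairs.append((i, j))   # matched pair: trial patient, RWD control
        available.remove(j)

print(f"matched {len(pairs)} of {n_trial} trial patients within the caliper")
```

Outcome comparisons would then be restricted to the matched pairs; the point of pre-specifying the model and caliper before seeing outcomes is to keep the comparison free of outcome-driven choices.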
4. Historical Benchmarks for Quality Tolerance Limits Parameters in Clinical Trials.
- Author
- Makowski, Marcin, Bhagat, Ruma, Chevalier, Soazig, Gilbert, Steven A., Görtz, Dagmar R., Kozińska, Marta, Nadolny, Patrick, Suprin, Melissa, and Turri, Sabine
- Subjects
- CLINICAL trials, ALZHEIMER'S disease, GOVERNMENT regulation, CONCEPTUAL structures, BENCHMARKING (Management), QUALITY assurance, ACCESS to information, PHARMACEUTICAL industry, MANAGEMENT
- Abstract
Background: In 2016, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use updated its efficacy guideline for good clinical practice and introduced quality tolerance limits (QTLs) as a quality control in clinical trials. Previously, TransCelerate proposed a framework for QTL implementation and parameters. Historical data can be important in helping to determine QTL thresholds in new clinical trials. Methods: This article presents results of historical data analyses for the previously proposed parameters based on data from 294 clinical trials from seven TransCelerate member companies. The differences across therapeutic areas were assessed by comparing Alzheimer's disease (AD) and oncology trials using a separate dataset provided by Medidata. Results: TransCelerate member companies provided historical data on 11 QTL parameters with data sufficient for analysis for parameters. The distribution of values was similar for most parameters with a relatively small number of outlying trials with high parameter values. Medidata provided values for three parameters in a total of 45 AD and oncology trials with no obvious differences between the therapeutic areas. Conclusion: Historical parameter values can provide helpful benchmark information for quality control activities in future trials. [ABSTRACT FROM AUTHOR]
- Published
- 2021
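The record above argues that historical parameter values can benchmark QTL thresholds for future trials. Below is a minimal sketch, with invented numbers rather than the paper's 294-trial dataset, of how per-trial values of a hypothetical QTL parameter might be summarized into percentiles that inform a secondary limit and a QTL.

```python
# Sketch only (not the paper's analysis): summarizing a hypothetical QTL
# parameter -- e.g. the rate of significant protocol deviations per trial --
# across historical trials to inform thresholds for a new trial.
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in for historical per-trial parameter values: most
# trials cluster at low rates, with a few outlying trials at high values,
# much as the abstract describes for its distributions.
historical_rates = np.concatenate([
    rng.beta(2, 40, size=55),   # bulk of trials
    rng.beta(6, 20, size=5),    # a few outlying trials
])

for q in (50, 75, 90):
    print(f"{q}th percentile: {np.percentile(historical_rates, q):.1%}")

# One possible (hypothetical) rule: place the secondary limit near the
# historical 75th percentile and the QTL near the 90th percentile.
```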
5. Optimal Decision Criteria for the Study Design and Sample Size of a Biomarker-Driven Phase III Trial.
- Author
- Takazawa, Akira and Morita, Satoshi
- Subjects
- BIOMARKERS, CLINICAL trials, EXPERIMENTAL design, PROBABILITY theory, SAMPLE size (Statistics), PREDICTION models
- Abstract
Background: The design and sample size of a phase III study for new medical technologies have historically been determined within the framework of frequentist hypothesis testing. Recently, drug development using predictive biomarkers, which can predict efficacy based on biomarker status, has attracted attention, and various study designs using predictive biomarkers have been suggested. Additionally, when choosing a study design, it is important to consider economic factors such as the risk of development, expected revenue, and cost. Methods: Here, we propose a method that uses the expected net present value (eNPV) to determine the optimal phase III design and sample size and to judge whether the phase III study should be conducted. The eNPV is defined using the probability of success of the study, calculated from historical data, the revenue that will be obtained after the success of the phase III study, and the cost of the study. Decision procedures for the optimal phase III design and sample size, incorporating historical data available up to the start of the phase III study, were examined using numerical examples. Results: Based on the numerical examples, the optimal study design and sample size depend on the mean treatment effect in the biomarker-positive and biomarker-negative populations obtained from historical data, the between-trial variance of response, the prevalence of the biomarker-positive population, and the threshold probability of success required to proceed to the phase III study. Conclusions: Thus, the design and sample size of a biomarker-driven phase III study can be appropriately determined based on the eNPV. [ABSTRACT FROM AUTHOR]
- Published
- 2020
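The abstract above defines the eNPV from the probability of success (informed by historical data), the post-success revenue, and the trial cost, and selects the design and sample size that maximize it. A simplified sketch of that idea for a single two-arm design is shown below; the normal prior on the treatment effect, the revenue and cost figures, and the go threshold are invented for illustration and do not reproduce the authors' model.

```python
# Sketch only: choosing a phase III per-arm sample size by expected net
# present value (eNPV). All numerical inputs are hypothetical.
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def assurance(n_per_arm: int, mu0: float, tau0: float, sigma: float,
              z_crit: float = 1.96) -> float:
    """Probability of a significant two-arm result, averaging the power
    over a N(mu0, tau0^2) prior on the true treatment effect."""
    s = sigma * sqrt(2.0 / n_per_arm)            # SE of the effect estimate
    return phi((mu0 - z_crit * s) / sqrt(tau0 ** 2 + s ** 2))

def enpv(n_per_arm: int, mu0: float, tau0: float, sigma: float,
         revenue: float, cost_fixed: float, cost_per_patient: float) -> float:
    pos = assurance(n_per_arm, mu0, tau0, sigma)
    return pos * revenue - (cost_fixed + cost_per_patient * 2 * n_per_arm)

if __name__ == "__main__":
    # Hypothetical inputs: effect prior from historical data, in units of
    # the outcome; monetary figures in millions.
    prior_mean, prior_sd, outcome_sd = 0.35, 0.15, 1.0
    revenue, cost_fixed, cost_per_patient = 800.0, 40.0, 0.05
    go_threshold = 0.60                          # minimum PoS to start phase III

    best = max(range(100, 1001, 50),
               key=lambda n: enpv(n, prior_mean, prior_sd, outcome_sd,
                                  revenue, cost_fixed, cost_per_patient))
    pos = assurance(best, prior_mean, prior_sd, outcome_sd)
    value = enpv(best, prior_mean, prior_sd, outcome_sd,
                 revenue, cost_fixed, cost_per_patient)
    decision = "go" if (pos >= go_threshold and value > 0) else "no-go"
    print(f"n per arm = {best}, PoS = {pos:.2f}, eNPV = {value:.0f}M -> {decision}")
```

Extending the same comparison to an enrichment or biomarker-stratified design would follow the paper's logic: compute the eNPV of each candidate design and pick the largest, subject to the probability-of-success threshold.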
6. Historical Benchmarks for Quality Tolerance Limits Parameters in Clinical Trials.
- Author
- Melissa Suprin, Patrick Nadolny, Steven A. Gilbert, Dagmar R. Görtz, Marcin Makowski, Ruma Bhagat, Marta Kozińska, Sabine Turri, and Soazig Chevalier
- Subjects
- Historical data, QTL, Computer science, Quality tolerance limits, Public Health, Environmental and Occupational Health, Guideline, Clinical trial, Benchmarking, Clinical trials, Human use, Good clinical practice, Statistics, Benchmark (computing), Humans, Thresholds, Pharmacology (medical), Quality (business), Pharmacology, Toxicology and Pharmaceutics (miscellaneous), Original Research
- Abstract
Background: In 2016, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use updated its efficacy guideline for good clinical practice and introduced quality tolerance limits (QTLs) as a quality control in clinical trials. Previously, TransCelerate proposed a framework for QTL implementation and parameters. Historical data can be important in helping to determine QTL thresholds in new clinical trials. Methods: This article presents results of historical data analyses for the previously proposed parameters based on data from 294 clinical trials from seven TransCelerate member companies. The differences across therapeutic areas were assessed by comparing Alzheimer's disease (AD) and oncology trials using a separate dataset provided by Medidata. Results: TransCelerate member companies provided historical data on 11 QTL parameters with data sufficient for analysis for parameters. The distribution of values was similar for most parameters with a relatively small number of outlying trials with high parameter values. Medidata provided values for three parameters in a total of 45 AD and oncology trials with no obvious differences between the therapeutic areas. Conclusion: Historical parameter values can provide helpful benchmark information for quality control activities in future trials.
- Published
- 2021
7. Incorporating Historical Data in Bayesian Phase I Trial Design: The Caucasian-to-Asian Toxicity Tolerability Problem.
- Author
- Takeda, Kentaro and Morita, Satoshi
- Subjects
- ASIANS, CANCER patient medical care, CLINICAL trials, DRUG toxicity, WHITE people, SAMPLE size (Statistics), DATA analysis
- Abstract
The article discusses a study that investigates how data from Caucasian phase I dose-finding oncology trials can be incorporated when determining the maximum tolerated dose (MTD) in Asian patients. Topics include concerns about possible differences in treatment tolerability between Caucasian and Asian patients, the use of the continual reassessment method with effective sample size (ESS), and the outline of a Bayesian design for an Asian phase I trial that borrows data from a Caucasian phase I trial.
- Published
- 2015
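The record above concerns borrowing Caucasian phase I data, calibrated via an effective sample size, when running a CRM-based dose-finding trial in Asian patients. The sketch below is one plausible rendering of that idea, not the authors' model: a one-parameter power-model CRM whose prior is augmented with Caucasian data discounted by a power-prior weight standing in for ESS divided by the historical sample size; the skeleton, weight, and data are all invented.

```python
# Sketch only: CRM dose recommendation for an Asian phase I trial with a
# discounted Caucasian historical likelihood. All inputs are hypothetical.
import numpy as np

skeleton = np.array([0.05, 0.12, 0.25, 0.40])   # prior DLT-rate guesses per dose
target = 0.25                                   # target DLT probability
grid = np.linspace(-3.0, 3.0, 601)              # grid over the model parameter a
prior = np.exp(-0.5 * (grid / 1.34) ** 2)       # N(0, 1.34^2) prior, unnormalized

def log_lik(a: np.ndarray, data: list[tuple[int, int, int]]) -> np.ndarray:
    """Binomial log-likelihood of (dose_index, n_treated, n_DLT) triples
    under the power model p_d = skeleton[d] ** exp(a)."""
    ll = np.zeros_like(a)
    for d, n, tox in data:
        p = skeleton[d] ** np.exp(a)
        ll += tox * np.log(p) + (n - tox) * np.log(1.0 - p)
    return ll

# Historical Caucasian data, discounted by a power-prior weight so that it
# contributes roughly the desired prior ESS rather than its full sample size.
caucasian = [(0, 3, 0), (1, 6, 1), (2, 6, 2), (3, 3, 2)]
weight = 0.33                                   # stand-in for ESS / N_caucasian

# Asian trial data observed so far.
asian = [(0, 3, 0), (1, 3, 1)]

log_post = np.log(prior) + weight * log_lik(grid, caucasian) + log_lik(grid, asian)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior mean DLT probability at each dose, and the recommended next dose.
p_post = np.array([(skeleton[d] ** np.exp(grid) * post).sum()
                   for d in range(len(skeleton))])
recommended = int(np.argmin(np.abs(p_post - target)))
print("posterior DLT estimates:", np.round(p_post, 3), "-> next dose index:", recommended)
```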