Search Results (343 results)
2. Predicting translational progress in biomedical research.
- Author
-
Hutchins, B. Ian, Davis, Matthew T., Meseroll, Rebecca A., and Santangelo, George M.
- Subjects
MEDICAL research, SCIENTIFIC community, SCIENTIFIC discoveries, MACHINE learning, CLINICAL trials, FALSE discovery rate, THERAPEUTICS
- Abstract
Fundamental scientific advances can take decades to translate into improvements in human health. Shortening this interval would increase the rate at which scientific discoveries lead to successful treatment of human disease. One way to accomplish this would be to identify which advances in knowledge are most likely to translate into clinical research. Toward that end, we built a machine learning system that detects whether a paper is likely to be cited by a future clinical trial or guideline. Despite the noisiness of citation dynamics, as little as 2 years of postpublication data yield accurate predictions about a paper's eventual citation by a clinical article (accuracy = 84%, F1 score = 0.56; compared to 19% accuracy by chance). We found that distinct knowledge flow trajectories are linked to papers that either succeed or fail to influence clinical research. Translational progress in biomedicine can therefore be assessed and predicted in real time based on information conveyed by the scientific community's early reaction to a paper. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
3. Beyond the RCT: When are Randomized Trials Unnecessary for New Therapeutic Devices, and What Should We Do Instead?
- Author
-
Wendy A Rogers, Peter McCulloch, Baptiste Vasey, Katrina Hutchison, Maroeska M. Rovers, and Arsenio Páez
- Subjects
safety, Clinical Decision-Making, MEDLINE, medical devices, law.invention, Randomized controlled trial, law, SAFER, Humans, Medicine, Meaning (existential), Set (psychology), Review Papers, Randomized Controlled Trials as Topic, clinical trials, evaluation, evidence, business.industry, regulation, Equipment and Supplies, Action (philosophy), Risk analysis (engineering), Urological cancers Radboud Institute for Health Sciences [Radboudumc 15], CLARITY, Surgery, Observational study, business, RCT, Algorithms
- Abstract
OBJECTIVE: The aim of this study was to develop an evidence-based framework for evaluation of therapeutic devices, based on ethical principles and clinical evidence considerations. SUMMARY BACKGROUND DATA: Nearly all medical products which do not work solely through chemical action are regulated as medical devices. Their huge range of purposes, mechanisms of action and risks pose challenges for regulation. High-profile implantable device failures have fuelled concerns about the level of clinical evidence needed for market approval. Calls for more rigorous evaluation lack clarity about what kind of evaluation is appropriate, and are commonly interpreted as meaning more randomized controlled trials (RCTs). These are valuable where devices are genuinely new and claim to offer measurable therapeutic benefits. Where this is not the case, RCTs may be inappropriate and wasteful. METHODS: Starting with a set of ethical principles and basic precepts of clinical epidemiology, we developed a sequential decision-making algorithm for identifying when an RCT should be performed to evaluate new therapeutic devices, and when other methods, such as observational study designs and registry-based approaches, are acceptable. RESULTS: The algorithm clearly defines a group of devices where an RCT is deemed necessary, and the associated framework indicates that an IDEAL 2b study should be the default clinical evaluation method where it is not. CONCLUSIONS: The algorithm and recommendations are based on the principles of the IDEAL-D framework for medical device evaluation and appear eminently practicable. Their use would create a safer system for monitoring innovation, and facilitate more rapid detection of potential hazards to patients and the public.
- Published
- 2022
4. Utilizing ChatGPT in clinical research related to anesthesiology: a comprehensive review of opportunities and limitations.
- Author
-
Sang-Wook Lee and Woo-Jong Choi
- Subjects
CHATGPT, CLINICAL trials, ANESTHESIOLOGY, ARTIFICIAL neural networks, ALGORITHMS
- Abstract
Chat generative pre-trained transformer (ChatGPT) is a chatbot developed by OpenAI that answers questions in a human-like manner. ChatGPT is a GPT language model that understands and responds to natural language, created using the transformer, a new artificial neural network architecture first introduced by Google in 2017. ChatGPT can be used to identify research topics and to proofread English writing and R scripts, improving work efficiency and optimizing time. Attempts to actively utilize generative artificial intelligence (AI) are expected to continue in clinical settings. However, ChatGPT still has many limitations for widespread use in clinical research, owing to AI hallucinations and its training data constraints. Many traditional journals currently recommend against scientific writing with ChatGPT because of the lack of originality guidelines and the risk of plagiarism in ChatGPT-generated content. Further regulations and discussions on these topics are expected in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
5. Continuous Digital Monitoring of Walking Speed in Frail Elderly Patients: Noninterventional Validation Study and Longitudinal Clinical Trial
- Author
-
Ola Bunte, Ronenn Roubenoff, Julian Fürmetz, Daniel Rooks, Wolfgang Böcker, Jens Praestgaard, Matthias Schieker, Arne Mueller, Lorcan Walsh, Roland M. Huber, Alexander M. Keppler, Amir Muaremi, Ieuan Clay, Sophie Brachat, and Holger Hoefling
- Subjects
Male, medicine.medical_specialty, data collection, Ecological validity, Poison control, Health Informatics, Context (language use), open source data, Information technology, Fitness Trackers, walking speed, Validation Studies as Topic, gait, algorithms, 03 medical and health sciences, mobility limitation, 0302 clinical medicine, Physical medicine and rehabilitation, Gait (human), medicine, accelerometry, dataset, Humans, 030212 general & internal medicine, Longitudinal Studies, wearable electronic devices, Aged, Monitoring, Physiologic, Aged, 80 and over, Original Paper, clinical trials, Data collection, Frailty, business.industry, Middle Aged, T58.5-58.64, Preferred walking speed, Clinical trial, Standard error, Cross-Sectional Studies, Female, Public aspects of medicine, RA1-1270, business, human activities, 030217 neurology & neurosurgery
- Abstract
Background: Digital technologies and advanced analytics have drastically improved our ability to capture and interpret health-relevant data from patients. However, only limited data and results have been published that demonstrate accuracy in target indications, real-world feasibility, or the validity and value of these novel approaches. Objective: This study aimed to establish accuracy, feasibility, and validity of continuous digital monitoring of walking speed in frail, elderly patients with sarcopenia and to create an open source repository of raw, derived, and reference data as a resource for the community. Methods: Data described here were collected as a part of 2 clinical studies: an independent, noninterventional validation study and a phase 2b interventional clinical trial in older adults with sarcopenia. In both studies, participants were monitored by using a waist-worn inertial sensor. The cross-sectional, independent validation study collected data at a single site from 26 naturally slow-walking elderly subjects during a parcours course through the clinic, designed to simulate a real-world environment. In the phase 2b interventional clinical trial, 217 patients with sarcopenia were recruited across 32 sites globally, where patients were monitored over 25 weeks, both during and between visits. Results: We have demonstrated that our approach can capture in-clinic gait speed in frail slow-walking adults with a residual standard error of 0.08 m per second in the independent validation study and 0.08, 0.09, and 0.07 m per second for the 4 m walk test (4mWT), 6-min walk test (6MWT), and 400 m walk test (400mWT) standard gait speed assessments, respectively, in the interventional clinical trial. We demonstrated the feasibility of our approach by capturing 9668 patient-days of real-world data from 192 patients and 32 sites, as part of the interventional clinical trial. We derived inferred contextual information describing the length of a given walking bout and uncovered positive associations between the short 4mWT gait speed assessment and gait speed in bouts between 5 and 20 steps (correlation of 0.23) and longer 6MWT and 400mWT assessments with bouts of 80 to 640 steps (correlations of 0.48 and 0.59, respectively). Conclusions: This study showed, for the first time, accurate capture of real-world gait speed in slow-walking older adults with sarcopenia. We demonstrated the feasibility of long-term digital monitoring of mobility in geriatric populations, establishing that sufficient data can be collected to allow robust monitoring of gait behaviors outside the clinic, even in the absence of feedback or incentives. Using inferred context, we demonstrated the ecological validity of in-clinic gait assessments, describing positive associations between in-clinic performance and real-world walking behavior. We make all data available as an open source resource for the community, providing a basis for further study of the relationship between standardized physical performance assessment and real-world behavior and independence.
- Published
- 2019
6. Rationale, design, and protocol for a hybrid type 1 effectiveness-implementation trial of a proactive smoking cessation electronic visit for scalable delivery via primary care: the E-STOP trial.
- Author
-
Fahey, Margaret C., Wahlquist, Amy E., Diaz, Vanessa A., Player, Marty S., Natale, Noelle, Sterba, Katherine R., Chen, Brian K., Hermes, Eric D. A., Carpenter, Mathew J., and Dahne, Jennifer
- Subjects
EVALUATION of human services programs, EXPERIMENTAL design, BIOCHEMISTRY, SMOKING cessation, CLINICAL trials, COUNSELING, CARBON monoxide, MOTIVATION (Psychology), PHENOMENOLOGICAL biology, PSYCHOLOGY, DISEASES, MEDICAL protocols, PRIMARY health care, TREATMENT effectiveness, HARM reduction, CONCEPTUAL models, SMOKING, ELECTRONIC health records, MEDICAL appointments, VARENICLINE, ALGORITHMS, TOBACCO
- Abstract
Background: Cigarette smoking remains the leading cause of preventable disease and death in the United States. Primary care offers an ideal setting to reach adults who smoke cigarettes and improve uptake of evidence-based cessation treatment. Although U.S. Preventive Services Task Force Guidelines recommend the 5As model (Ask, Advise, Assess, Assist, Arrange) in primary care, there are many barriers to its implementation. Automated, comprehensive, and proactive tools are needed to overcome barriers. Our team developed and preliminarily evaluated a proactive electronic visit (e-visit) delivered via the Electronic Health Record patient portal to facilitate evidence-based smoking cessation treatment uptake in primary care, with promising initial feasibility and efficacy. This paper describes the rationale, design, and protocol for an ongoing Hybrid Type I effectiveness-implementation trial that will simultaneously assess effectiveness of the e-visit intervention for smoking cessation as well as implementation potential across diverse primary care settings. Methods: The primary aim of this remote five-year study is to examine the effectiveness of the e-visit intervention vs. treatment as usual (TAU) for smoking cessation via a clinic-randomized clinical trial. Adults who smoke cigarettes are recruited across 18 primary care clinics. Clinics are stratified based on their number of primary care providers and randomized 2:1 to either e-visit or TAU. An initial baseline e-visit gathers information about patient smoking history and motivation to quit, and a clinical decision support algorithm determines the best evidence-based cessation treatment to prescribe. E-visit recommendations are evaluated by a patient's own provider, and a one-month follow-up e-visit assesses cessation progress. Main outcomes include: (1) cessation treatment utilization (medication, psychosocial cessation counseling), (2) reduction in cigarettes per day, and (3) biochemically verified 7-day point prevalence abstinence (PPA) at six-months. We hypothesize that patients randomized to the e-visit condition will have better cessation outcomes (vs. TAU). A secondary aim evaluates e-visit implementation potential at patient, provider, and organizational levels using a mixed-methods approach. Implementation outcomes include acceptability, adoption, fidelity, implementation cost, penetration, and sustainability. Discussion: This asynchronous, proactive e-visit intervention could provide substantial benefits for patients, providers, and primary care practices and has potential to widely improve reach of evidence-based cessation treatment. Trial registration: NCT05493254. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
7. Rationale for the update algorithm of the graphical approach to sequentially rejective multiple test procedures.
- Author
-
Maurer, Willi, Bretz, Frank, and Posch, Martin
- Subjects
EARLY death, ERROR rates, ALGORITHMS, TECHNICAL reports, MARKOV processes
- Abstract
The graphical approach by Bretz et al. is a convenient tool to construct, visualize and perform multiple test procedures that are tailored to structured families of hypotheses while controlling the familywise error rate. A critical step is to update the transition weights following a pre‐specified algorithm. In their original publication, however, the authors did not provide a detailed rationale for the update formula. This paper closes the gap and provides three alternative arguments for the update of the transition weights of the graphical approach. It is a legacy of the first author, based on an unpublished technical report from 2014, and after his untimely death reconstructed by the other two authors as a tribute to Willi Maurer's collaboration with Andy Grieve and contributions to biostatistics over many years. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
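The weight-update rule that this entry revisits is the one published in Bretz et al. (2009); the following Python sketch implements that rule for illustration (the array names and the Holm example are ours, not the paper's):

```python
import numpy as np

def reject_and_update(w, G, j):
    """Remove rejected hypothesis H_j from a graphical test procedure.

    w : local weights, shape (m,); G : transition weight matrix, shape (m, m).
    Applies the update rule of Bretz et al. (2009):
      w_i <- w_i + w_j * g_ji
      g_il <- (g_il + g_ij * g_jl) / (1 - g_ij * g_ji)   (i, l != j, i != l)
    with g_il set to 0 when g_ij * g_ji = 1.
    """
    w, G = w.copy(), G.copy()
    m = len(w)
    w_new = w + w[j] * G[j]          # pass the weight of H_j along its edges
    w_new[j] = 0.0
    G_new = np.zeros_like(G)
    for i in range(m):
        for l in range(m):
            if i == j or l == j or i == l:
                continue
            denom = 1.0 - G[i, j] * G[j, i]
            if denom > 0:
                G_new[i, l] = (G[i, l] + G[i, j] * G[j, l]) / denom
    return w_new, G_new

# Example: Bonferroni-Holm on two hypotheses, written as a graph.
w = np.array([0.5, 0.5])
G = np.array([[0.0, 1.0], [1.0, 0.0]])
print(reject_and_update(w, G, 0))    # rejecting H_1 passes its full weight to H_2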
8. Optimal Personalised Treatment Computation through In Silico Clinical Trials on Patient Digital Twins*.
- Author
-
Sinisi, Stefano, Alimguzhin, Vadim, Mancini, Toni, Tronci, Enrico, Mari, Federico, Leeners, Brigitte, Eiter, Thomas, Maratea, Marco, and Vallati, Mauro
- Subjects
CLINICAL trials, INDIVIDUALIZED medicine, MEDICAL protocols, EXPERIMENTAL medicine, ALGORITHMS, HUMAN reproductive technology, ANIMAL experimentation
- Abstract
In Silico Clinical Trials (ISCT), i.e. clinical experimental campaigns carried out by means of computer simulations, hold the promise to decrease time and cost for the safety and efficacy assessment of pharmacological treatments, reduce the need for animal and human testing, and enable precision medicine. In this paper we present methods and an algorithm that, by means of extensive computer simulation-based experimental campaigns (ISCT) guided by intelligent search, optimise a pharmacological treatment for an individual patient (precision medicine). We show the effectiveness of our approach on a case study involving a real pharmacological treatment, namely the downregulation phase of a complex clinical protocol for assisted reproduction in humans. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
9. A smoothed Q‐learning algorithm for estimating optimal dynamic treatment regimes.
- Author
-
Fan, Yanqin, He, Ming, Su, Liangjun, and Zhou, Xiao‐Hua
- Subjects
REINFORCEMENT learning, ALGORITHMS, ASYMPTOTIC normality, CLINICAL trials
- Abstract
In this paper, we propose a smoothed Q‐learning algorithm for estimating optimal dynamic treatment regimes. In contrast to the Q‐learning algorithm in which nonregular inference is involved, we show that, under assumptions adopted in this paper, the proposed smoothed Q‐learning estimator is asymptotically normally distributed even when the Q‐learning estimator is not and its asymptotic variance can be consistently estimated. As a result, inference based on the smoothed Q‐learning estimator is standard. We derive the optimal smoothing parameter and propose a data‐driven method for estimating it. The finite sample properties of the smoothed Q‐learning estimator are studied and compared with several existing estimators including the Q‐learning estimator via an extensive simulation study. We illustrate the new method by analyzing data from the Clinical Antipsychotic Trials of Intervention Effectiveness–Alzheimer's Disease (CATIE‐AD) study. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
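To make the smoothing idea concrete, here is a toy two-stage sketch: plain Q-learning forms the stage-1 pseudo-outcome with a hard max over stage-2 treatments, and the smoothed variant replaces the indicator inside that max with a normal-CDF kernel. The data-generating model, the bandwidth, and the least-squares fits are our own illustrative assumptions, not the authors' estimator:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500

# Toy two-stage data: X = baseline covariate, A1/A2 = binary treatments
X = rng.normal(size=n)
A1 = rng.integers(0, 2, n)
A2 = rng.integers(0, 2, n)
Y = 1 + X + A1 * (0.5 - X) + A2 * (0.3 + 0.5 * X) + rng.normal(size=n)

ols = lambda D, y: np.linalg.lstsq(D, y, rcond=None)[0]

# Stage 2: Q2(history, A2) = main part + A2 * (treatment contrast psi2)
D2 = np.column_stack([np.ones(n), X, A1, A2, A2 * X])
b2 = ols(D2, Y)
main2 = np.column_stack([np.ones(n), X, A1]) @ b2[:3]
psi2 = np.column_stack([np.ones(n), X]) @ b2[3:]

# Hard-max pseudo-outcome (plain Q-learning): main2 + psi2 * 1{psi2 > 0}.
# Smoothed variant: replace the indicator with a normal CDF kernel
# (the bandwidth choice below is an assumption).
h = n ** (-1 / 3)
Y_tilde = main2 + psi2 * norm.cdf(psi2 / h)

# Stage 1: regress the smoothed pseudo-outcome on stage-1 history
D1 = np.column_stack([np.ones(n), X, A1, A1 * X])
b1 = ols(D1, Y_tilde)
print("stage-1 treatment-contrast coefficients:", b1[2:])  # near (0.5, -1)
```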
10. Collaborative data mining for clinical trial analytics.
- Author
-
Janeja, Vandana P., Gholap, Jay, Walkikar, Prathamesh, Yesha, Yelena, Rishe, Naphtali, and Grasso, Michael A.
- Subjects
DATA mining, CLINICAL trials, DATA management, ALGORITHMS, OSTEOARTHRITIS
- Abstract
Clinical research and drug development trials generate large amounts of data. Due to the dispersed nature of clinical trial data across multiple sites and heterogeneous databases, it remains a challenge to harness these trial data for analytics to gain more understanding about the implementation of studies as well as disease processes. Moreover, the veracity of the results from analytics is difficult to establish in such datasets. We make a two-fold contribution in this paper: First, we provide a mechanism to extract task-relevant data using Master Data Management (MDM) from a clinical trial database with data spread over several domain datasets. Second, we provide a method for validating findings by collaborative utilization of multiple data mining techniques, namely: classification, clustering, and association rule mining. Overall, our approach aims at extracting useful knowledge from data collected during clinical trials to enable the development of faster and cheaper clinical trials that are more accurate and impactful. To demonstrate the efficacy of our proposed methods, we utilized the following datasets: (1) the National Institute on Drug Abuse (NIDA) data share repository and (2) data from the Osteoarthritis Initiative (OAI), where we found real-world implications in validating the findings using multiple data mining methods in a collaborative manner. Comparative results with existing state-of-the-art techniques show the usefulness and high accuracy of our methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
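A minimal illustration of the collaborative-validation idea in this entry: cross-check a supervised classifier against unsupervised clustering and see whether the two techniques carve the cohort the same way. A public dataset stands in for the trial data; the paper's MDM extraction layer and association-rule step are not reproduced here:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer          # stand-in for trial data
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import adjusted_rand_score
from sklearn.model_selection import cross_val_predict
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
Xs = StandardScaler().fit_transform(X)

# Technique 1: supervised classification (out-of-fold predictions)
clf_pred = cross_val_predict(RandomForestClassifier(random_state=0), Xs, y, cv=5)

# Technique 2: unsupervised clustering, ignoring the labels entirely
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xs)

# Corroboration: agreement between methods supports the finding's veracity
print("classifier vs labels   :", adjusted_rand_score(y, clf_pred))
print("clusters   vs labels   :", adjusted_rand_score(y, clusters))
print("clusters   vs classifier:", adjusted_rand_score(clusters, clf_pred))
```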
11. PIPELINE FOR CONTROL OF THE DYNAMICS OF LOCALIZED BRAIN PATHOLOGIES IN MAGNETIC RESONANCE IMAGES.
- Author
-
Lobantsev, Artyom, Shovkoplias, Grigorii, Tkachenko, Mark, Morokova, Ksenia, Soldatov, Roman, Zubanenko, Aleksey, and Shalyto, Anatoly
- Subjects
PIPELINES, MAGNETIC resonance imaging, LABOR costs, ALGORITHMS, CLINICAL trials
- Abstract
A reliable assessment of changes in the dynamics of brain pathologies is essential for accurate diagnostics, treatment and predicting the course of the disease. Magnetic resonance imaging (MRI) is the method of choice for it. In the paper, we explore the possibilities of semi-automatic control of the dynamics of localized brain pathologies in MRI. Using specific clinical examples, we investigated the sources of errors that accompany various methods for assessing the dynamics of the development of brain pathologies. We built a pipeline for semi-automatic control of the dynamics of these pathologies based on the Chan-Vese algorithm. The accuracy of estimating changes in the volume of pathological zones by the proposed pipeline is comparable with the results obtained under idealized conditions of laboratory experiments. The proposed pipeline provides a significant gain in processing time and radiologists' labor costs, is undemanding in computing resources and the availability of training datasets, and can be easily implemented in real clinical practice. [ABSTRACT FROM AUTHOR]
- Published
- 2020
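The pipeline in this entry is built around the Chan-Vese level-set algorithm, which is available off the shelf in scikit-image; a toy sketch of tracking lesion dynamics with it (the synthetic images and pixel size are assumptions, not the paper's data):

```python
import numpy as np
from skimage.segmentation import chan_vese

def synthetic_scan(radius):
    """Toy 'MRI slice': one bright circular lesion plus noise."""
    yy, xx = np.mgrid[:128, :128]
    img = 0.2 + 0.8 * (((yy - 64) ** 2 + (xx - 64) ** 2) < radius ** 2)
    return img + np.random.default_rng(0).normal(0, 0.05, img.shape)

t0, t1 = synthetic_scan(15), synthetic_scan(18)   # lesion grows between visits

seg0 = chan_vese(t0, mu=0.25)                     # level-set segmentation
seg1 = chan_vese(t1, mu=0.25)

pixel_area_mm2 = 1.0                              # in-plane resolution: an assumption
growth = (seg1.sum() - seg0.sum()) * pixel_area_mm2
print(f"estimated lesion growth: {growth:.0f} mm^2")
```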
12. Automatic classification of regular and irregular capnogram segments using time- and frequency-domain features: A machine learning-based approach.
- Author
-
El-Badawy, Ismail M., Singh, Om Prakash, and Omar, Zaid
- Subjects
CAPNOGRAPHY, MACHINE learning, RESPIRATORY diseases, FEATURE extraction, CLINICAL trials, MEDICAL artifacts, ALGORITHMS
- Abstract
Background: The quantitative features of a capnogram signal are important clinical metrics in assessing pulmonary function. However, these features should be quantified from the regular (artefact-free) segments of the capnogram waveform. Objective: This paper presents a machine learning-based approach for the automatic classification of regular and irregular capnogram segments. Methods: Herein, we proposed four time- and two frequency-domain features, evaluated with the support vector machine classifier through ten-fold cross-validation. MATLAB simulation was conducted on 100 regular and 100 irregular 15 s capnogram segments. Analysis of variance was performed to investigate the significance of the proposed features. Pearson's correlation was utilized to select the relatively most substantial ones, namely variance and the area under the normalized magnitude spectrum. Classification performance, using these features, was evaluated against two feature sets in which either time- or frequency-domain features only were employed. Results: Results showed a classification accuracy of 86.5%, which outperformed the other cases by an average of 5.5%. The achieved specificity, sensitivity, and precision were 84%, 89% and 86.51%, respectively. The average execution time for feature extraction and classification per segment is only 36 ms. Conclusion: The proposed approach can be integrated with capnography devices for real-time capnogram-based respiratory assessment. However, further research is recommended to enhance the classification performance. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
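A small sketch of the approach as described in this entry: compute the two selected features (time-domain variance and the area under the normalized magnitude spectrum) and run an SVM with ten-fold cross-validation. The synthetic segments and the 10 Hz sampling rate are illustrative assumptions:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 10  # sampling rate in Hz -- an assumption (15 s segments -> 150 samples)

def features(segment):
    """Two of the selected features: time-domain variance and the
    area under the normalized magnitude spectrum."""
    mag = np.abs(np.fft.rfft(segment - segment.mean()))
    mag /= (mag.max() + 1e-12)                    # normalize the spectrum
    return [np.var(segment), mag.sum() / mag.size]

# Toy segments: 'regular' ones are clean periodic waveforms, 'irregular'
# ones are noise-dominated -- purely illustrative stand-ins for capnograms.
rng = np.random.default_rng(1)
t = np.arange(15 * FS) / FS
regular = [np.clip(np.sin(2 * np.pi * 0.25 * t + rng.uniform(0, 6)), 0, 1)
           + rng.normal(0, 0.02, t.size) for _ in range(100)]
irregular = [rng.normal(0.5, 0.3, t.size) for _ in range(100)]

X = np.array([features(s) for s in regular + irregular])
y = np.array([0] * 100 + [1] * 100)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("10-fold CV accuracy:", cross_val_score(svm, X, y, cv=10).mean())
```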
13. A threshold linear mixed model for identification of treatment-sensitive subsets in a clinical trial based on longitudinal outcomes and a continuous covariate.
- Author
-
Ge, Xinyi, Peng, Yingwei, and Tu, Dongsheng
- Subjects
CLINICAL trials, MAXIMUM likelihood statistics, IDENTIFICATION, TREATMENT effectiveness, SMOOTHNESS of functions, ALGORITHMS
- Abstract
Identification of a subset of patients who may be sensitive to a specific treatment is an important problem in clinical trials. In this paper, we consider the case where the treatment effect is measured by longitudinal outcomes, such as quality of life scores assessed over the duration of a clinical trial, and the subset is determined by a continuous baseline covariate, such as age or the expression level of a biomarker. A threshold linear mixed model is introduced, and a smoothing maximum likelihood method is proposed to estimate the parameters in the model. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is employed to maximize the proposed smoothing likelihood function. The proposed procedure is evaluated through simulation studies and an application to the analysis of data from a randomized clinical trial on patients with advanced colorectal cancer. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
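A simplified sketch of the smoothing idea in this entry, assuming a plain linear model rather than the paper's linear mixed model: the threshold indicator 1{age < c} is replaced by a normal CDF, and the smoothed likelihood is maximized with BFGS via scipy. Bandwidth, starting values, and the toy data are assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 400
age = rng.uniform(40, 80, n)                 # continuous baseline covariate
trt = rng.integers(0, 2, n)                  # treatment arm
# True model: treatment only helps patients with age below 60
y = 1.0 + 0.8 * trt * (age < 60) + rng.normal(0, 1, n)

H = 2.0                                      # smoothing bandwidth: an assumption

def neg_loglik(theta):
    b0, b1, cut, log_sig = theta
    # Smooth the indicator 1{age < cut} with a normal CDF -- a sketch of the
    # smoothing-maximum-likelihood idea, not the paper's mixed model.
    mu = b0 + b1 * trt * norm.cdf((cut - age) / H)
    return -norm.logpdf(y, mu, np.exp(log_sig)).sum()

fit = minimize(neg_loglik, x0=[0.0, 0.0, 65.0, 0.0], method="BFGS")
print("estimated cut-point:", fit.x[2])      # should land near 60
```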
14. Validation of an automated-ETDRS near and intermediate visual acuity measurement.
- Author
-
Pang, Yi, Sparschu, Lauren, and Nylin, Elyse
- Subjects
VISUAL acuity, DIABETIC retinopathy, CLINICAL trials, VISION testing, COMPUTER-aided diagnosis, ALGORITHMS, RESEARCH evaluation
- Abstract
Background: The aim of this study was to determine the repeatability of an automated-ETDRS (Early Treatment Diabetic Retinopathy Study) near and intermediate visual acuity measurement in subjects with normal visual acuity and subjects with reduced visual acuity. The agreement of automated-ETDRS with gold standard chart-based visual acuity measurement was also studied. Methods: Fifty-one subjects were tested (aged 23 to 91 years; 33 subjects with normal visual acuity: 6/7.5 or better; 18 subjects with reduced visual acuity: 6/9 to 6/30). Near and intermediate visual acuity of one eye from each subject was measured with an automated tablet-computer system (M&S Technologies, Inc.) and Precision Vision paper chart in a random sequence. Subjects were retested one week later. Repeatability was evaluated using the 95 per cent limits of agreement (LoA) between the two visits. Results: Average difference between automated-ETDRS near visual acuity and near visual acuity by paper chart was 0.02 ± 0.10 logMAR (p > 0.05). Agreement of near visual acuity between automated-ETDRS and paper chart was good, with 95 per cent LoA of ±0.19 logMAR. Furthermore, automated-ETDRS near visual acuity showed good repeatability (95 per cent LoA of ±0.20). Mean difference between automated-ETDRS intermediate visual acuity and intermediate visual acuity by paper chart was 0.02 ± 0.10 logMAR (p > 0.05). Agreement of intermediate visual acuity between automated-ETDRS and paper chart was good, with 95 per cent LoA of ±0.20 logMAR. In addition, automated-ETDRS intermediate visual acuity had good repeatability (95 per cent LoA of ±0.16). Conclusion: Automated-ETDRS near and intermediate visual acuity measurement showed good repeatability and agreement with the gold standard chart-based visual acuity measurement. The findings of this study indicate the automated visual acuity measurement system may have potential for use in both patient care and clinical trials. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
15. An Efficient Algorithm to Determine the Optimal Two-Stage Randomized Multinomial Designs in Oncology Clinical Trials.
- Author
-
Zhang, Yong, Mietlowski, William, Chen, Bee, and Wang, Yibin
- Subjects
ONCOLOGY, ALGORITHMS, MEDICINE, CLINICAL trials, CLINICAL medicine research
- Abstract
Sun et al. (2009) proposed an optimal two-stage randomized multinomial design that incorporates both response rate (RR) and early progression rate (EPR) in designing phase II oncology trials. However, determining the design parameters in their approach requires evaluating huge numbers of combinations of possible design parameter values, and is thus computationally intensive. In this paper we develop an efficient algorithm to identify the optimal two-stage randomized multinomial designs in phase II oncology clinical trials comparing a treatment arm to a control arm. The proposed algorithm substantially reduces the computational burden via an approximation method. Some other techniques are also used to further improve its efficiency. Examples show that the proposed algorithm achieves more than a 90% reduction in computation time while having an acceptably low approximation error. This may enhance usage of the optimal two-stage multinomial design in clinical trials and also make it feasible to extend the design to more complicated scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
16. Usability of an Intelligent Virtual Assistant for Promoting Behavior Change and Self-Care in Older People with Type 2 Diabetes.
- Author
-
Balsa, João, Félix, Isa, Cláudio, Ana Paula, Carmo, Maria Beatriz, Silva, Isabel Costa e, Guerreiro, Ana, Guedes, Maria, Henriques, Adriana, and Guerreiro, Mara Pereira
- Subjects
HEALTH self-care, PATIENT compliance, ASSISTIVE technology centers, PEOPLE with diabetes, GRAPHICAL user interfaces, QUALITATIVE research, BEHAVIOR modification, RESEARCH funding, ARTIFICIAL intelligence, CONTENT analysis, CLINICAL trials, BEHAVIOR, JUDGMENT sampling, LONGITUDINAL method, THEMATIC analysis, CONCEPTUAL structures, HEALTH behavior, SOCIAL support, DRUGS, PSYCHOSOCIAL factors, ALGORITHMS, DIET, PHYSICAL activity, OLD age
- Abstract
In the context of the VASelfCare project, we developed an application prototype of an intelligent anthropomorphic virtual assistant. Designed as a relational agent, the virtual assistant has the role of supporting older people with Type 2 Diabetes Mellitus (T2D) in medication adherence and lifestyle changes. Our paper has two goals: describing the essentials of this prototype, and reporting on usability evaluation. We describe the general architecture of the prototype, including the graphical component, and focus on its main feature: the incorporation, in the way the dialogue flows, of Behavior Change Techniques, identified through a theoretical framework, the Behaviour Change Wheel. Usability was experimentally evaluated in field tests in a purposive sample of 20 participants (11 older adults with T2D and 9 experts). The Portuguese version of the System Usability Scale was employed, supplemented with qualitative data from open questions, diaries, digital notes and telephone follow-ups. The aggregated mean SUS score was 73.75 (SD 13.31), which corresponds to a borderline rating of excellent. Textual data were content analyzed and will be prioritized to further improve usability. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
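For reference, the System Usability Scale score reported in this entry is computed with the standard SUS scoring rule; a minimal sketch:

```python
def sus_score(responses):
    """System Usability Scale: 10 items answered on a 1..5 scale.
    Odd items contribute (score - 1), even items (5 - score);
    the summed contributions are scaled by 2.5 to give a 0..100 score."""
    assert len(responses) == 10
    contrib = [(r - 1) if i % 2 == 0 else (5 - r)   # i=0 is item 1 (odd)
               for i, r in enumerate(responses)]
    return 2.5 * sum(contrib)

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))    # -> 75.0, near the 73.75 reported
```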
17. The Correspondence Between Causal and Traditional Mediation Analysis: the Link Is the Mediator by Treatment Interaction.
- Author
-
MacKinnon, David P., Valente, Matthew J., and Gonzalez, Oscar
- Subjects
MEDIATION, LETTERS, SUBSTANCE abuse prevention, ALGORITHMS, CLINICAL trials, COMPARATIVE studies, RESEARCH methodology, MEDICAL cooperation, PREVENTIVE health services, RESEARCH, STATISTICS, TESTOSTERONE, DATA analysis, EVALUATION research, ROOT cause analysis
- Abstract
Mediation analysis is a methodology used to understand how and why behavioral phenomena occur. New mediation methods based on the potential outcomes framework are a seminal advancement for mediation analysis because they focus on the causal basis of mediation. Despite the importance of the potential outcomes framework in other fields, the methods are not well known in prevention and other disciplines. The interaction of a treatment (X) and a mediator (M) on an outcome variable (Y) is central to the potential outcomes framework for causal mediation analysis and provides a way to link traditional and modern causal mediation methods. As described in the paper, for a continuous mediator and outcome, if the XM interaction is zero, then potential outcomes estimators of the mediated effect are equal to the traditional model estimators. If the XM interaction is nonzero, the potential outcomes estimators correspond to simple direct and simple mediated contrasts for the treatment and the control groups in traditional mediation analysis. Links between traditional and causal mediation estimators clarify the meaning of potential outcomes framework mediation quantities. A simulation study demonstrates that testing for an XM interaction that is zero in the population can reduce power to detect mediated effects, and ignoring a nonzero XM interaction in the population can also reduce power to detect mediated effects in some situations. We recommend that prevention scientists incorporate evaluation of the XM interaction in their research. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
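The link described in this entry can be written out for the standard single-mediator regression model; a short sketch of the equations (notation ours):

```latex
% Single-mediator regression model with treatment X, mediator M, outcome Y:
M = i_1 + aX + e_1, \qquad Y = i_2 + c'X + bM + gXM + e_2
% Natural indirect effect of moving X from 0 to 1, with the mediator
% contrast evaluated at treatment level x:
\mathrm{NIE}(x) = a\,(b + g\,x)
% NIE(0) = ab reproduces the traditional product-of-coefficients estimator;
% NIE(1) = a(b + g) is the simple mediated contrast for the treated group.
% The two coincide exactly when the XM interaction g = 0.
```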
18. Small-sample performance and underlying assumptions of a bootstrap-based inference method for a general analysis of covariance model with possibly heteroskedastic and nonnormal errors.
- Author
-
Zimmermann, Georg, Pauly, Markus, and Bathke, Arne C
- Subjects
ANALYSIS of covariance, COVARIANCE matrices, HETEROSCEDASTICITY, CLINICAL trials, SPINAL cord injuries, STATISTICS, RESEARCH, SAMPLE size (Statistics), RESEARCH methodology, EVALUATION research, MEDICAL cooperation, COMPARATIVE studies, DATA analysis, STATISTICAL models, ALGORITHMS
- Abstract
It is well known that the standard F test is severely affected by heteroskedasticity in unbalanced analysis of covariance models. Currently available potential remedies for such a scenario are based on heteroskedasticity-consistent covariance matrix estimation (HCCME). However, the HCCME approach tends to be liberal in small samples. Therefore, in the present paper, we propose a combination of HCCME and a wild bootstrap technique, with the aim of improving the small-sample performance. We precisely state a set of assumptions for the general analysis of covariance model and discuss their practical interpretation in detail, since this issue may have been somewhat neglected in applied research so far. We prove that these assumptions are sufficient to ensure the asymptotic validity of the combined HCCME-wild bootstrap analysis of covariance. The results of our simulation study indicate that our proposed test remedies the problems of the analysis of covariance F test and its heteroskedasticity-consistent alternatives in small to moderate sample size scenarios. Our test only requires very mild conditions, thus being applicable in a broad range of real-life settings, as illustrated by the detailed discussion of a dataset from preclinical research on spinal cord injury. Our proposed method is ready-to-use and allows for valid hypothesis testing in frequently encountered settings (e.g., comparing group means while adjusting for baseline measurements in a randomized controlled clinical trial). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
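A generic sketch of combining an HC covariance estimator with a wild bootstrap in a covariate-adjusted two-group comparison, in the spirit of this entry; the null-restricted residual scheme, the HC3 flavor, and the Rademacher multipliers are our assumptions, not necessarily the authors' exact procedure:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 40                                        # deliberately small sample
group = rng.integers(0, 2, n)                 # treatment indicator
baseline = rng.normal(size=n)                 # baseline covariate
y = 0.5 * baseline + rng.normal(0, 1 + group, n)   # heteroskedastic, no group effect

X = sm.add_constant(np.column_stack([group, baseline]))
fit = sm.OLS(y, X).fit(cov_type="HC3")        # HCCME t-statistic for the group effect
t_obs = fit.tvalues[1]

# Wild bootstrap under H0: refit without the group term, perturb its residuals
# with Rademacher weights, and rebuild the null distribution of the statistic.
X0 = sm.add_constant(baseline)
fit0 = sm.OLS(y, X0).fit()
t_boot = np.empty(2000)
for b in range(2000):
    v = rng.choice([-1.0, 1.0], size=n)       # Rademacher multipliers
    yb = fit0.fittedvalues + v * fit0.resid
    t_boot[b] = sm.OLS(yb, X).fit(cov_type="HC3").tvalues[1]

p_value = np.mean(np.abs(t_boot) >= np.abs(t_obs))
print(f"HC3 wild-bootstrap p-value for the group effect: {p_value:.3f}")
```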
19. Identification of pulmonary nodules via CT images with hierarchical fully convolutional networks.
- Author
-
Chen, Genlang, Zhang, Jiajian, Zhuo, Deyun, Pan, Yuning, and Pang, Chaoyi
- Subjects
PULMONARY nodules, CANCER tomography, LUNG cancer diagnosis, CLINICAL trials, IMAGE segmentation, ALGORITHMS, COMPUTED tomography, DATABASES, LUNG tumors, RESEARCH funding, THREE-dimensional imaging, SOLITARY pulmonary nodule
- Abstract
Lung cancer is one of the most commonly diagnosed cancers worldwide. The early diagnosis of pulmonary nodules in computed tomography (CT) chest scans is crucial for potential patients. Recent research has shown that deep learning methods have made significant progress in medical diagnosis. However, performance on the identification of pulmonary nodules is not yet satisfactory enough for adoption in clinical practice, largely because of numerous false positives or heavy processing times. Building on fully convolutional networks (FCNs), in this study we propose a new method for identifying pulmonary nodules. The method segments suspected nodules from their surroundings and then removes the false positives. In particular, it optimizes the network architecture to identify nodules rapidly and accurately. To remove false positives, the suspected nodules are first reduced using 2D models. Furthermore, exploiting the significant differences between nodules and non-nodules in 3D shape, the remaining false positives are eliminated by classification with 3D CNNs. The experiments on 1000 patients indicate that our proposed method achieved a 97.78% sensitivity rate for segmentation and a 90.1% accuracy rate for detection. The maximum response time was less than 30 s and the average time was about 15 s. Graphical Abstract: The proposed approach consists of three stages. In stage I, raw data are filtered and normalized. The clean normalized data are then segmented in stage II to extract the suspected nodular lesions through 2D FCNs. Stage III removes false positives generated at stage II via 3D CNNs and outputs the final results. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
20. Interval estimation in multi-stage drop-the-losers designs.
- Author
-
Lu, Xiaomin, He, Ying, and Wu, Samuel S.
- Subjects
CLINICAL trials, STOCHASTIC orders, TREATMENT effectiveness, RANDOM variables, SAMPLE size (Statistics), ALGORITHMS, EXPERIMENTAL design, STATISTICS
- Abstract
Drop-the-losers designs have been discussed extensively in the past decades, mostly focusing on two-stage models. Designs with more than two stages have recently received increasing attention due to their improved efficiency over the corresponding two-stage designs. In this paper, we consider the problem of estimating and testing the effect of the selected treatment under the setting of three-stage drop-the-losers designs. A conservative interval estimator is proposed, which is proved to have at least the specified coverage probability using a stochastic ordering approach. The proposed interval estimator is also demonstrated numerically to have narrower interval width but higher coverage rate than the bootstrap method proposed by Bowden and Glimm (Biometrical Journal, vol. 56, pp. 332-349) in most cases. It also follows directly from the stochastic ordering result that the family-wise error rate is strongly controlled, with the maximum achieved at the global null hypothesis. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
21. Optimal two-stage enrichment design correcting for biomarker misclassification.
- Author
-
Zang, Yong and Guo, Beibei
- Subjects
BIOLOGICAL tags, CLINICAL trials, INDIVIDUALIZED medicine, RANDOMIZED controlled trials, SIMULATION methods & models, ALGORITHMS, STATISTICAL models
- Abstract
The enrichment design is an important clinical trial design to detect the treatment effect of the molecularly targeted agent (MTA) in personalized medicine. Under this design, patients are stratified into marker-positive and marker-negative subgroups based on their biomarker statuses and only the marker-positive patients are enrolled into the trial and randomized to receive either the MTA or a standard treatment. As the biomarker plays a key role in determining the enrollment of the trial, a misclassification of the biomarker can induce substantial bias, undermine the integrity of the trial, and seriously affect the treatment evaluation. In this paper, we propose a two-stage optimal enrichment design that utilizes the surrogate marker to correct for the biomarker misclassification. The proposed design is optimal in the sense that it maximizes the probability of correctly classifying each patient's biomarker status based on the surrogate marker information. In addition, after analytically deriving the bias caused by the biomarker misclassification, we develop a likelihood ratio test based on the EM algorithm to correct for such bias. We conduct comprehensive simulation studies to investigate the operating characteristics of the optimal design and the results confirm the desirable performance of the proposed design. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
22. Quantitative cone-beam CT imaging in radiation therapy using planning CT as a prior: First patient studies.
- Author
-
Niu, Tianye, Al-Basheer, Ahmad, and Zhu, Lei
- Subjects
QUANTITATIVE research, TOMOGRAPHY, MEDICAL imaging systems, IMAGE quality in medical radiography, CLINICAL trials, ALGORITHMS, PERFORMANCE evaluation
- Abstract
Purpose: Quantitative cone-beam CT (CBCT) imaging is on increasing demand for high-performance image guided radiation therapy (IGRT). However, the current CBCT has poor image qualities mainly due to scatter contamination. Its current clinical application is therefore limited to patient setup based on only bony structures. To improve CBCT imaging for quantitative use, we recently proposed a correction method using planning CT (pCT) as the prior knowledge. Promising phantom results have been obtained on a tabletop CBCT system, using a correction scheme with rigid registration and without iterations. More challenges arise in clinical implementations of our method, especially because patients have large organ deformation in different scans. In this paper, we propose an improved framework to extend our method from bench to bedside by including several new components. Methods: The basic principle of our correction algorithm is to estimate the primary signals of CBCT projections via forward projection on the pCT image, and then to obtain the low-frequency errors in CBCT raw projections by subtracting the estimated primary signals and low-pass filtering. We improve the algorithm by using deformable registration to minimize the geometry difference between the pCT and the CBCT images. Since the registration performance relies on the accuracy of the CBCT image, we design an optional iterative scheme to update the CBCT image used in the registration. Large correction errors result from the mismatched objects in the pCT and the CBCT scans. Another optional step of gas pocket and couch matching is added into the framework to reduce these effects. Results: The proposed method is evaluated on four prostate patients, of which two cases are presented in detail to investigate the method performance for a large variety of patient geometry in clinical practice. The first patient has small anatomical changes from the planning to the treatment room. Our algorithm works well even without the optional iterations and the gas pocket and couch matching. The image correction on the second patient is more challenging due to the effects of gas pockets and attenuating couch. The improved framework with all new components is used to fully evaluate the correction performance. The enhanced image quality has been evaluated using mean CT number and spatial nonuniformity (SNU) error as well as contrast improvement factor. If the pCT image is considered as the ground truth, on the four patients, the overall mean CT number error is reduced from over 300 HU to below 16 HU in the selected regions of interest (ROIs), and the SNU error is suppressed from over 18% to below 2%. The average soft-tissue contrast is improved by an average factor of 2.6. Conclusions: We further improve our pCT-based CBCT correction algorithm for clinical use. Superior correction performance has been demonstrated on four patient studies. By providing quantitative CBCT images, our approach significantly increases the accuracy of advanced CBCT-based clinical applications for IGRT. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
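A toy sketch of the correction principle described in this entry: subtract the pCT-estimated primary signal from a raw CBCT projection, low-pass filter the residual to isolate the low-frequency (scatter-like) error, and subtract that error back out. The forward projector is outside this sketch, and the Gaussian low-pass and its width are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_projection(raw_cbct_proj, primary_est, sigma=20.0):
    """One-projection sketch of the pCT-prior correction.

    raw_cbct_proj : measured CBCT projection (2D array)
    primary_est   : primary signal forward-projected from the planning CT
                    (the forward projector itself is not sketched here)
    sigma         : low-pass width in pixels -- an assumption
    """
    low_freq_error = gaussian_filter(raw_cbct_proj - primary_est, sigma)
    return raw_cbct_proj - low_freq_error

# Toy demonstration with a synthetic smooth scatter field
rng = np.random.default_rng(4)
primary = rng.uniform(0.5, 1.5, (256, 256))                  # stand-in primary signal
scatter = gaussian_filter(rng.normal(0, 1, (256, 256)), 40)  # smooth contamination
raw = primary + scatter
corrected = correct_projection(raw, primary)
print("RMS error before:", np.sqrt(((raw - primary) ** 2).mean()),
      "after:", np.sqrt(((corrected - primary) ** 2).mean()))
```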
23. Advances in quantitative magnetic resonance imaging-based biomarkers for Alzheimer disease.
- Author
-
Dickerson, Bradford C.
- Subjects
ALZHEIMER'S disease, BIOMARKERS, CLINICAL trials, ALGORITHMS, DATA acquisition systems, BRAIN
- Abstract
A critical goal of Alzheimer disease research is to identify disease biomarkers that can be used in clinical trials to assist in the adjudication of treatment effects. While clinical validation remains a goal for many potential Alzheimer disease biomarkers, the rapid proliferation of markers has sparked comparative efforts as well. New data acquisition methods and sophisticated image-processing algorithms are poised to make a substantial impact on our ability to make precise measurements of the structure and function of regions within the living human brain and their connections and chemical composition. This commentary provides a perspective on a recently published paper and how it illustrates progress and challenges in the field. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
24. FTIR-based spectroscopic analysis in the identification of clinically aggressive prostate cancer.
- Author
-
Baker, M. J., Gazi, E., Brown, M. D., Shanks, J. H., Gardner, P., and Clarke, N. W.
- Subjects
PROSTATE cancer, DIAGNOSIS, FOURIER transform infrared spectroscopy, MOLECULAR diagnosis, GLEASON grading system, ALGORITHMS, SPECTROSCOPIC imaging, CLINICAL trials, RESEARCH, RESEARCH methodology, EVALUATION research, MEDICAL cooperation, TUMOR classification, COMPARATIVE studies, RESEARCH funding, INFRARED spectroscopy, PROSTATE tumors
- Abstract
Fourier transform infrared (FTIR) spectroscopy is a vibrational spectroscopic technique that uses infrared radiation to vibrate molecular bonds within the sample that absorbs it. As different samples contain different molecular bonds or different configurations of molecular bonds, FTIR allows us to obtain chemical information on molecules within the sample. Fourier transform infrared microspectroscopy in conjunction with a principal component-discriminant function analysis (PC-DFA) algorithm was applied to the grading of prostate cancer (CaP) tissue specimens. The PC-DFA algorithm is used alongside the established diagnostic measures of Gleason grading and the tumour/node/metastasis system. Principal component-discriminant function analysis improved on the sensitivity and specificity of a previously reported three-band Gleason score criterion diagnosis, attaining an overall sensitivity of 92.3% and specificity of 99.4%. For the first time, we present the use of a two-band criterion showing an association of FTIR-based spectral characteristics with clinically aggressive behaviour in CaP, manifest as local and/or distal spread. This paper shows the potential of spectroscopic analysis for evaluating the biopotential of CaP in an accurate and reproducible manner. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
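PC-DFA, as used in this entry, is in essence principal component analysis followed by a linear discriminant function on the component scores; a minimal sklearn sketch, with a public dataset standing in for the FTIR spectra (the component count is an assumption):

```python
from sklearn.datasets import load_wine                  # stand-in for FTIR spectra
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# PC-DFA: compress the spectra to principal component scores, then apply
# a linear discriminant function to those scores.
X, y = load_wine(return_X_y=True)
pc_dfa = make_pipeline(StandardScaler(), PCA(n_components=5),
                       LinearDiscriminantAnalysis())
print("CV accuracy:", cross_val_score(pc_dfa, X, y, cv=5).mean())
```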
25. Signal detection in the pharmaceutical industry: integrating clinical and computational approaches.
- Author
-
Hauben, Manfred
- Subjects
SIGNAL detection, PHARMACEUTICAL industry, PRODUCT safety, DRUG side effects, DRUG monitoring, ONLINE data processing, CLINICAL trials, DRUG interactions, DATA mining, ALGORITHMS, COMMERCIAL product evaluation, INDUSTRIES, INFORMATION storage & retrieval systems
- Abstract
Drug safety profiles are dynamic and established over time using multiple, complementary datasets and tools. The principal concern of pharmacovigilance is the detection of adverse drug reactions that are novel by virtue of their clinical nature, severity and/or frequency as soon as possible with minimum patient exposure. A key step in the process is the detection of ‘signals’ that direct safety reviewers to associations that might be worthy of further investigation. Although the ‘prepared mind’ remains the cornerstone of signal detection, safety reviewers seeking potential signals by scrutinising very large, sparse databases may find themselves ‘drowning in data but thirsty for knowledge’. Understandably, health authorities, pharmaceutical companies and academic centres are developing, testing and/or deploying computer-assisted database screening tools (also known as data-mining algorithms [DMAs]) to assist human reviewers. The most commonly used DMAs involve disproportionality analysis that project high-dimensional data onto two-dimensional (2 × 2) contingency tables in the context of an independence model. The objective of this paper is to extend the discussion of the evaluation, potential utility and limitations of the commonly used DMAs by providing a ‘holistic’ perspective on their use as one component of a comprehensive suite of signal detection strategies incorporating clinical and statistical approaches to signal detection – a marriage of technology and the ‘prepared mind’. Data-mining exercises involving spontaneous reports submitted to the US FDA will be used for illustration. Potential pitfalls and obstacles to the acceptance and implementation of data mining will be considered and suggestions for future research will be offered. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
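The disproportionality analyses discussed in this entry project the reporting database onto 2 × 2 tables; a minimal sketch of two common measures, the proportional reporting ratio (PRR) and the reporting odds ratio (ROR), on toy counts (illustrative numbers only, not real FDA data):

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from the standard 2x2 table:
    a = reports with drug & event, b = drug & other events,
    c = other drugs & event,     d = other drugs & other events."""
    return (a / (a + b)) / (c / (c + d))

def ror(a, b, c, d):
    """Reporting odds ratio from the same table."""
    return (a * d) / (b * c)

print(prr(30, 970, 1000, 99000))   # -> 3.0, a conventional candidate-signal range
print(ror(30, 970, 1000, 99000))   # -> ~3.06
```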
26. Application of the development stages of a cluster randomized trial to a framework for evaluating complex health interventions.
- Author
-
Loeb, Mark B.
- Subjects
CLINICAL trials, EVALUATION of medical care, ANTIBIOTICS, NURSING care facilities, ALGORITHMS
- Abstract
Introduction: Trials of complex health interventions often pose difficult methodologic challenges. The objective of this paper is to assess the extent to which the various development steps of a cluster randomized trial to optimize antibiotic use in nursing homes are represented in a recently published framework for the design and evaluation of complex health interventions. In so doing, the utility of the framework for health services researchers is evaluated. Methods: Using the five phases of the framework (theoretical, identification of components of the intervention, definition of trial and intervention design, methodological issues for main trial, promoting effective implementation), corresponding stages in the development of the cluster randomized trial using diagnostic and treatment algorithms to optimize the use of antibiotics in nursing homes are identified and described. Results: Synthesis of evidence needed to construct the algorithms, survey and qualitative research used to define components of the algorithms, a pilot study to assess the feasibility of delivering the algorithms, methodological issues in the main trial including choice of design, allocation concealment, outcomes, sample size calculation, and analysis are adequately represented using the stages of the framework. Conclusions: The framework is a useful resource for researchers planning a randomized clinical trial of a complex intervention. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
27. Pattern mixture models for clinical validation of biomarkers in the presence of missing data.
- Author
-
Gao, Fei, Dong, Jun, Zeng, Donglin, Rong, Alan, and Ibrahim, Joseph G.
- Subjects
ANTINEOPLASTIC agents, ALGORITHMS, BIOMETRY, CLINICAL trials, COLON tumors, COMPUTER simulation, PROTEINS, RECTUM tumors, REGRESSION analysis, RESEARCH funding, PROPORTIONAL hazards models
- Abstract
Targeted therapies for cancers are sometimes only effective in a subset of patients with a particular biomarker status. In clinical development, the biomarker status is typically determined by an investigational-use-only/laboratory-developed test. A market ready test (MRT) is developed later to meet regulatory requirements and for future commercial use. In the USA, the clinical validation of MRT showing efficacy and safety profile of the targeted therapy in the biomarker subgroups determined by MRT is needed for pre-market approval. One of the major challenges in carrying out clinical validation is that the biomarker status per MRT is often missing for many subjects. In this paper, we treat biomarker status as a missing covariate and develop a novel pattern mixture model in the setting of a proportional hazards model for the time-to-event outcome variable. We specify a multinomial regression model for the missing biomarker statuses, and develop an expectation-maximization algorithm by the Method of Weights (Ibrahim, Journal of the American Statistical Association, 1990) to estimate the parameters in the regression model. We use Louis' formula (Louis, Journal of the Royal Statistical Society, Series B, 1982) to obtain standard error estimates. We examine the performance of our method in extensive simulation studies and apply our method to a clinical trial in metastatic colorectal cancer. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
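For orientation, the Method of Weights referenced in this entry handles a missing categorical covariate by weighting each candidate category in the E-step of EM; a sketch of the general scheme (the paper's specific proportional-hazards and multinomial specifications are not spelled out here):

```latex
% Method of Weights (Ibrahim, 1990), sketched for a missing categorical
% covariate z (here, biomarker status per MRT). E-step weights at iteration t:
w_{ik} = P(z_i = k \mid y_i, x_i; \theta^{(t)})
       = \frac{f(y_i \mid z_i = k, x_i; \theta^{(t)})\, \pi_k(x_i; \theta^{(t)})}
              {\sum_{k'} f(y_i \mid z_i = k', x_i; \theta^{(t)})\, \pi_{k'}(x_i; \theta^{(t)})}
% M-step: maximize the weighted complete-data log-likelihood
Q(\theta \mid \theta^{(t)}) = \sum_i \sum_k w_{ik}\,
    \log\!\bigl[f(y_i \mid z_i = k, x_i; \theta)\, \pi_k(x_i; \theta)\bigr]
% with w_{ik} degenerate (0/1) for subjects whose biomarker status is observed.
```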
28. A new framework of statistical inferences based on the valid joint sampling distribution of the observed counts in an incomplete contingency table.
- Author
-
Tian, Guo-Liang and Li, Hui-Qiong
- Subjects
CLINICAL trials, CELLS, CONTINGENCY tables, NEUROLOGIC examination, PATIENTS, ALGORITHMS, CONFIDENCE intervals, EXPERIMENTAL design, MOTHERS, NEUROLOGICAL disorders, PROBABILITY theory, RESPIRATORY organ sounds, SMOKING, STATISTICS, DATA analysis, DISEASE complications
- Abstract
Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independence assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap testing hypothesis methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independence assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independence assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables, and the analysis results again confirm the conclusions obtained from the simulation studies. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
29. Implementing the EffTox dose-finding design in the Matchpoint trial.
- Author
-
Brock, Kristian, Billingham, Lucinda, Copland, Mhairi, Siddique, Shamyla, Sirovica, Mirjana, and Yap, Christina
- Subjects
FLUDARABINE, CHRONIC myeloid leukemia, CANCER chemotherapy, PARAMETERIZATION, CLINICAL trials, DRUG approval, ANTINEOPLASTIC agents, ALGORITHMS, ANTIMETABOLITES, ANTIVIRAL agents, COMPARATIVE studies, EXPERIMENTAL design, HETEROCYCLIC compounds, IMIDAZOLES, RESEARCH methodology, MEDICAL cooperation, HEALTH outcome assessment, PROBABILITY theory, RESEARCH, EVALUATION research, CYTARABINE, IDARUBICIN, STATISTICAL models
- Abstract
Background: The Matchpoint trial aims to identify the optimal dose of ponatinib to give with conventional chemotherapy consisting of fludarabine, cytarabine and idarubicin to chronic myeloid leukaemia patients in blastic transformation phase. The dose should be both tolerable and efficacious. This paper describes our experience implementing EffTox in the Matchpoint trial. Methods: EffTox is a Bayesian adaptive dose-finding trial design that jointly scrutinises binary efficacy and toxicity outcomes. We describe a nomenclature for succinctly describing outcomes in phase I/II dose-finding trials. We use dose-transition pathways, where doses are calculated for each feasible set of outcomes in future cohorts. We introduce the phenomenon of dose ambivalence, where EffTox can recommend different doses after observing the same outcomes. We also describe our experiences with outcome ambiguity, where the categorical evaluation of some primary outcomes is temporarily delayed. Results: We arrived at an EffTox parameterisation that is simulated to perform well over a range of scenarios. In scenarios where dose ambivalence manifested, we were guided by the dose-transition pathways. This technique facilitates planning, and also helped us overcome short-term outcome ambiguity. Conclusions: EffTox is an efficient and powerful design, but not without its challenges. Joint phase I/II clinical trial designs will likely become increasingly important in coming years as we further investigate non-cytotoxic treatments and streamline the drug approval process. We hope this account of the problems we faced and the solutions we used will help others implement this dose-finding clinical trial design. Trial Registration: Matchpoint was added to the European Clinical Trials Database (https://www.clinicaltrialsregister.eu/ctr-search/trial/2012-005629-65/GB) on 2013-12-30. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
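Dose-transition pathways, as used in this entry, enumerate every feasible cohort outcome together with the dose the model would then recommend; a toy Python sketch, with a hypothetical recommendation rule standing in for the fitted EffTox model:

```python
from itertools import combinations_with_replacement

# One-letter outcome codes per patient, in the spirit of the nomenclature
# described above: E = efficacy only, T = toxicity only, B = both, N = neither.
OUTCOMES = "ETBN"

def cohort_outcomes(size=3):
    """Every distinguishable outcome multiset for one cohort."""
    return list(combinations_with_replacement(OUTCOMES, size))

def dose_transition_pathways(recommend, dose, depth=2, size=3):
    """Enumerate (dose, cohort outcome, next dose) pathways `depth` cohorts
    ahead. `recommend` stands in for the fitted model's dose recommendation,
    which is not sketched here."""
    if depth == 0:
        return [[]]
    paths = []
    for oc in cohort_outcomes(size):
        nxt = recommend(dose, oc)
        for tail in dose_transition_pathways(recommend, nxt, depth - 1, size):
            paths.append([(dose, "".join(oc), nxt)] + tail)
    return paths

# Hypothetical stand-in rule: escalate when at least two of three patients
# show efficacy (E or B), otherwise de-escalate; dose levels 1..4.
toy = lambda d, oc: min(d + 1, 4) if sum(o in "EB" for o in oc) >= 2 else max(d - 1, 1)
print(len(dose_transition_pathways(toy, dose=2)))  # 20 outcomes/cohort -> 400 pathways
```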
30. Sample size re-estimation in paired comparative diagnostic accuracy studies with a binary response.
- Author
-
McCray, Gareth P. J., Titman, Andrew C., Ghaneh, Paula, and Lancaster, Gillian A.
- Subjects
SAMPLE size (Statistics) ,PANCREATIC cancer diagnosis ,MAXIMUM likelihood statistics ,BIOPSY ,COMPARATIVE studies ,ALGORITHMS ,CLINICAL trials ,COMPUTER simulation ,RESEARCH methodology ,MEDICAL cooperation ,PAIRED comparisons (Mathematics) ,PANCREATIC tumors ,PROBABILITY theory ,RESEARCH ,RESEARCH evaluation ,EVALUATION research ,TREATMENT effectiveness ,RETROSPECTIVE studies ,STATISTICAL models - Abstract
Background: The sample size required to power a study to a nominal level in a paired comparative diagnostic accuracy study, i.e. a study in which the diagnostic accuracy of two testing procedures is compared relative to a gold standard, depends on the conditional dependence between the two tests - the lower the dependence, the greater the sample size required. A priori, we usually do not know the dependence between the two tests and thus cannot determine the exact sample size required. One option is to use the implied sample size for the maximal negative dependence, giving the largest possible sample size. However, this is potentially wasteful of resources and unnecessarily burdensome on study participants, as the study is likely to be overpowered. A more accurate estimate of the sample size can be determined at a planned interim analysis point where the sample size is re-estimated. Methods: This paper discusses a sample size estimation and re-estimation method based on the maximum likelihood estimates, under an implied multinomial model, of the observed values of conditional dependence between the two tests and, if required, prevalence, at a planned interim. The method is illustrated by comparing the accuracy of two procedures for the detection of pancreatic cancer, one procedure using the standard battery of tests, and the other using the standard battery with the addition of a PET/CT scan, all relative to the gold standard of a cell biopsy. Simulation of the proposed method illustrates its robustness under various conditions. Results: The results show that the type I error rate of the overall experiment is stable using our suggested method and that the type II error rate is close to or above nominal. Furthermore, the instances in which the type II error rate is above nominal occur in the situations where the lowest sample size is required, meaning a lower impact on the actual number of participants recruited. Conclusion: We recommend multinomial model maximum likelihood estimation of the conditional dependence between paired diagnostic accuracy tests at an interim to reduce the number of participants required to power the study to at least the nominal level. Trial Registration: ISRCTN ISRCTN73852054. Registered 9th of January 2015. Retrospectively registered. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
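The re-estimation idea above can be illustrated with the standard McNemar-type sample-size approximation for paired binary tests, in which the discordant-cell probabilities (and hence the conditional dependence between the tests) drive the required number of pairs. This is a generic sketch, not the paper's likelihood-based procedure, and the planning and interim values are invented.

```python
# Interim sample-size re-estimation sketch for paired binary tests,
# using the standard McNemar-type normal approximation.
from math import sqrt
from scipy.stats import norm

def mcnemar_n(p10, p01, alpha=0.05, power=0.8):
    """Pairs needed to detect p10 != p01 (two-sided)."""
    delta, psi = p10 - p01, p10 + p01          # psi = discordant proportion
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return (za * sqrt(psi) + zb * sqrt(psi - delta ** 2)) ** 2 / delta ** 2

# Conservative planning value under strong negative dependence (large psi) ...
print(round(mcnemar_n(p10=0.35, p01=0.20)))    # ~190 pairs
# ... versus the interim re-estimate from observed discordant counts.
n10, n01, n_interim = 22, 7, 100
print(round(mcnemar_n(p10=n10 / n_interim, p01=n01 / n_interim)))  # ~99 pairs
```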
31. A hyperspectral vessel image registration method for blood oxygenation mapping.
- Author
-
Wang, Qian, Li, Qingli, Zhou, Mei, Sun, Zhen, Liu, Hongying, and Wang, Yiting
- Subjects
HYPERSPECTRAL imaging systems ,IMAGE registration ,OXYGENATORS ,OXIMETRY ,CLINICAL trials - Abstract
Blood oxygenation mapping by means of optical oximetry is of significant importance in clinical trials. This paper uses hyperspectral imaging technology to obtain in vivo images for blood oxygenation detection. The experiment involves a dorsal skinfold window chamber preparation built on adult (8–10 weeks of age) female BALB/c nu/nu mice, with in vivo image acquisition performed by a hyperspectral imaging system. To obtain accurate spatial and spectral information about the targets, an automatic registration scheme is proposed. An adaptive feature detection method, which combines a local threshold method and a level-set filter, is presented to extract target vessels. A reliable feature matching algorithm that uses the correlative information inherent in hyperspectral images is employed to remove outliers. The registered images are then used for blood oxygenation mapping. Registration evaluation results show that most of the false matches are removed and smooth, concentrated spectra are obtained. This intensity-invariant feature detection with outlier-removing feature matching proves effective in hyperspectral vessel image registration. Therefore, an in vivo hyperspectral imaging system, with the assistance of the proposed registration scheme, provides a technique for blood oxygenation research. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
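The outlier-removal step above exploits correlative information across hyperspectral bands; as a generic stand-in for that step, the sketch below removes false feature matches with a plain RANSAC consensus fit of a 2-D affine transform. Everything here, including the synthetic matches, is illustrative rather than the paper's algorithm.

```python
# RANSAC-style removal of false point matches via a 2-D affine fit.
import numpy as np

rng = np.random.default_rng(0)

def fit_affine(src, dst):
    # Solve dst ~ [x, y, 1] @ A in least squares; A has shape (3, 2).
    X = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A

def ransac(src, dst, n_iter=200, tol=2.0):
    best_inliers = np.zeros(len(src), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)   # minimal affine sample
        A = fit_affine(src[idx], dst[idx])
        resid = np.linalg.norm(
            np.hstack([src, np.ones((len(src), 1))]) @ A - dst, axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

src = rng.uniform(0, 100, (40, 2))
dst = src + np.array([5.0, -3.0])              # true transform: a shift
dst[:6] += rng.uniform(20, 40, (6, 2))         # six false matches
print("inliers kept:", ransac(src, dst).sum(), "of", len(src))
```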
32. A default method to specify skeletons for Bayesian model averaging continual reassessment method for phase I clinical trials.
- Author
-
Pan, Haitao and Yuan, Ying
- Subjects
ALGORITHMS ,ANTINEOPLASTIC agents ,CLINICAL trials ,COMPUTER simulation ,DRUG dosage ,DOSE-effect relationship in pharmacology ,DRUG toxicity ,PHARMACOKINETICS ,PROBABILITY theory ,RESEARCH funding ,STATISTICS ,TUMORS ,STATISTICAL models - Abstract
The Bayesian model averaging continual reassessment method (BMA-CRM) is a Bayesian dose-finding design. It improves the robustness and overall performance of the continual reassessment method (CRM) by specifying multiple skeletons (or models) and then using Bayesian model averaging to automatically favor the best-fitting model for better decision making. Specifying multiple skeletons, however, can be challenging for practitioners. In this paper, we propose a default way to specify skeletons for the BMA-CRM. We show that skeletons that appear rather different may actually lead to equivalent models. Motivated by this, we define a nonequivalence measure to index the difference among skeletons. Using this measure, we extend the model calibration method of Lee and Cheung (2009) to choose optimal skeletons that maximize the average percentage of correct selection of the maximum tolerated dose and ensure sufficient nonequivalence among the skeletons. Our simulation study shows that the proposed method has desirable operating characteristics. We provide software to implement the proposed method. Copyright © 2016 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
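A minimal sketch of the Bayesian model averaging step the entry above relies on: each skeleton defines a one-parameter power model p_k^exp(a), and skeletons are weighted by their marginal likelihoods given the accumulated dose-toxicity data. The skeletons, prior, and data below are illustrative, not the paper's calibrated defaults.

```python
# Posterior model weights for two CRM skeletons under power models
# p_k^exp(a) with a ~ N(0, 2), via numerical integration on a grid.
import numpy as np

skeletons = [np.array([0.05, 0.12, 0.25, 0.40]),
             np.array([0.20, 0.30, 0.40, 0.50])]
doses_tried = np.array([0, 1, 1, 2])      # dose index per patient
tox = np.array([0, 0, 1, 1])              # 1 = dose-limiting toxicity

a_grid = np.linspace(-4, 4, 801)
da = a_grid[1] - a_grid[0]
prior_a = np.exp(-a_grid**2 / (2 * 2)) / np.sqrt(2 * np.pi * 2)

def marginal_likelihood(skeleton):
    # Toxicity probability per patient per grid point: shape (n_pat, n_grid).
    p = skeleton[doses_tried][:, None] ** np.exp(a_grid)[None, :]
    lik = np.prod(np.where(tox[:, None] == 1, p, 1 - p), axis=0)
    return (lik * prior_a).sum() * da      # Riemann-sum integration

ml = np.array([marginal_likelihood(s) for s in skeletons])
weights = ml / ml.sum()                    # equal prior model probabilities
print("posterior model weights:", weights)
```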
33. Robust inference for mixed censored and binary response models with missing covariates.
- Author
-
Sarkar, Angshuman, Das, Kalyan, and Sinha, Sanjoy K.
- Subjects
REGRESSION analysis ,EXPECTATION-maximization algorithms ,METROPOLIS ,DIABETES ,CLINICAL trials ,BLOOD sugar analysis ,ALGORITHMS ,PROBABILITY theory ,SYSTEM analysis - Abstract
In biomedical and epidemiological studies, outcomes are often of a mixed discrete and continuous nature. Furthermore, due to technical or other limitations, continuous responses may be censored and some covariates may not be observed completely. In this paper, we develop a model to handle these complex situations. Our methodology is developed in a general framework and provides a full-scale robust analysis of such complex models. The proposed robust maximum likelihood estimators of the model parameters are resistant to potential outliers in the data. We discuss the asymptotic properties of the robust estimators. To avoid computational difficulties involving irreducibly high-dimensional integrals, we propose a Monte Carlo method based on the Metropolis algorithm for approximating the robust maximum likelihood estimators. We study the empirical properties of these estimators in simulations. We also illustrate the proposed robust method using clustered data on blood sugar content from a clinical trial of individuals who were investigated for diabetes. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
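The Metropolis ingredient mentioned above can be sketched generically: a random-walk sampler draws from a conditional density known only up to proportionality, as would be needed inside a Monte Carlo step for missing covariates. The target below is a stand-in normal density, not the paper's conditional distribution.

```python
# Random-walk Metropolis sampler for a univariate log-density.
import numpy as np

def metropolis(log_target, x0, n_iter=5000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    draws = []
    for _ in range(n_iter):
        prop = x + step * rng.standard_normal()
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        draws.append(x)
    return np.array(draws)

# Stand-in target: a missing covariate ~ N(1, 0.5^2) given observed data.
draws = metropolis(lambda x: -((x - 1.0) ** 2) / (2 * 0.25), x0=0.0)
print(draws[1000:].mean())  # ~1.0 after burn-in
```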
34. STOPP/START version 2--development of software applications: easier said than done?
- Author
-
ANRYS, PAULINE, BOLAND, BENOÎT, DEGRYSE, JEAN-MARIE, DE LEPELEIRE, JAN, PETROVIC, MIRKO, MARIEN, SOPHIE, DALLEUR, OLIVIA, STRAUVEN, GOEDELE, FOULON, VEERLE, and SPINEWINE, ANNE
- Subjects
INAPPROPRIATE prescribing (Medicine) ,ALGORITHMS ,CLINICAL trials ,CLUSTER analysis (Statistics) ,DECISION support systems ,INFORMATION storage & retrieval systems ,MEDICAL databases ,MEDICAL cooperation ,NURSING home patients ,NURSING care facilities ,RESEARCH ,SOFTWARE architecture ,CONTENT mining ,DESCRIPTIVE statistics ,OLD age ,PREVENTION - Abstract
Explicit criteria, such as the STOPP/START criteria, are increasingly used both in clinical practice and in research to identify potentially inappropriate prescribing in older people. In an article on the STOPP/START criteria version 2, O'Mahony et al. have pointed out the advantages of developing computerised criteria. Both clinical decision support systems to support healthcare professionals and software applications to automatically detect inappropriate prescribing in research studies can be developed. In the process of developing such tools, difficulties may occur. In the context of a research study, we have developed an algorithm to automatically apply STOPP/START criteria version 2 to our research database. We comment in this paper on different kinds of difficulties encountered and make suggestions that could be taken into account when developing the next version of the criteria. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
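A minimal sketch of what "computerising" one criterion involves, which is the crux of the difficulties the entry above reports: the prose criterion must be mapped onto structured prescription data. The criterion shown (long-term benzodiazepine use) is paraphrased, and the record schema (ATC codes, a days-supplied field) is hypothetical.

```python
# One STOPP-like criterion encoded as an executable rule over a
# hypothetical prescription record schema.
from dataclasses import dataclass

@dataclass
class Prescription:
    atc: str            # WHO ATC code
    days_supplied: int

def stopp_long_term_benzodiazepine(meds: list[Prescription]) -> bool:
    """Flag benzodiazepine use for 4 weeks or longer (ATC N05BA/N05CD)."""
    return any(p.atc.startswith(("N05BA", "N05CD")) and p.days_supplied >= 28
               for p in meds)

patient = [Prescription("N05BA01", 90),    # diazepam, 90 days
           Prescription("C07AB02", 365)]   # metoprolol, 1 year
print(stopp_long_term_benzodiazepine(patient))  # True
```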
35. Mostly Exploration-Free Algorithms for Contextual Bandits.
- Author
-
Bastani, Hamsa, Bayati, Mohsen, and Khosravi, Khashayar
- Subjects
GREEDY algorithms ,ALGORITHMS ,BIG data ,CLINICAL trials - Abstract
The contextual bandit literature has traditionally focused on algorithms that address the exploration–exploitation tradeoff. In particular, greedy algorithms that exploit current estimates without any exploration may be suboptimal in general. However, exploration-free greedy algorithms are desirable in practical settings where exploration may be costly or unethical (e.g., clinical trials). Surprisingly, we find that a simple greedy algorithm can be rate optimal (achieves asymptotically optimal regret) if there is sufficient randomness in the observed contexts (covariates). We prove that this is always the case for a two-armed bandit under a general class of context distributions that satisfy a condition we term covariate diversity. Furthermore, even absent this condition, we show that a greedy algorithm can be rate optimal with positive probability. Thus, standard bandit algorithms may unnecessarily explore. Motivated by these results, we introduce Greedy-First, a new algorithm that uses only observed contexts and rewards to determine whether to follow a greedy algorithm or to explore. We prove that this algorithm is rate optimal without any additional assumptions on the context distribution or the number of arms. Extensive simulations demonstrate that Greedy-First successfully reduces exploration and outperforms existing (exploration-based) contextual bandit algorithms such as Thompson sampling or upper confidence bound. This paper was accepted by J. George Shanthikumar, big data analytics. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
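A minimal sketch of the exploration-free greedy linear bandit analysed above: each arm keeps a ridge-regularised least-squares estimate of its reward parameters, and the arm with the highest predicted reward is always played. The synthetic environment, with i.i.d. Gaussian contexts, is the kind of covariate-diverse setting in which the paper shows greedy play can be rate optimal.

```python
# Purely greedy linear contextual bandit on a synthetic environment.
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 3, 2, 5000
theta_true = rng.standard_normal((K, d))          # unknown arm parameters

A = [np.eye(d) for _ in range(K)]                 # ridge Gram matrices
b = [np.zeros(d) for _ in range(K)]
regret = 0.0

for t in range(T):
    x = rng.standard_normal(d)                    # diverse context
    theta_hat = np.array([np.linalg.solve(A[k], b[k]) for k in range(K)])
    arm = int(np.argmax(theta_hat @ x))           # greedy choice, no exploration
    reward = theta_true[arm] @ x + rng.standard_normal()
    A[arm] += np.outer(x, x)                      # update chosen arm only
    b[arm] += reward * x
    regret += (theta_true @ x).max() - theta_true[arm] @ x

print("cumulative regret:", round(regret, 1))
```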
36. Design and analysis of clinical trials in the presence of delayed treatment effect.
- Author
-
Sit, Tony, Liu, Mengling, Shnaidman, Michael, and Ying, Zhiliang
- Subjects
ALGORITHMS ,BIOLOGICAL assay ,CLINICAL trials ,COMPARATIVE studies ,COMPUTER simulation ,EPIDEMIOLOGICAL research ,RESEARCH methodology ,MEDICAL cooperation ,RESEARCH ,RESEARCH funding ,SURVIVAL analysis (Biometry) ,TIME ,EVALUATION research ,TREATMENT effectiveness ,STATISTICAL models - Abstract
In clinical trials with a survival endpoint, it is common to observe an overlap between the two Kaplan-Meier curves of the treatment and control groups during the early stage of the trial, indicating a potential delayed treatment effect. Formulas have been derived for the asymptotic power of the log-rank test in the presence of a delayed treatment effect, together with the accompanying sample size calculation. In this paper, we first reformulate the alternative hypothesis with the delayed treatment effect in a rescaled time domain, which yields a simplified sample size formula for the log-rank test in this context. We further propose an intersection-union test to examine the efficacy of treatment with delayed effect and show it to be more powerful than the log-rank test. Simulation studies are conducted to demonstrate the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
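A simulation sketch of the delayed-effect setting described above: the treatment hazard equals the control hazard before a delay time t0 and drops afterwards, producing early overlap of the survival curves. The unweighted log-rank statistic is implemented directly; all parameter values are invented.

```python
# Simulating a delayed treatment effect and testing it with an
# unweighted log-rank statistic (no censoring in this sketch).
import numpy as np

rng = np.random.default_rng(1)

def sim_arm(n, lam, t0=0.0, hr=1.0):
    """Piecewise-exponential times: hazard lam before t0, hr*lam after."""
    t = rng.exponential(1 / lam, n)
    late = t > t0
    t[late] = t0 + rng.exponential(1 / (hr * lam), late.sum())
    return t

def logrank_z(t1, t2):
    times = np.concatenate([t1, t2])
    grp = np.concatenate([np.ones_like(t1), np.zeros_like(t2)])
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(times):
        at_risk = times >= t
        d = np.sum(times == t)
        n_tot, n1 = at_risk.sum(), (at_risk * grp).sum()
        d1 = np.sum((times == t) * grp)
        o_minus_e += d1 - d * n1 / n_tot
        if n_tot > 1:
            var += d * (n1 / n_tot) * (1 - n1 / n_tot) * (n_tot - d) / (n_tot - 1)
    return o_minus_e / np.sqrt(var)

control = sim_arm(300, lam=0.5)
treated = sim_arm(300, lam=0.5, t0=1.0, hr=0.6)   # effect kicks in at t0 = 1
print("log-rank z:", round(logrank_z(treated, control), 2))
```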
37. Optimal adaptive two-stage designs for early phase II clinical trials.
- Author
-
Shan, Guogen, Wilding, Gregory E., Hutson, Alan D., and Gerstenberger, Shawn
- Subjects
ALGORITHMS ,CLINICAL trials ,COMPARATIVE studies ,EXPERIMENTAL design ,RESEARCH methodology ,MEDICAL cooperation ,RESEARCH ,RESEARCH funding ,STATISTICS ,SAMPLE size (Statistics) ,EVALUATION research - Abstract
Simon's optimal two-stage design has been widely used in early phase clinical trials for oncology and AIDS studies with binary endpoints. With this approach, the second-stage sample size is fixed when the trial passes the first stage with sufficient activity. Adaptive designs, such as those due to Banerjee and Tsiatis (2006) and Englert and Kieser (2013), are flexible in the sense that the second-stage sample size depends on the response from the first stage, and these designs often reduce the expected sample size under the null hypothesis as compared with Simon's approach. An unappealing trait of the existing designs is that the second-stage sample size is not guaranteed to be a non-increasing function of the first-stage response rate. In this paper, an efficient search procedure, the branch-and-bound algorithm, is used to search extensively for the optimal adaptive design with the smallest expected sample size under the null, while the type I and II error rates are maintained and the aforementioned monotonicity property is respected. The proposed optimal design is observed to have smaller expected sample sizes than Simon's optimal design, and its maximum total sample size is very close to that from Simon's method. The proposed optimal adaptive two-stage design is recommended for use in practice to improve the flexibility and efficiency of early phase therapeutic development. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
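For context on the operating characteristics any such design search must evaluate, the sketch below computes the rejection probability and expected sample size of a classical Simon two-stage design from binomial tail probabilities; (r1, n1, r, n) = (1, 10, 5, 29) is the commonly tabulated optimal design for p0 = 0.10 versus p1 = 0.30 at 5% type I error and 80% power. The branch-and-bound search itself is not reproduced here.

```python
# Operating characteristics of a Simon two-stage design.
from scipy.stats import binom

def simon_oc(r1, n1, r, n, p):
    """P(reject H0) and E[N] at true response rate p."""
    pet = binom.cdf(r1, n1, p)                     # early stopping probability
    reject = sum(binom.pmf(x, n1, p) * binom.sf(r - x, n - n1, p)
                 for x in range(r1 + 1, n1 + 1))   # continue, then exceed r overall
    en = n1 + (1 - pet) * (n - n1)
    return reject, en

p0, p1 = 0.10, 0.30
r1, n1, r, n = 1, 10, 5, 29
alpha, en0 = simon_oc(r1, n1, r, n, p0)
power, _ = simon_oc(r1, n1, r, n, p1)
print(f"type I error {alpha:.3f}, power {power:.3f}, E[N|H0] {en0:.1f}")
```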
38. Hierarchical likelihood inference on clustered competing risks data.
- Author
-
Christian, Nicholas J., Ha, Il Do, and Jeong, Jong-Hyeon
- Subjects
ALGORITHMS ,BREAST tumors ,CLINICAL trials ,COMPUTER simulation ,DATABASES ,PROBABILITY theory ,REGRESSION analysis ,RESEARCH funding ,STATISTICS ,SYSTEM analysis ,RELATIVE medical risk ,PROPORTIONAL hazards models ,STATISTICAL models - Abstract
The frailty model, an extension of the proportional hazards model, is often used to model clustered survival data. However, some extension of the ordinary frailty model is required when there are competing risks within a cluster. Under competing risks, the underlying processes affecting the events of interest and competing events could be different but correlated. In this paper, the hierarchical likelihood method is proposed for inference in the cause-specific hazard frailty model for clustered competing risks data. The hierarchical likelihood incorporates fixed effects as well as random effects into an extended likelihood function, so that the method does not require intensive numerical methods to find the marginal distribution. Simulation studies are performed to assess the behavior of the estimators for the regression coefficients and the correlation structure of the bivariate frailty distribution for competing events. The proposed method is illustrated with a breast cancer dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
39. Upper-limb kinematic reconstruction during stroke robot-aided therapy.
- Author
-
Papaleo, E., Zollo, L., Garcia-Aracil, N., Badesa, F., Morales, R., Mazzoleni, S., Sterzi, S., Guglielmelli, E., and Badesa, F J
- Subjects
ARM physiology ,STROKE treatment ,HUMAN kinematics ,MEDICAL rehabilitation ,OPTOELECTRONICS ,ALGORITHMS ,ARM ,CLINICAL trials ,COMPARATIVE studies ,ELBOW ,KINEMATICS ,RESEARCH methodology ,MEDICAL cooperation ,RESEARCH ,ROBOTICS ,SHOULDER joint ,STROKE ,EVALUATION research ,STROKE rehabilitation - Abstract
The paper proposes a novel method for an accurate and unobtrusive reconstruction of the upper-limb kinematics of stroke patients during robot-aided rehabilitation tasks with end-effector machines. The method is based on a robust analytic procedure for inverse kinematics that simply uses, in addition to hand pose data provided by the robot, upper arm acceleration measurements for computing a constraint on elbow position; it is exploited for task space augmentation. The proposed method can enable in-depth comprehension of the planning strategy of stroke patients in the joint space and, consequently, allow developing therapies tailored to their residual motor capabilities. The experimental validation has a twofold purpose: (1) a comparative analysis with an optoelectronic motion capturing system is used to assess the method's capability to reconstruct joint motion; (2) the application of the method to healthy and stroke subjects during circle-drawing tasks with the InMotion2 robot is used to evaluate its efficacy in discriminating stroke from healthy behavior. The experimental results have shown that arm angles are reconstructed with an RMSE of 8.3 × 10⁻³ rad. Moreover, the comparison between healthy and stroke subjects has revealed different features in the joint space in terms of mean values and standard deviations, which also allow assessing inter- and intra-subject variability. The findings of this study contribute to the investigation of motor performance in the joint space and Cartesian space of stroke patients undergoing robot-aided therapy, thus allowing: (1) evaluating the outcomes of the therapeutic approach, (2) re-planning the robotic treatment based on patient needs, and (3) understanding pathology-related motor strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
40. 'It's like you have a hand again': An ultra-precise mind-controlled prosthetic
- Subjects
Data mining ,Prostheses and implants ,Algorithms ,Machine learning ,Time ,Clinical trials ,Editors ,Data warehousing/data mining ,Algorithm ,Health - Abstract
2020 MAR 16 (NewsRx) -- By a News Reporter-Staff News Editor at Clinical Trials Week -- ANN ARBOR--In a major advance in mind-controlled prosthetics for amputees, University of Michigan researchers [...]
- Published
- 2020
41. An extended hazard model with longitudinal covariates.
- Author
-
Tseng, Y. K., Su, Y. R., Mao, M., and Wang, J. L.
- Subjects
CLINICAL trials ,STUDY & teaching of medicine ,MAXIMUM likelihood statistics ,MONTE Carlo method ,ALGORITHMS ,BIOMARKERS - Abstract
In clinical trials and other medical studies, it has become increasingly common to observe simultaneously an event time of interest and longitudinal covariates. In the literature, joint modelling approaches have been employed to analyse both survival and longitudinal processes and to investigate their association. However, these approaches focus mostly on developing adaptive and flexible longitudinal processes based on a prespecified survival model, most commonly the Cox proportional hazards model. In this paper, we propose a general class of semiparametric hazard regression models, referred to as the extended hazard model, for the survival component. This class includes two popular survival models, the Cox proportional hazards model and the accelerated failure time model, as special cases. The proposed model is flexible for modelling event data, and its nested structure facilitates model selection for the survival component through likelihood ratio tests. A pseudo joint likelihood approach is proposed for estimating the unknown parameters and components via a Monte Carlo EM algorithm. Asymptotic theory for the estimators is developed together with theory for the semiparametric likelihood ratio tests. The performance of the procedure is demonstrated through simulation studies. A case study featuring data from a Taiwanese HIV/AIDS cohort study further illustrates the usefulness of the extended hazard model. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
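As a hedged note on the nesting claim in the entry above: with a time-fixed covariate vector Z, the extended hazard family is commonly written as below (the paper's version replaces Z with a longitudinal process Z(t)).

```latex
% Extended hazard family with a time-fixed covariate vector Z:
\lambda(t \mid Z) = \lambda_0\!\left(t\, e^{\beta^{\top} Z}\right) e^{\gamma^{\top} Z}
% \beta = 0      -> Cox proportional hazards: \lambda_0(t)\, e^{\gamma^{\top} Z}
% \beta = \gamma -> accelerated failure time: \lambda_0(t e^{\beta^{\top} Z})\, e^{\beta^{\top} Z}
```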
42. Optimal Personalised Treatment Computation through In Silico Clinical Trials on Patient Digital Twins*.
- Author
-
Sinisi, Stefano, Alimguzhin, Vadim, Mancini, Toni, Tronci, Enrico, Mari, Federico, Leeners, Brigitte, Eiter, Thomas, Maratea, Marco, and Vallati, Mauro
- Subjects
CLINICAL trials ,INDIVIDUALIZED medicine ,MEDICAL protocols ,EXPERIMENTAL medicine ,ALGORITHMS ,HUMAN reproductive technology ,ANIMAL experimentation - Abstract
In Silico Clinical Trials (ISCT), i.e. clinical experimental campaigns carried out by means of computer simulations, hold the promise to decrease time and cost for the safety and efficacy assessment of pharmacological treatments, reduce the need for animal and human testing, and enable precision medicine. In this paper we present methods and an algorithm that, by means of extensive computer simulation-based experimental campaigns (ISCT) guided by intelligent search, optimise a pharmacological treatment for an individual patient (precision medicine). We show the effectiveness of our approach on a case study involving a real pharmacological treatment, namely the downregulation phase of a complex clinical protocol for assisted reproduction in humans. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
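A minimal sketch of the ISCT optimisation loop described above: candidate treatment settings are scored by simulating a patient digital twin, and the best candidate is kept. `simulate_twin`, its parameters, and the plain grid search are all stand-ins; the paper uses a model-based simulator and intelligent search rather than exhaustive enumeration.

```python
# Treatment optimisation over a simulated patient digital twin.
import itertools

def simulate_twin(patient_params, dose, duration):
    # Hypothetical outcome model: penalise under- and over-dosing.
    target = patient_params["sensitivity"] * 10
    efficacy_gap = abs(dose * duration - target)
    toxicity = 0.01 * dose ** 2 * duration
    return efficacy_gap + toxicity          # lower is better

patient = {"sensitivity": 2.4}
candidates = itertools.product([0.5, 1.0, 1.5, 2.0, 2.5],   # dose levels
                               [7, 14, 21, 28])             # duration in days
best = min(candidates, key=lambda c: simulate_twin(patient, *c))
print("best (dose, duration):", best)
```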
43. An Iterative Approach to Reconstruct the Phase of a Field from an X-Ray Free Electron Laser Intensity Measurement.
- Author
-
Kenneth Shui
- Subjects
X-rays ,ELECTRONS ,ALGORITHMS ,SIMULATION games ,CLINICAL trials - Abstract
In X-ray crystallography and Coherent Diffraction Imaging, only the intensity of the diffraction patterns can be directly measured, not their phase. The process of retrieving the phase is known as the phase retrieval problem. In this project, I explore a classic iterative approach to phase retrieval applied to real-space data from an XFEL simulator. The conventional Gerchberg-Saxton algorithm is modified and simulated. Two figures of merit are developed to quantify the algorithm's performance. Convergence time, initial phase, and noise sensitivity are evaluated. It is determined that the algorithm can be used to solve the phase problem for XFEL applications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
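A minimal sketch of the conventional Gerchberg-Saxton iteration mentioned above, using NumPy FFTs: alternate between the object and diffraction planes, imposing the known magnitude in each plane while keeping the current phase estimate. The toy object stands in for the XFEL simulator data used in the entry.

```python
# Gerchberg-Saxton phase retrieval on a toy object.
import numpy as np

rng = np.random.default_rng(0)
obj = np.zeros((64, 64)); obj[24:40, 28:36] = 1.0   # toy real-space object
source_mag = np.abs(obj)                            # known object-plane magnitude
target_mag = np.abs(np.fft.fft2(obj))               # "measured" diffraction magnitude

# Start from a random phase guess in the object plane.
field = source_mag * np.exp(1j * rng.uniform(0, 2 * np.pi, obj.shape))
for _ in range(200):
    F = np.fft.fft2(field)
    F = target_mag * np.exp(1j * np.angle(F))          # impose measured magnitude
    field = np.fft.ifft2(F)
    field = source_mag * np.exp(1j * np.angle(field))  # impose object magnitude

err = (np.linalg.norm(np.abs(np.fft.fft2(field)) - target_mag)
       / np.linalg.norm(target_mag))
print("relative Fourier-magnitude error:", round(err, 4))
```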
44. Blockchain-based Clinical Trials: A Meta-Model Framework for Enhancing Security and Transparency with a Novel Algorithm.
- Author
-
Anwar, Aymen, Goyal, S. B., and Jan, Tony
- Subjects
CLINICAL trials ,DATA privacy ,BLOCKCHAINS ,DATA security ,ALGORITHMS ,DATA management - Abstract
Clinical trials are crucial to medication research, but data security, transparency, and integrity issues often arise. Blockchain technology offers a decentralized, tamper-proof framework for clinical trial data management, promising to overcome these issues. Current blockchain-based clinical trial platforms lack scalability, interoperability, and integrity. A meta-model paradigm for blockchain-based clinical trial security and transparency addresses these constraints. The system employs a unique algorithm with smart contracts and consensus procedures to protect data privacy, reduce redundancy, and promote platform compatibility. The algorithm aims to optimize resource consumption and reduce computational overhead while ensuring security and trust. To improve security and transparency, we analyze the proposed meta-model framework using performance, scalability, and security metrics and benchmarks. We observed that the meta-model framework and algorithm are efficient, scalable, and safe, laying the groundwork for future research. In particular, the framework can minimize clinical trial costs and time while improving data quality, traceability, and accountability. The suggested meta-model framework and algorithm can improve blockchain-based clinical trial security and transparency, making data management more trustworthy and efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
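A minimal sketch of the tamper-evidence property underlying the entry above: each trial record commits to the hash of its predecessor, so altering any past record invalidates every later link. Real blockchain platforms layer consensus and smart contracts on top of this idea; the record fields here are invented.

```python
# Hash-chained, tamper-evident log of clinical-trial records.
import hashlib, json

def add_block(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    for i, block in enumerate(chain):
        body = {"record": block["record"], "prev": block["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"] or (i > 0 and block["prev"] != chain[i - 1]["hash"]):
            return False
    return True

chain = []
add_block(chain, {"patient": "P001", "visit": 1, "outcome": "stable"})
add_block(chain, {"patient": "P001", "visit": 2, "outcome": "improved"})
print(verify(chain))                     # True
chain[0]["record"]["outcome"] = "worse"  # tamper with history
print(verify(chain))                     # False
```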
45. Fully Automated Segmentation of the Pons and Midbrain Using Human T1 MR Brain Images.
- Author
-
Nigro, Salvatore, Cerasa, Antonio, Zito, Giancarlo, Perrotta, Paolo, Chiaravalloti, Francesco, Donzuso, Giulia, Fera, Franceso, Bilotta, Eleonora, Pantano, Pietro, and Quattrone, Aldo
- Subjects
MAGNETIC resonance imaging of the brain ,MESENCEPHALON ,BRAIN stem ,BRAIN diseases ,PONS test ,BRAIN anatomy ,CLINICAL trials ,PATIENTS - Abstract
Purpose: This paper describes a novel method to automatically segment the human brainstem into midbrain and pons, called LABS: Landmark-based Automated Brainstem Segmentation. LABS processes high-resolution structural magnetic resonance images (MRIs) according to a revised landmark-based approach integrated with a thresholding method, without manual interaction. Methods: This method was first tested on morphological T1-weighted MRIs of 30 healthy subjects. Its reliability was further confirmed by including neurological patients (with Alzheimer's disease) from the ADNI repository, in whom volumetric loss within the brainstem had previously been described. Segmentation accuracy was evaluated against expert-drawn manual delineation. To evaluate the quality of LABS segmentation we used volumetric, spatial overlap and distance-based metrics. Results: The comparison of the quantitative measurements provided by LABS against manual segmentations revealed excellent results in healthy controls when considering either the midbrain (Dice measures higher than 0.9; volume ratio around 1 and Hausdorff distance around 3) or the pons (Dice measures around 0.93; volume ratio ranging from 1.024 to 1.05 and Hausdorff distance around 2). Similar performance was detected for AD patients considering segmentation of the pons (Dice measures higher than 0.93; volume ratio ranging from 0.97 to 0.98 and Hausdorff distance ranging from 1.07 to 1.33), while LABS performed less well for the midbrain (Dice measures ranging from 0.86 to 0.88; volume ratio around 0.95 and Hausdorff distance ranging from 1.71 to 2.15). Conclusions: Our study represents the first attempt to validate a new fully automated method for in vivo segmentation of two anatomically complex brainstem subregions. We believe that our method may represent a useful tool for future applications in clinical practice. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
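A minimal sketch of the three metric families used above to validate segmentations against manual delineations: spatial overlap (Dice), volumetric agreement (volume ratio), and a distance-based measure (Hausdorff), computed here on toy 3-D binary masks with SciPy.

```python
# Dice coefficient, volume ratio, and Hausdorff distance for two masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

auto = np.zeros((40, 40, 40), bool); auto[10:30, 10:30, 10:30] = True
manual = np.zeros_like(auto);        manual[11:30, 10:29, 10:30] = True

dice = 2 * np.logical_and(auto, manual).sum() / (auto.sum() + manual.sum())
volume_ratio = auto.sum() / manual.sum()

# Symmetric Hausdorff distance between the two voxel point sets.
pa, pm = np.argwhere(auto), np.argwhere(manual)
hausdorff = max(directed_hausdorff(pa, pm)[0], directed_hausdorff(pm, pa)[0])

print(f"Dice {dice:.3f}, volume ratio {volume_ratio:.3f}, "
      f"Hausdorff {hausdorff:.2f} voxels")
```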
46. Toronto AI startup BenchSci raises $22-million
- Subjects
Fidelity Investments ,Venture capital companies ,Medical research ,Mutual fund industry ,Biological products ,Algorithms ,Biochemistry ,Clinical trials ,Science literature ,Artificial intelligence ,Investments ,Time ,Antibodies ,Scientists ,Proteins ,General interest ,News, opinion and commentary - Abstract
Byline: SEAN SILCOFF, JOSH O'KANE Lead Toronto startup BenchSci Analytics Inc., which uses artificial intelligence to help biomedical scientists cut time and costs from research, is extending its reach with [...]
- Published
- 2020
47. Introduction to a generalized method for adaptive randomization in trials.
- Author
-
Hoare, Zöe S. J., Whitaker, Christopher J., and Whitaker, Rhiannon
- Subjects
MEDICAL research ,CLINICAL trials ,RANDOMIZED controlled trials ,ALGORITHMS ,MEDICAL care - Abstract
Background: Ideally, clinical trials should use some form of randomization for allocating participants to the treatment groups under trial. As an integral part of assessing the effectiveness of these treatment groups, randomization performed well can reduce, if not eliminate, some forms of bias that can be evident in non-randomized trials. Given the vast set of possible randomization methods to choose from, we demonstrate a method that incorporates many of the advantages of these other methods. Methods: A step-by-step introduction to using the adaptive randomization algorithm for conducting a clinical trial is given. Results: The implications, effects and capabilities of using the adaptive randomization algorithm are fully demonstrated and explained using simulated data and examples from actual trials. Conclusions: This paper provides an introduction to a dynamic type of treatment allocation, which fulfills the CONSORT requirement that participants be randomly allocated whilst maintaining a level of control of the imbalances overall, within the stratification variables and within the strata simultaneously. Maintaining control of the imbalances within the groups is vital, particularly if interim analyses are planned. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
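A minimisation-style sketch of the covariate-adaptive allocation idea introduced above: the next participant goes, with high probability, to the arm that best balances their stratification-factor levels, with pure randomisation breaking ties. This shows the generic idea only; the paper's algorithm generalises and parameterises it further.

```python
# Covariate-adaptive (minimisation-style) treatment allocation.
import random
random.seed(0)

arms = ("A", "B")
counts = {arm: {("sex", "F"): 0, ("sex", "M"): 0,
                ("age", "<65"): 0, ("age", "65+"): 0} for arm in arms}

def allocate(factors, p_best=0.8):
    # Imbalance score per arm: how many same-level participants it already has.
    imbalance = {arm: sum(counts[arm][f] for f in factors) for arm in arms}
    best = min(arms, key=lambda a: imbalance[a])
    if imbalance[arms[0]] == imbalance[arms[1]]:
        arm = random.choice(arms)                 # tied: pure randomisation
    else:
        arm = best if random.random() < p_best else next(a for a in arms if a != best)
    for f in factors:
        counts[arm][f] += 1
    return arm

print(allocate([("sex", "F"), ("age", "65+")]))
print(allocate([("sex", "F"), ("age", "<65")]))
```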
48. Capacity Planning under Clinical Trials Uncertainty in Continuous Pharmaceutical Manufacturing, 2: Solution Method.
- Author
-
Sundaramoorthy, Arul, Li, Xiang, Evans, James M. B., and Barton, Paul I.
- Subjects
PHARMACEUTICAL industry ,CLINICAL trials ,UNCERTAINTY (Information theory) ,CAPACITY requirements planning ,ALGORITHMS ,SOLUTION (Chemistry) ,CHEMICAL decomposition - Abstract
In Part 1 of this paper, we presented a scenario-based multiperiod mixed-integer linear programming (MILP) formulation for a capacity planning problem in continuous pharmaceutical manufacturing under clinical trials uncertainty. The number of scenarios and, thus, the formulation size grows exponentially with the number of products. The model size easily becomes intractable for conventional algorithms for more than 8 products. However, industrial-scale problems often involve 10 or more products, and thus a scalable solution algorithm is essential to solve such large-scale problems in reasonable times. In this part of the paper, we develop a rigorous decomposition strategy that exploits the underlying problem structure. We demonstrate the effectiveness of the proposed algorithm using several examples containing up to 16 potential products and over 65 000 scenarios. With the proposed decomposition algorithm, the solution time scales linearly with the number of scenarios, whereby a 16-product example with over 65 million binary variables, nearly 240 million continuous variables, and over 250 million constraints was solved in less than 6 h of solver time. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
49. A new approach for joint modelling of longitudinal measurements and survival times with a cure fraction.
- Author
-
Song, Hui, Peng, Yingwei, and Tu, Dongsheng
- Subjects
CLINICAL trials ,ALGORITHMS ,CANCER in women ,BREAST cancer ,DISEASES in women
- Published
- 2012
- Full Text
- View/download PDF
50. Challenges in implementing and obtaining acceptance for J-Tpeak assessment as the clinical component of CiPA.
- Author
-
Darpo, Borje and Couderc, Jean-Philippe
- Subjects
PHARMACOLOGY ,DRUG use testing ,CLINICAL trials ,QUALITY of life ,ALGORITHMS - Abstract
Introduction: This paper is based on a presentation held at the Annual Safety Pharmacology Society meeting in September 2017, at which challenges for the clinical component of CiPA were presented. FDA has published an automated algorithm for measurement of the J-Tpeak interval on a median beat from a vector magnitude lead derived from a 12-lead ECG. CiPA proposes that J-Tpeak prolongation < 10 ms can be used for drugs with a QTc effect < 20 ms to differentiate between safe and unsafe delayed repolarization and to reduce the level of ECG monitoring in late stage clinical trials. Methods: We applied FDA's algorithm, complemented with iCOMPAS, to moxifloxacin and dolasetron data from the IQ-CSRC study with 9 subjects on active and 6 on placebo. The effect on QTcF and corrected J-Tpeak (J-Tpeak_c) was analyzed using concentration-effect modeling. Results: There was a good correlation between QTcF and J-Tpeak_c prolongation after oral dosing of 400 mg moxifloxacin, with placebo-adjusted, change-from-baseline (ΔΔ) J-Tpeak_c of ~12 ms at concentrations that caused ΔΔQTcF of ~20 ms. On dolasetron, J-Tpeak_c was highly variable, no prolongation was seen, and an effect on ΔΔJ-Tpeak_c > 10 ms could be excluded across the observed plasma concentration range. Discussion: In this limited analysis performed on the IQ-CSRC study waveforms using FDA's automated algorithm, J-Tpeak prolongation was observed on moxifloxacin but not on dolasetron, despite clinical observations of proarrhythmias with both drugs. Challenges for the implementation of the J-Tpeak interval as a replacement for or complement to the QTc interval include demonstrating that the proposed clinical algorithm, using a J-Tpeak threshold of 10 ms, can be used to categorize drugs with a QT effect up to ~20 ms as having low pro-arrhythmic risk. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
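The analysis above rests on concentration-effect modelling. As a loose sketch of the simplest such model, the code below fits a linear regression of a placebo-adjusted, change-from-baseline interval on plasma concentration and predicts the effect at a concentration of interest; all data are synthetic, not the IQ-CSRC values, and real analyses use mixed-effects models rather than this plain least-squares fit.

```python
# Simplest-case linear concentration-effect model on synthetic data.
import numpy as np

conc = np.array([0.0, 0.4, 0.9, 1.6, 2.2, 3.1, 3.8])       # ug/mL, synthetic
dd_jtpc = np.array([0.5, 2.1, 4.0, 6.2, 8.0, 11.5, 13.8])  # ms, synthetic

slope, intercept = np.polyfit(conc, dd_jtpc, 1)
pred_at_cmax = intercept + slope * 3.0   # effect at an assumed Cmax of 3.0 ug/mL
print(f"slope {slope:.2f} ms per ug/mL; "
      f"predicted effect at Cmax 3.0: {pred_at_cmax:.1f} ms")
```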