Search Results (28 results)
2. Coronavirus disease 2019 (COVID-19): an evidence map of medical literature.
- Author
Liu, Nan, Chee, Marcel Lucas, Niu, Chenglin, Pek, Pin Pin, Siddiqui, Fahad Javaid, Ansah, John Pastor, Matchar, David Bruce, Lam, Sean Shao Wei, Abdullah, Hairil Rizal, Chan, Angelique, Malhotra, Rahul, Graves, Nicholas, Koh, Mariko Siyue, Yoon, Sungwon, Ho, Andrew Fu Wah, Ting, Daniel Shu Wei, Low, Jenny Guek Hong, and Ong, Marcus Eng Hock
- Subjects
COVID-19, MEDICAL literature, COVID-19 pandemic, DISEASE mapping, DIAGNOSIS
- Abstract
Background: Since the beginning of the COVID-19 outbreak in December 2019, a substantial body of COVID-19 medical literature has been generated. As of June 2020, gaps and longitudinal trends in the COVID-19 medical literature remain unidentified, despite potential benefits for research prioritisation and policy setting in both the COVID-19 pandemic and future large-scale public health crises. Methods: In this paper, we searched PubMed and Embase for medical literature on COVID-19 between 1 January and 24 March 2020. We characterised the growth of the early COVID-19 medical literature using evidence maps and bibliometric analyses to elicit cross-sectional and longitudinal trends and systematically identify gaps. Results: The early COVID-19 medical literature originated primarily from Asia and focused mainly on clinical features and diagnosis of the disease. Many areas of potential research remain underexplored, such as mental health, the use of novel technologies and artificial intelligence, the pathophysiology of COVID-19 within different body systems, and the indirect effects of COVID-19 on the care of non-COVID-19 patients. Few articles involved research collaboration at the international level (24.7%). The median submission-to-publication duration was 8 days (interquartile range: 4-16). Conclusions: Although still in its early phase, COVID-19 research has generated a large volume of publications; however, there are knowledge gaps yet to be filled and areas for improvement for the global research community. Our analysis of early COVID-19 research may be valuable in informing research prioritisation and policy planning both in the current COVID-19 pandemic and in similar global health crises. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
3. Estimating age-specific COVID-19 fatality risk and time to death by comparing population diagnosis and death patterns: Australian data.
- Author
Marschner, Ian C.
- Subjects
COVID-19, DIAGNOSIS, COVID-19 pandemic, AGE groups, PUBLIC domain
- Abstract
Background: Mortality is a key component of the natural history of COVID-19 infection. Surveillance data on COVID-19 deaths and case diagnoses are widely available in the public domain, but they are not used to model time to death because they typically do not link diagnosis and death at an individual level. This paper demonstrates that by comparing the unlinked patterns of new diagnoses and deaths over age and time, age-specific mortality and time to death may be estimated using a statistical method called deconvolution. Methods: Age-specific data were analysed on 816 deaths among 6235 cases over age 50 years in Victoria, Australia, from the period January through December 2020. Deconvolution was applied assuming logistic dependence of case fatality risk (CFR) on age and a gamma time to death distribution. Non-parametric deconvolution analyses stratified into separate age groups were used to assess the model assumptions. Results: It was found that age-specific CFR rose from 2.9% at age 65 years (95% CI: 2.2-3.5) to 40.0% at age 95 years (CI: 36.6-43.6). The estimated mean time between diagnosis and death was 18.1 days (CI: 16.9-19.3) and showed no evidence of varying by age (heterogeneity P = 0.97). The estimated 90th percentile of time to death was 33.3 days (CI: 30.4-36.3; heterogeneity P = 0.85). The final age-specific model provided a good fit to the observed age-stratified mortality patterns. Conclusions: Deconvolution was demonstrated to be a powerful analysis method that could be applied to extensive data sources worldwide. Such analyses can inform transmission dynamics models and CFR assessment in emerging outbreaks. Based on these Australian data, it is concluded that death from COVID-19 occurs within three weeks of diagnosis on average but takes five weeks in 10% of fatal cases. Fatality risk is negligible in the young but rises above 40% in the elderly, while time to death does not seem to vary by age. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
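The deconvolution approach in item 3 lends itself to a compact sketch: expected deaths are the convolution of daily diagnosis counts with a gamma time-to-death distribution, scaled by the case fatality risk. Below is a minimal Python illustration on synthetic data; it collapses the paper's logistic age-dependence to a single CFR, and all counts and parameter values are invented rather than taken from the Victorian data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

rng = np.random.default_rng(1)

# Hypothetical daily diagnosis counts over one year (stand-in for surveillance data).
T = 365
diagnoses = rng.poisson(20.0 * np.exp(-(((np.arange(T) - 200) / 40.0) ** 2)), T)

def expected_deaths(cfr, shape, scale):
    """Convolve diagnoses with a gamma time-to-death distribution, scaled by CFR."""
    grid = np.arange(61)                                  # delay support: 0..60 days
    pmf = np.diff(gamma.cdf(grid, a=shape, scale=scale))  # P(delay in [k, k+1))
    return cfr * np.convolve(diagnoses, pmf)[:T]

# Simulate observed deaths under known parameters, then try to recover them.
deaths = rng.poisson(expected_deaths(0.05, 4.0, 4.5))

def neg_loglik(theta):
    mu = expected_deaths(*theta) + 1e-9
    return np.sum(mu - deaths * np.log(mu))  # negative Poisson log-likelihood (up to a constant)

fit = minimize(neg_loglik, x0=[0.02, 2.0, 5.0],
               bounds=[(1e-4, 1.0), (0.5, 20.0), (0.5, 20.0)])
cfr, shape, scale = fit.x
print(f"CFR = {cfr:.3f}, mean time to death = {shape * scale:.1f} days")
```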
4. Inverse probability of treatment-weighted competing risks analysis: an application on long-term risk of urinary adverse events after prostate cancer treatments.
- Author
Bolch, Charlotte A., Chu, Haitao, Jarosek, Stephanie, Cole, Stephen R., Elliott, Sean, and Virnig, Beth
- Subjects
PROSTATE cancer treatment, ADVERSE health care events, PROSTATECTOMY, CANCER radiotherapy, HEALTH risk assessment, URINARY organ disease diagnosis, PROSTATE tumors treatment, BLADDER diseases, REPORTING of diseases, LONGITUDINAL method, MEDICARE, HEALTH outcome assessment, PROBABILITY theory, RADIOTHERAPY, URETHRA stricture, URINARY organ diseases, DISEASE incidence, PROPORTIONAL hazards models, KAPLAN-Meier estimator, DIAGNOSIS
- Abstract
Background: To illustrate the 10-year risks of urinary adverse events (UAEs) among men diagnosed with prostate cancer and treated with different types of therapy, accounting for the competing risk of death. Methods: Prostate cancer is the second most common malignancy among adult males in the United States. Few studies have reported the long-term post-treatment risk of UAEs, and those that have did not appropriately account for competing deaths. This paper conducts an inverse probability of treatment (IPT) weighted competing risks analysis to estimate the effects of different prostate cancer treatments on the risk of UAE, using a matched cohort of prostate cancer/non-cancer control patients from the Surveillance, Epidemiology and End Results (SEER) Medicare database. Results: The study dataset included men aged 66 years or older, 83% of whom were white, with a median follow-up time of 4.14 years. Patients who underwent combination radical prostatectomy and external beam radiotherapy experienced the highest risk of UAE (IPT-weighted competing risks: HR 3.65 with 95% CI (3.28, 4.07); 10-year cumulative incidence = 36.5%). Conclusions: Findings suggest that IPT-weighted competing risks analysis provides an accurate estimator of the cumulative incidence of UAE, taking into account competing deaths as well as measured confounding bias. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
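As a rough illustration of the weighting step in item 4, the sketch below estimates a propensity score, forms stabilized IPT weights, and fits a weighted cause-specific Cox model with lifelines, treating competing deaths as censoring. This is a simplification of the paper's IPT-weighted cumulative incidence analysis, and the cohort, covariates and effect sizes are all hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 2000

# Hypothetical cohort: confounders, treatment choice, UAE time, competing death.
age = rng.normal(72, 5, n)
comorb = rng.binomial(1, 0.4, n)
trt = rng.binomial(1, 1 / (1 + np.exp(-(-8 + 0.1 * age + 0.5 * comorb))))
t_uae = rng.exponential(8 / np.exp(0.8 * trt + 0.02 * (age - 72)))
t_death = rng.exponential(12, n)
time = np.minimum.reduce([t_uae, t_death, np.full(n, 10.0)])
uae = (t_uae <= np.minimum(t_death, 10.0)).astype(int)  # event of interest observed

# Step 1: propensity score and stabilized IPT weights.
X = np.column_stack([age, comorb])
ps = LogisticRegression(max_iter=1000).fit(X, trt).predict_proba(X)[:, 1]
w = np.where(trt == 1, trt.mean() / ps, (1 - trt.mean()) / (1 - ps))

# Step 2: IPT-weighted cause-specific Cox model; competing deaths enter as
# censoring here, whereas the paper reports a weighted cumulative incidence
# that accounts for them explicitly.
df = pd.DataFrame({"time": time, "uae": uae, "trt": trt, "w": w})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="uae", weights_col="w", robust=True)
cph.print_summary()
```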
5. G-computation of average treatment effects on the treated and the untreated.
- Author
Wang, Aolin, Nianogo, Roch A., and Arah, Onyebuchi A.
- Subjects
HIGHER education, HIGH schools, MONTE Carlo method, DATA analysis, HETEROGENEITY, ANGINA pectoris treatment, ANGINA pectoris, COMPUTER simulation, RESEARCH funding, SYSTEM analysis, EDUCATIONAL attainment, TREATMENT effectiveness, STATISTICAL models, DIAGNOSIS
- Abstract
Background: Average treatment effects on the treated (ATT) and the untreated (ATU) are useful when there is interest in: the evaluation of the effects of treatments or interventions on those who received them, the presence of treatment heterogeneity, or the projection of potential outcomes in a target (sub-)population. In this paper we illustrate the steps for estimating ATT and ATU using g-computation implemented via Monte Carlo simulation. Methods: To obtain marginal effect estimates for ATT and ATU we used a three-step approach: fitting a model for the outcome, generating potential outcome variables for ATT and ATU separately, and regressing each potential outcome variable on treatment intervention. Results: The estimates for ATT, ATU and the average treatment effect (ATE) were of similar magnitude, with the ATE lying between the ATT and ATU, as expected. In our illustrative example, the effect (risk difference [RD]) of higher education on angina among the participants who indeed have at least a high school education (ATT) was -0.019 (95% CI: -0.040, -0.007) and that among those who have less than a high school education in India (ATU) was -0.012 (95% CI: -0.036, 0.010). Conclusions: The g-computation algorithm is a powerful way of estimating standardized estimates like the ATT and ATU. Its use should be encouraged in modern epidemiologic teaching and practice. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
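The three-step g-computation recipe in item 5 maps almost directly onto code: fit an outcome model, generate potential outcomes under each treatment level, and average the contrasts within the treated and untreated. A minimal Python sketch on simulated data, loosely echoing the education/angina example (all numbers are made up):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 50_000  # Monte Carlo sample size

# Hypothetical data: confounder (wealth), treatment (>= high school), outcome (angina).
wealth = rng.normal(0, 1, n)
educ = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 0.8 * wealth))))
angina = rng.binomial(1, 1 / (1 + np.exp(-(-1.5 - 0.3 * educ - 0.4 * wealth))))
df = pd.DataFrame({"wealth": wealth, "educ": educ, "angina": angina})

# Step 1: fit a model for the outcome.
m = smf.logit("angina ~ educ + wealth", data=df).fit(disp=0)

# Step 2: generate potential outcome variables under each treatment level.
y1 = m.predict(df.assign(educ=1))
y0 = m.predict(df.assign(educ=0))

# Step 3: average the contrasts in the relevant (sub)populations.
att = (y1 - y0)[df.educ == 1].mean()  # effect among the treated
atu = (y1 - y0)[df.educ == 0].mean()  # effect among the untreated
ate = (y1 - y0).mean()                # ATE lies between ATT and ATU
print(f"ATT = {att:.4f}, ATU = {atu:.4f}, ATE = {ate:.4f} (risk differences)")
```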
6. Searching for the optimal measuring frequency in longitudinal studies -- an example utilizing short message service (SMS) to collect repeated measures among patients with low back pain.
- Author
Axén, Iben and Bodin, Lennart
- Subjects
LONGITUDINAL method, TEXT messages, PATIENT management, LUMBAR pain, PAIN management, EXPERIMENTAL design, PATIENT compliance, SELF-evaluation, RELATIVE medical risk, DIAGNOSIS
- Abstract
Background: Mobile technology has opened opportunities within health care and research to allow for frequent monitoring of patients. This has given rise to detailed longitudinal information and new insights concerning behaviour and the development of conditions over time. Responding to frequent questionnaires delivered through mobile technology has also shown good compliance, far exceeding that of traditional paper questionnaires. However, to optimize compliance, the burden on the subjects should be kept at a minimum. In this study, the effect of using fewer data points compared to the full data set was examined, assuming that fewer measurements would lead to better compliance. Method: Weekly text-message responses for 6 months from subjects recovering from an episode of low back pain (LBP) were available for this secondary analysis. Most subjects showed a trajectory with an initial improvement and a steady state thereafter. The data were originally used to subgroup (cluster) patients according to their pain trajectory. The resulting 4-cluster solution was compared with clusters obtained from five datasets with fewer data points using Kappa agreement as well as inspection of estimated pain trajectories. Further, the relative risk of experiencing a day with bothersome pain was compared week by week to show the effects of discarding some weekly data. Results: One hundred twenty-nine subjects were included in this analysis. Using data from every other weekly measure had the highest agreement with the clusters from the full dataset, weighted Kappa = 0.823. However, the visual description of pain trajectories favoured using the first 18 weekly measurements to fully capture the phases of improvement and steady state. The weekly relative risks were influenced by the pain trajectories, and 18 weeks or every other weekly measure were the optimal designs, next to the full data set. Conclusions: A population recovering from an episode of LBP could be described using every other weekly measurement, an option which requires fewer weekly measures than measuring weekly for 18 weeks. However, a higher measuring frequency might be needed at the beginning of a clinical course to fully map the pain trajectories. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
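Item 6's comparison of cluster solutions from thinned measurement schedules can be mimicked as below: cluster the full weekly series and the every-other-week series, align the arbitrary cluster labels, and compute a weighted kappa. This sketch uses k-means on synthetic trajectories; the paper's actual clustering procedure and data are not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans
from sklearn.metrics import cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(0)

# Hypothetical weekly pain scores (0-10) for 129 subjects over 26 weeks:
# initial improvement, then a cluster-specific steady state.
n, weeks = 129, 26
plateau = rng.choice([1.0, 3.0, 5.0, 7.0], n)  # four latent trajectory groups
t = np.arange(weeks)
traj = plateau[:, None] + (8 - plateau[:, None]) * np.exp(-t / 4.0)
data = np.clip(traj + rng.normal(0, 0.8, (n, weeks)), 0, 10)

def cluster(x, k=4):
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(x)

full = cluster(data)
thinned = cluster(data[:, ::2])  # every other weekly measurement

# Cluster labels are arbitrary, so relabel the thinned solution to best
# match the full solution before computing the weighted kappa.
cm = confusion_matrix(full, thinned)
_, col = linear_sum_assignment(-cm)   # optimal label matching
remap = np.argsort(col)               # thinned label -> full label
print("weighted kappa:",
      round(cohen_kappa_score(full, remap[thinned], weights="quadratic"), 3))
```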
7. An evaluation of computerized adaptive testing for general psychological distress: combining GHQ-12 and Affectometer-2 in an item bank for public mental health research.
- Author
Stochl, Jan, Böhnke, Jan R., Pickett, Kate E., and Croudace, Tim J.
- Subjects
COMPUTER adaptive testing, PSYCHOLOGICAL distress, MENTAL health, PUBLIC health, COMPUTER simulation, PSYCHOLOGICAL stress, COMPARATIVE studies, RESEARCH methodology, MEDICAL cooperation, MATHEMATICAL models of psychology, PSYCHOMETRICS, QUESTIONNAIRES, RESEARCH, RESEARCH funding, EVALUATION research, DIAGNOSIS
- Abstract
Background: Recent developments in psychometric modeling and technology allow pooling well-validated items from existing instruments into larger item banks and their deployment through methods of computerized adaptive testing (CAT). Use of item response theory-based bifactor methods and integrative data analysis overcomes barriers in cross-instrument comparison. This paper presents the joint calibration of an item bank for researchers keen to investigate population variations in general psychological distress (GPD). Methods: Multidimensional item response theory was used on existing health survey data from the Scottish Health Education Population Survey (n = 766) to calibrate an item bank consisting of pooled items from the short common mental disorder screen (GHQ-12) and the Affectometer-2 (a measure of "general happiness"). Computer simulation was used to evaluate the usefulness and efficacy of its adaptive administration. Results: A bifactor model capturing variation across a continuum of population distress (while controlling for artefacts due to item wording) was supported. The numbers of items for different required reliabilities in adaptive administration demonstrated promising efficacy of the proposed item bank. Conclusions: Psychometric modeling of the common dimension captured by more than one instrument offers the potential of adaptive testing for GPD using individually sequenced combinations of existing survey items. The potential for linking other item sets with alternative candidate measures of positive mental health is discussed, since an optimal item bank may require even more items than these. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
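To make the CAT idea in item 7 concrete, the sketch below simulates adaptive administration from a hypothetical 2PL item bank: each step selects the unasked item with maximum Fisher information at the current trait estimate and stops once the posterior standard error reaches a reliability-based threshold. The item parameters and stopping rule are assumptions for illustration, not the calibrated GHQ-12/Affectometer-2 bank.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 2PL item bank standing in for the pooled GHQ-12/Affectometer-2 items.
n_items = 52
a = rng.uniform(0.8, 2.5, n_items)  # discriminations
b = rng.normal(0, 1, n_items)       # difficulties
grid = np.linspace(-4, 4, 161)      # latent-trait grid for posterior updates

def p_yes(theta, i):
    return 1 / (1 + np.exp(-a[i] * (theta - b[i])))

def simulate_cat(true_theta, se_stop=0.32):  # SE 0.32 ~ reliability 0.90
    posterior = np.exp(-0.5 * grid ** 2)     # standard-normal prior
    asked, theta_hat, se = [], 0.0, np.inf
    while se > se_stop and len(asked) < n_items:
        items = np.arange(n_items)
        info = a ** 2 * p_yes(theta_hat, items) * (1 - p_yes(theta_hat, items))
        info[asked] = -np.inf                # never repeat an item
        i = int(np.argmax(info))             # maximum-information selection
        asked.append(i)
        resp = rng.random() < p_yes(true_theta, i)
        posterior = posterior * (p_yes(grid, i) if resp else 1 - p_yes(grid, i))
        theta_hat = np.sum(grid * posterior) / posterior.sum()
        se = np.sqrt(np.sum((grid - theta_hat) ** 2 * posterior) / posterior.sum())
    return len(asked)

lengths = [simulate_cat(t) for t in rng.normal(0, 1, 200)]
print("mean items needed for reliability ~0.90:", round(np.mean(lengths), 1))
```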
8. Reliability and criterion validity of self-measured waist, hip, and neck circumferences.
- Author
Barrios, Pamela, Martin-Biggers, Jennifer, Quick, Virginia, and Byrd-Bredbenner, Carol
- Subjects
BIOMARKERS, CRANIOMETRY, FEASIBILITY studies, INSTRUCTIONAL films, BODY mass index, PHYSIOLOGY, OBESITY, ADIPOSE tissues, HUMAN body composition, DIAGNOSTIC errors, RESEARCH evaluation, WAIST circumference, SELF diagnosis, DIAGNOSIS
- Abstract
Background: Waist, hip, and neck circumference measurements are cost-effective, non-invasive, useful markers for body fat distribution and disease risk. For epidemiology and intervention studies, including body circumference measurements in self-report surveys could be informative. However, few studies have assessed the test-retest reliability and criterion validity of a self-report tool feasible for use in large-scale studies. Methods: At home, mothers of young children viewed a brief, online instructional video on how to measure their waist, hip, and neck circumferences. Afterwards, they created a homemade paper measuring tape from a downloaded file with scissors and tape, took all measurements in duplicate, and entered them into an online survey. A few weeks later, participants visited an anthropometrics lab where they measured themselves again, and trained technicians (n = 9) measured participants in duplicate using standard equipment and procedures. To assess differences between self- and technician-measured circumferences, duplicate measurements for participant home self-measurements, participant lab self-measurements, and technician measurements were each averaged and Wilcoxon signed-rank tests conducted. Agreement between all possible pairs of measurements was examined using intraclass correlations (ICCs) and Bland-Altman plots. Results: Participants (n = 41; aged 38.05 ± 3.54 SD years; 71% white) were all mothers who had at least one child under the age of 12 years. Technical error of measurement for self- and technician-duplicate measurements varied little (0.08 to 0.76 inches) and had very high reliability (≥0.90). ICCs comparing self vs technician measurements were high (0.97, 0.96, and 0.84 for waist, hip, and neck). Comparison of self-measurements at home vs in the lab revealed high test-retest reliability (ICC ≥ 0.87). Differences between participant self- and technician measurements were small (i.e., mean difference ranged from -0.13 to 0.06 inches), with nearly all (≥93%) differences within Bland-Altman limits of agreement and <10% exceeding the a priori clinically meaningful difference criterion. Conclusions: This study has demonstrated a simple, inexpensive method for teaching novice mothers of young children to take their own body circumferences, resulting in accurate, reliable data. Thus, collecting self-measured and self-reported circumference data in future studies may be a feasible approach in research protocols that has the potential to expand our knowledge of body composition beyond that provided by self-reported body mass indexes. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
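The three reliability quantities reported in item 8 (technical error of measurement, ICC, and Bland-Altman limits of agreement) are simple to compute from duplicate readings. A minimal sketch on simulated self- vs technician-measured waists (the formulas are standard; the data are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical waist circumferences (inches): self- vs technician-measured.
n = 41
true = rng.normal(34, 4, n)
self_m = true + rng.normal(0, 0.5, n)
tech_m = true + rng.normal(0, 0.3, n)

def tem(m1, m2):
    """Technical error of measurement for paired readings."""
    d = m1 - m2
    return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

def icc_oneway(m1, m2):
    """ICC(1,1) from a one-way ANOVA on subjects with two measurements each."""
    x = np.column_stack([m1, m2])
    ms_between = 2 * np.var(x.mean(axis=1), ddof=1)
    ms_within = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / len(x)
    return (ms_between - ms_within) / (ms_between + ms_within)

diff = self_m - tech_m
loa = diff.mean() + np.array([-1.96, 1.96]) * diff.std(ddof=1)  # Bland-Altman
print(f"TEM = {tem(self_m, tech_m):.2f} in, ICC = {icc_oneway(self_m, tech_m):.3f}, "
      f"limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
```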
9. The quality and diagnostic value of open narratives in verbal autopsy: a mixed-methods analysis of partnered interviews from Malawi.
- Author
King, C., Zamawe, C., Banda, M., Bar-Zeev, N., Beard, J., Bird, J., Costello, A., Kazembe, P., Osrin, D., Fottrell, E., and VacSurv Consortium
- Subjects
AUTOPSY, CAREGIVERS, MOBILE apps, MIXED methods research, COMMUNICATION, CAUSES of death, DIAGNOSIS, INTERVIEWING, RESEARCH evaluation, RESEARCH funding, NARRATIVES, BURDEN of care
- Abstract
Background: Verbal autopsy (VA), the process of interviewing a deceased's family or caregiver about signs and symptoms leading up to death, employs tools that ask a series of closed questions and can include an open narrative where respondents give an unprompted account of events preceding death. The extent to which an individual interviewer, who generally does not interpret the data, affects the quality of this data, and therefore the assigned cause of death, is poorly documented. We aimed to examine inter-interviewer reliability of open narrative and closed question data gathered during VA interviews. Methods: During the introduction of VA data collection, as part of a larger study in Mchinji district, Malawi, we conducted partner interviews whereby two interviewers independently recorded open narrative and closed questions during the same interview. Closed questions were collected using a smartphone application (mobile-InterVA) and open narratives using pen and paper. We used mixed methods of analysis to evaluate the differences between recorded responses to open narratives and closed questions, causes of death assigned, and additional information gathered by open narrative. Results: Eighteen partner interviews were conducted, with complete data for 11 pairs. Comparing closed questions between interviewers, the median number of differences was 1 (IQR: 0.5-3.5) of an average 65 answered; mean inter-interviewer concordance was 92% (IQR: 92-99%). Discrepancies in open narratives were summarized in five categories: demographics, history and care-seeking, diagnoses and symptoms, treatment, and cultural. Most discrepancies were seen in the reporting of diagnoses and symptoms (e.g., malaria diagnosis); only one pair demonstrated no clear differences. The average number of clinical symptoms reported was 9 in open narratives and 20 in the closed questions. Open narratives contained additional information on health seeking and social issues surrounding deaths, which closed questions did not gather. Conclusions: The information gleaned during open narratives was subject to inter-interviewer variability and contained a limited number of symptom indicators, suggesting that their use for assigning cause of death is questionable. However, they contained rich information on care-seeking, healthcare provision and social factors in the lead-up to death, which may be a valuable source of information for promoting accountable health services. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
10. Optimal cut-point definition in biomarkers: the case of censored failure time outcome.
- Author
Rota, Matteo, Antolini, Laura, and Valsecchi, Maria Grazia
- Subjects
BIOMARKERS, ANTIRETROVIRAL agents, TIME series analysis, DIAGNOSIS, CLINICAL trials
- Abstract
Background: Cut-point finding is a crucial step for clinical decision making when dealing with diagnostic (or prognostic) biomarkers. The extension of ROC-based cut-point finding methods to the case of a censored failure time outcome is of interest when we are in the presence of a biomarker, measured at baseline, used to identify whether there will be the development, or not, of some disease condition within a given time point τ of clinical interest. Methods: Three widely used cut-point finding methods, namely the Youden index, the concordance probability and the point closest-to-(0,1) corner in the ROC plane, are extended to the case of censored failure time outcome, resorting to non-parametric estimators of sensitivity and specificity that account for censoring. The performance of these methods in finding the optimal cut-point is compared under Normal and Gamma distributions of the biomarker (in subjects developing or not developing the disease condition). Normality ensures that the estimators theoretically point to the same cut-point. Two motivating examples are provided in the paper. Results: The point closest-to-(0,1) corner approach has the best performance in simulations in terms of mean square error and relative bias. Conclusions: We discuss the use of the Youden index or concordance probability associated with the cut-point identified through the closest-to-(0,1) corner approach to ease interpretability of the classification performance of the dichotomized biomarker. In addition, the achieved performance of the dichotomized biomarker classification associated with the estimated cut-point can be represented through a confidence interval of the point on the ROC curve. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
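The three cut-point criteria in item 10 are easy to compare on an ordinary ROC curve. The sketch below ignores censoring and treats disease development within τ as a known binary status, which is exactly the simplification the paper's censoring-adjusted estimators avoid:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(9)

# Hypothetical baseline biomarker, Normal in both groups, shifted upwards in
# subjects who develop the disease condition within the horizon tau.
n = 500
develops = rng.binomial(1, 0.3, n)
marker = rng.normal(1.0 * develops, 1.0, n)

fpr, tpr, thr = roc_curve(develops, marker)

youden = thr[np.argmax(tpr - fpr)]                   # maximize Se + Sp - 1
concordance = thr[np.argmax(tpr * (1 - fpr))]        # maximize Se * Sp
closest = thr[np.argmin((1 - tpr) ** 2 + fpr ** 2)]  # min distance to (0,1)
print(f"Youden = {youden:.3f}, concordance = {concordance:.3f}, "
      f"closest-to-(0,1) = {closest:.3f}")
```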
11. Current methods for development of rapid reviews about diagnostic tests: an international survey
- Author
Arevalo-Rodriguez, Ingrid, Steingart, Karen R., Tricco, Andrea C., Nussbaumer-Streit, Barbara, Kaunelis, David, Alonso-Coello, Pablo, Baxter, Susan, Bossuyt, Patrick M., Emparanza, José Ignacio, and Zamora, Javier
- Published
- 2020
- Full Text
- View/download PDF
12. Meta-DiSc 2.0: a web application for meta-analysis of diagnostic test accuracy data.
- Author
Plana, Maria N., Arevalo-Rodriguez, Ingrid, Fernández-García, Silvia, Soto, Javier, Fabregate, Martin, Pérez, Teresa, Roqué, Marta, and Zamora, Javier
- Subjects
WEB-based user interfaces, WEB 2.0, RANDOM effects model, DIAGNOSIS methods, SENSITIVITY & specificity (Statistics)
- Abstract
Background: Diagnostic evidence of the accuracy of a test for identifying a target condition of interest can be estimated using systematic approaches following standardized methodologies. Statistical methods for the meta-analysis of diagnostic test accuracy (DTA) studies are relatively complex, presenting a challenge for reviewers without extensive statistical expertise. In 2006, we developed Meta-DiSc, a free user-friendly software to perform test accuracy meta-analysis. This statistical program is now widely used for performing DTA meta-analyses. We aimed to build a new version of the Meta-DiSc software to include statistical methods based on hierarchical models and an enhanced web-based interface to improve user experience. Results: In this article, we present the updated version, Meta-DiSc 2.0, a web-based application developed using the R Shiny package. This new version implements recommended state-of-the-art statistical models to overcome the limitations of the statistical approaches included in the previous version. Meta-DiSc 2.0 performs statistical analyses of DTA reviews using a bivariate random effects model. The application offers a thorough analysis of heterogeneity, calculating logit variance estimates of sensitivity and specificity, the bivariate I-squared, the area of the 95% prediction ellipse, and the median odds ratios for sensitivity and specificity, and facilitating subgroup and meta-regression analyses. Furthermore, univariate random effects models can be applied to meta-analyses with few studies or with non-convergent bivariate models. The application interface has an intuitive design set out in four main menus: file upload; graphical description (forest and ROC plane plots); meta-analysis (pooling of sensitivity and specificity, estimation of likelihood ratios and diagnostic odds ratio, sROC curve); and summary of findings (impact of the test through downstream consequences in a hypothetical population with a given prevalence). All computational algorithms have been validated in several real datasets by comparing results obtained with the STATA/SAS and MetaDTA packages. Conclusion: We have developed and validated an updated version of the Meta-DiSc software that is more accessible and statistically sound. The web application is freely available at www.metadisc.es. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
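Meta-DiSc 2.0's primary model is a bivariate random effects model, which has no one-line Python equivalent; the sketch below instead shows the univariate random effects fallback the abstract mentions for few studies or non-convergent bivariate fits, pooling logit sensitivities with the DerSimonian-Laird estimator (study counts are invented):

```python
import numpy as np

# Hypothetical (TP, FN) counts from five diagnostic accuracy studies.
tp = np.array([45, 30, 80, 22, 60])
fn = np.array([5, 10, 12, 8, 15])

# Logit sensitivities with a 0.5 continuity correction and their variances.
sens = (tp + 0.5) / (tp + fn + 1.0)
y = np.log(sens / (1 - sens))
v = 1 / (tp + 0.5) + 1 / (fn + 0.5)

# DerSimonian-Laird estimate of the between-study variance tau^2.
w = 1 / v
q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
c = w.sum() - np.sum(w ** 2) / w.sum()
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random effects pooled logit sensitivity, back-transformed.
w_re = 1 / (v + tau2)
pooled = np.sum(w_re * y) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
expit = lambda x: 1 / (1 + np.exp(-x))
print(f"pooled sensitivity = {expit(pooled):.3f} "
      f"(95% CI {expit(pooled - 1.96 * se):.3f} to {expit(pooled + 1.96 * se):.3f}), "
      f"tau^2 = {tau2:.3f}")
```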
13. Statistical methods for evaluating the fine needle aspiration cytology procedure in breast cancer diagnosis.
- Author
El Chamieh, Carolla, Vielh, Philippe, and Chevret, Sylvie
- Subjects
NEEDLE biopsy, CANCER diagnosis, CYTOLOGY, BREAST cancer, LYMPHADENITIS, TEST methods
- Abstract
Background: Statistical issues arising when evaluating a diagnostic procedure for breast cancer are not rare, but they are often ignored, leading to biased results. We aimed to evaluate the diagnostic accuracy of fine needle aspiration cytology (FNAC), a minimally invasive and rapid technique potentially used as a rule-in or rule-out test, handling its statistical issues: suspect test results and verification bias. Methods: We applied different statistical methods to handle suspect results by defining conditional estimates. When considering partial verification bias, the Begg and Greenes method and multivariate imputation by chained equations were applied; when considering differential verification bias, a Bayesian approach with respect to each gold standard was used. Finally, we extended the Begg and Greenes method to be applied conditionally on the suspect results. Results: The specificity of the FNAC test, above 94%, was always higher than its sensitivity, regardless of the proposed method. All positive likelihood ratios were higher than 10, with variations among methods. The positive and negative yields were high, indicating precise discriminating properties of the test. Conclusion: The FNAC test is more likely to be used as a rule-in test for diagnosing breast cancer. Our results contribute to advancing our knowledge regarding the performance of the FNAC test and the methods to be applied for its evaluation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
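The Begg and Greenes correction used in item 13 follows from Bayes' theorem once verification is assumed to depend only on the test result. A minimal sketch with invented counts (the paper's conditional extension for suspect results is not shown):

```python
def begg_greenes(n_pos, n_neg, v_pos, d_pos, v_neg, d_neg):
    """Verification-bias-corrected Se/Sp, assuming verification depends only
    on the index test result (MAR). n_*: all tested; v_*: verified;
    d_*: diseased among the verified, split by test result."""
    p_t = n_pos / (n_pos + n_neg)  # P(T+)
    p_d_tp = d_pos / v_pos         # P(D+ | T+) among the verified
    p_d_tn = d_neg / v_neg         # P(D+ | T-) among the verified
    sens = p_d_tp * p_t / (p_d_tp * p_t + p_d_tn * (1 - p_t))
    spec = (1 - p_d_tn) * (1 - p_t) / ((1 - p_d_tn) * (1 - p_t)
                                       + (1 - p_d_tp) * p_t)
    return sens, spec

# Hypothetical FNAC-like data: test positives verified far more often
# than test negatives, which is what biases the naive estimates.
sens, spec = begg_greenes(n_pos=300, n_neg=700, v_pos=280, d_pos=250,
                          v_neg=200, d_neg=10)
print(f"corrected Se = {sens:.3f}, corrected Sp = {spec:.3f}")
```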
14. Completeness of reporting of clinical prediction models developed using supervised machine learning: a systematic review
- Author
Andaur Navarro, Constanza L., Damen, Johanna A. A., Takada, Toshihiko, Nijman, Steven W. J., Dhiman, Paula, Ma, Jie, Collins, Gary S., Bajpai, Ram, Riley, Richard D., Moons, Karel G. M., and Hooft, Lotty
- Published
- 2022
- Full Text
- View/download PDF
15. Methodology of the DCCSS later fatigue study: a model to investigate chronic fatigue in long-term survivors of childhood cancer.
- Author
Penson, Adriaan, van Deuren, Sylvia, Bronkhorst, Ewald, Keizer, Ellen, Heskes, Tom, Coenen, Marieke J. H., Rosmalen, Judith G. M., Tissing, Wim J. E., van der Pal, Helena J. H., de Vries, Andrica C. H., van den Heuvel-Eibrink, Marry M., Neggers, Sebastian, Versluys, Birgitta A. B., Louwerens, Marloes, van der Heiden-van der Loo, Margriet, Pluijm, Saskia M. F., Grootenhuis, Martha, Blijlevens, Nicole, Kremer, Leontien C. M., and van Dulmen-den Broeder, Eline
- Subjects
CHILDHOOD cancer, CANCER fatigue, CANCER survivors, SYMPTOMS, PSYCHOSOCIAL factors, DIAGNOSIS
- Abstract
Background: A debilitating late effect for childhood cancer survivors (CCS) is cancer-related fatigue (CRF). Little is known about the prevalence and risk factors of fatigue in this population. Here we describe the methodology of the Dutch Childhood Cancer Survivor Late Effect Study on fatigue (DCCSS LATER fatigue study). The aim of the DCCSS LATER fatigue study is to examine the prevalence of and factors associated with CRF, proposing a model which discerns predisposing, triggering, maintaining and moderating factors. Triggering factors are related to the cancer diagnosis and treatment during childhood and are thought to trigger fatigue symptoms. Maintaining factors are daily life and psychosocial factors which may perpetuate fatigue once triggered. Moderating factors might influence the way fatigue symptoms express themselves in individuals. Predisposing factors, such as genetic factors, already existed before the diagnosis and are thought to increase the vulnerability to developing fatigue. The methodology of participant inclusion, data collection and the planned analyses of the DCCSS LATER fatigue study are presented. Results: Data from 1955 CCS and 455 siblings were collected. Analysis of the data is planned, and we aim to start reporting the first results in 2022. Conclusion: The DCCSS LATER fatigue study will provide information on the epidemiology of CRF and investigate the role of a broad range of associated factors in CCS. Insight into factors associated with severe and persistent fatigue may help identify individuals at risk of developing CRF and may aid in the development of interventions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
16. Reporting of the translation and cultural adaptation procedures of the Addenbrooke’s Cognitive Examination version III (ACE-III) and its predecessors: a systematic review
- Author
Nadine Mirza, Maria Panagioti, Muhammad Wali Waheed, and Waquas Waheed
- Subjects
Cognitive assessment, cognitive impairment, cognitive screening test, dementia, diagnosis, primary care, Medicine (General), R5-920
- Abstract
Background: The ACE-III, a gold standard for screening cognitive impairment, is restricted by language and culture, with no uniform set of guidelines for its adaptation. To develop such guidelines, a compilation of all the adaptation procedures undertaken by adapters of the ACE-III and its predecessors is needed. Methods: We searched EMBASE, Medline and PsychINFO and screened publications from a previous review. We included publications on adapted versions of the ACE-III and its predecessors, extracting translation and cultural adaptation procedures and assessing their quality. Results: We deemed 32 papers suitable for analysis. Seven translation steps were identified, and we determined which items of the ACE-III are culturally dependent. Conclusions: This review lists all adaptations of the ACE, ACE-R and ACE-III, rates the reporting of their adaptation procedures and summarises the adaptation procedures into steps that can be undertaken by adapters.
- Published
- 2017
- Full Text
- View/download PDF
17. Machine learning in medicine: a practical introduction.
- Author
Sidey-Gibbons, Jenni A. M. and Sidey-Gibbons, Chris J.
- Abstract
Background: Following visible successes on a wide range of predictive tasks, machine learning techniques are attracting substantial interest from medical researchers and clinicians. We address the need for capacity development in this area by providing a conceptual introduction to machine learning alongside a practical guide to developing and evaluating predictive algorithms using freely available open-source software and public domain data. Methods: We demonstrate the use of machine learning techniques by developing three predictive models for cancer diagnosis using descriptions of nuclei sampled from breast masses. These algorithms include regularized General Linear Model regression (GLMs), Support Vector Machines (SVMs) with a radial basis function kernel, and single-layer Artificial Neural Networks. The publicly available dataset describing the breast mass samples (N = 683) was randomly split into evaluation (n = 456) and validation (n = 227) samples. We trained algorithms on data from the evaluation sample before they were used to predict the diagnostic outcome in the validation dataset. We compared the predictions made on the validation datasets with the real-world diagnostic decisions to calculate the accuracy, sensitivity, and specificity of the three models. We explored the use of averaging and voting ensembles to improve predictive performance. We provide a step-by-step guide to developing algorithms using the open-source R statistical programming environment. Results: The trained algorithms were able to classify cell nuclei with high accuracy (.94-.96), sensitivity (.97-.99), and specificity (.85-.94). Maximum accuracy (.96) and area under the curve (.97) were achieved using the SVM algorithm. Prediction performance increased marginally (accuracy = .97, sensitivity = .99, specificity = .95) when algorithms were arranged into a voting ensemble. Conclusions: We use a straightforward example to demonstrate the theory and practice of machine learning for clinicians and medical researchers. The principles we demonstrate here can be readily applied to other complex tasks, including natural language processing and image recognition. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
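The paper's walkthrough is in R; a rough Python translation of the same workflow (three classifiers plus a soft-voting ensemble) is sketched below using scikit-learn's built-in WDBC breast cancer data, which has 569 samples rather than the 683 analysed in the paper:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=1)

models = {
    "glm": make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)),
    "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)),
    "ann": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                       random_state=1)),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, m.predict(X_te)), 3))

# Soft voting averages the three models' predicted probabilities.
ensemble = VotingClassifier(list(models.items()), voting="soft").fit(X_tr, y_tr)
print("ensemble accuracy:", round(accuracy_score(y_te, ensemble.predict(X_te)), 3))
```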
18. A mega-ethnography of eleven qualitative evidence syntheses exploring the experience of living with chronic non-malignant pain.
- Author
Toye, Fran, Seers, Kate, Hannink, Erin, and Barker, Karen
- Subjects
ANALGESIA, MEDICAL care, QUALITATIVE research, MEDICAL research, DATABASES, CHRONIC pain & psychology, CHRONIC pain treatment, CHRONIC pain, ETHNOLOGY, MEDICAL care research, EVIDENCE-based medicine, BIBLIOGRAPHIC databases, DIAGNOSIS
- Abstract
Background: Each year over five million people develop chronic non-malignant pain and can experience healthcare as an adversarial struggle. The aims of this study were: (1) to bring together qualitative evidence syntheses that explore patients' experience of living with chronic non-malignant pain and develop conceptual understanding of what it is like to live with chronic non-malignant pain for improved healthcare; (2) to undertake the first mega-ethnography of qualitative evidence syntheses using the methods of meta-ethnography. Methods: We used the seven stages of meta-ethnography refined for large studies. The innovation of mega-ethnography is to use conceptual findings from qualitative evidence syntheses as primary data. We searched 7 bibliographic databases from inception until February 2016 to identify qualitative evidence syntheses that explored patients' experience of living with chronic non-malignant pain. Results: We identified 82 potential studies from 556 titles, screened 34 full-text articles and included 11 qualitative evidence syntheses synthesising a total of 187 qualitative studies reporting more than 5000 international participants living with chronic pain. We abstracted concepts into 7 conceptual categories: (1) my life is impoverished and confined; (2) struggling against my body to be me; (3) the quest for the diagnostic 'holy grail'; (4) lost personal credibility; (5) trying to keep up appearances; (6) need to be treated with dignity; and (7) deciding to end the quest for the grail is not easy. Each conceptual category was supported by at least 7 of the 11 qualitative evidence syntheses. Conclusions: This is the first mega-ethnography, or synthesis of qualitative evidence syntheses using the methods of meta-ethnography. The findings help us to understand that the decision to end the quest for a diagnosis can leave patients feeling vulnerable, and this may contribute to the adversarial nature of the clinical encounter. This knowledge demonstrates that treating a patient with a sense that they are worthy of care, and hearing their story, is not an adjunct to but integral to health care. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
19. Time-dependent ROC curve analysis in medical research: current methods and applications.
- Author
Kamarudin, Adina Najwa, Cox, Trevor, and Kolamunnage-Dona, Ruwanthi
- Subjects
RECEIVER operating characteristic curves, PROGRESSION-free survival, METHODOLOGY, MEDICAL ethics, MEDICAL research, CIRRHOSIS of the liver, COMPUTER simulation, LIVER, PHARMACOKINETICS, STATISTICAL models, DIAGNOSIS
- Abstract
Background: ROC (receiver operating characteristic) curve analysis is well established for assessing how well a marker is capable of discriminating between individuals who experience disease onset and individuals who do not. The classical (standard) approach of ROC curve analysis considers event (disease) status and marker value for an individual as fixed over time; however, in practice, both the disease status and the marker value change over time. Individuals who are disease-free earlier may develop the disease later due to longer study follow-up, and their marker value may also change from baseline during follow-up. Thus, an ROC curve as a function of time is more appropriate. However, many researchers still use the standard ROC curve approach to determine the marker capability, ignoring the time dependency of the disease status or the marker. Methods: We comprehensively review currently proposed methodologies of time-dependent ROC curves which use single or longitudinal marker measurements, aiming to provide clarity in each methodology, identify software tools to carry out such analysis in practice and illustrate several applications of the methodology. We have also extended some methods to incorporate a longitudinal marker and illustrated the methodologies using a sequential dataset from the Mayo Clinic trial in primary biliary cirrhosis (PBC) of the liver. Results: From our methodological review, we have identified 18 estimation methods of time-dependent ROC curve analyses for censored event times, and three other methods that can only deal with non-censored event times. Despite the considerable number of estimation methods, applications of the methodology in clinical studies are still lacking. Conclusions: The value of time-dependent ROC curve methods has been re-established. We have illustrated the methods in practice using currently available software and made some recommendations for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
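A cumulative/dynamic time-dependent AUC, the simplest of the estimands reviewed in item 19, can be sketched as below. This naive version drops subjects censored before t, which is the bias that censoring-adjusted (e.g., IPCW-type) estimators in the review correct; the data are simulated:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)

# Hypothetical survival data: higher marker implies shorter survival.
n = 400
marker = rng.normal(0, 1, n)
event_time = rng.exponential(np.exp(-0.8 * marker))
cens_time = rng.exponential(2.0, n)
time = np.minimum(event_time, cens_time)
event = (event_time <= cens_time).astype(int)

def cd_auc(t):
    """Cumulative/dynamic AUC(t): cases had an observed event by t, controls
    are still event-free at t. Subjects censored before t are dropped, which
    is the naive step that IPCW-type estimators correct."""
    case = (time <= t) & (event == 1)
    control = time > t
    keep = case | control
    return roc_auc_score(case[keep], marker[keep])

for t in [0.5, 1.0, 2.0]:
    print(f"AUC({t}) = {cd_auc(t):.3f}")
```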
20. What impact do assumptions about missing data have on conclusions? A practical sensitivity analysis for a cancer survival registry.
- Author
Smuk, M., Carpenter, J. R., and Morris, T. P.
- Subjects
MISSING data (Statistics), CANCER statistics, SENSITIVITY analysis, CANCER patients, P-value (Statistics), RECTUM tumors, TUMOR diagnosis, TUMOR treatment, COLON tumors, ALGORITHMS, MATHEMATICAL models, MEDICAL research, HEALTH outcome assessment, PROGNOSIS, QUESTIONNAIRES, REPORT writing, RESEARCH funding, STATISTICS, SURVIVAL analysis (Biometry), TUMOR classification, THEORY, DATA analysis, ACQUISITION of data, DIAGNOSIS
- Abstract
Background: Within epidemiological and clinical research, missing data are a common issue and often overlooked in publications. When the issue of missing observations is addressed, it is usually assumed that the missing data are 'missing at random' (MAR). This assumption should be checked for plausibility; however, it is untestable, so inferences should be assessed for robustness to departures from missing at random. Methods: We highlight the method of pattern mixture sensitivity analysis after multiple imputation, using colorectal cancer data as an example. We focus on the Dukes' stage variable, which has the highest proportion of missing observations. First, we find the probability of being in each Dukes' stage given the MAR imputed dataset. We use these probabilities in a questionnaire to elicit prior beliefs from experts on what they believe the probability would be in the missing data. The questionnaire responses are then used in a Dirichlet draw to create a Bayesian 'missing not at random' (MNAR) prior to impute the missing observations. The model of interest is applied and inferences are compared to those from the MAR imputed data. Results: The inferences were largely insensitive to departure from MAR. Inferences under MNAR suggested a smaller association between Dukes' stage and death, though the association remained positive and with similarly low p values. Conclusions: We conclude by discussing the positives and negatives of our method and highlight the importance of making people aware of the need to test the MAR assumption. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
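The Dirichlet-draw step described in item 20 is a one-liner once expert beliefs are encoded as pseudo-counts. A minimal sketch with invented probabilities (the substantive model fitting and Rubin's rules pooling are left as comments):

```python
import numpy as np

rng = np.random.default_rng(21)
stages = np.array(["A", "B", "C", "D"])

# Stage probabilities among the missing under the MAR imputation model, and
# elicited expert beliefs that missing patients are more often late-stage
# (all numbers invented).
p_mar = np.array([0.15, 0.30, 0.35, 0.20])
p_expert = np.array([0.10, 0.20, 0.35, 0.35])
prior_strength = 50  # pseudo-observations encoding expert confidence

n_missing, n_imputations = 120, 20
for m in range(n_imputations):
    # One Dirichlet draw of stage probabilities per imputed dataset.
    p = rng.dirichlet(prior_strength * p_expert)
    imputed = rng.choice(stages, size=n_missing, p=p)
    # ...fit the substantive model on each completed dataset, pool with
    # Rubin's rules, and compare the inferences with the MAR results.

print("MAR probabilities:", p_mar)
print("last MNAR draw:   ", p.round(3))
```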
21. Estimating a population cumulative incidence under calendar time trends.
- Author
Hansen, Stefan N., Overgaard, Morten, Andersen, Per K., and Parner, Erik T.
- Subjects
PATHOLOGICAL psychology, KAPLAN-Meier estimator, DISEASE risk factors, PROPORTIONAL hazards models, MATHEMATICAL models, PSYCHIATRIC diagnosis, DIAGNOSIS of obsessive-compulsive disorder, PSYCHIATRIC epidemiology, ALGORITHMS, ATTENTION-deficit hyperactivity disorder, COMPUTER simulation, OBSESSIVE-compulsive disorder, RISK assessment, TIME, THEORY, TOURETTE syndrome, DISEASE incidence, DISEASE prevalence, DIAGNOSIS
- Abstract
Background: The risk of a disease or psychiatric disorder is frequently measured by the age-specific cumulative incidence. Cumulative incidence estimates are often derived in cohort studies with individuals recruited over calendar time and with the end of follow-up governed by a specific date. It is common practice to apply the Kaplan-Meier or Aalen-Johansen estimator to the total sample and report either the estimated cumulative incidence curve or just a single point on the curve as a description of the disease risk. Methods: We argue that, whenever the disease or disorder of interest is influenced by calendar time trends, the total sample Kaplan-Meier and Aalen-Johansen estimators do not provide useful estimates of the general risk in the target population. We present some alternatives to this type of analysis. Results: We show how a proportional hazards model may be used to extrapolate disease risk estimates if proportionality is a reasonable assumption. If not reasonable, we instead advocate that a more useful description of the disease risk lies in the age-specific cumulative incidence curves across strata given by time of entry, or perhaps just the end-of-follow-up estimates across all strata. Finally, we argue that a weighted average of these end-of-follow-up estimates may be a useful summary measure of the disease risk within the study period. Conclusions: Time trends in a disease risk will render total sample estimators less useful in observational studies with staggered entry and administrative censoring. An analysis based on proportional hazards or a stratified analysis may be better alternatives. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
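Item 21's stratified alternative can be sketched as follows: fit Kaplan-Meier curves within entry-period strata under a simulated calendar time trend, take each stratum's end-of-follow-up estimate, and average them weighted by stratum size. Competing risks are omitted here (the paper's setting would use Aalen-Johansen), and all rates are invented:

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(13)

# Hypothetical cohort with staggered entry (2000-2009), administrative
# censoring at the end of 2010, and a calendar time trend in the rate.
n = 5000
entry_year = rng.integers(2000, 2010, n)
rate = 0.01 * (1 + 0.15 * (entry_year - 2000))
onset = rng.exponential(1 / rate)
max_follow = 2011 - entry_year
time = np.minimum(onset, max_follow)
observed = (onset <= max_follow).astype(int)

sizes, estimates = [], []
for start in (2000, 2005):  # entry-period strata instead of the total sample
    mask = (entry_year >= start) & (entry_year < start + 5)
    km = KaplanMeierFitter().fit(time[mask], observed[mask])
    ci_end = 1 - km.predict(time[mask].max())  # end-of-follow-up estimate
    sizes.append(mask.sum())
    estimates.append(ci_end)
    print(f"entry {start}-{start + 4}: cumulative incidence {ci_end:.3f}")

print("weighted average:", round(np.average(estimates, weights=sizes), 3))
```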
22. Testing for carryover effects after cessation of treatments: a design approach.
- Author
Sturdevant, S. Gwynn and Lumley, Thomas
- Subjects
CLINICAL trials -- Design & construction, DIAGNOSIS of diabetes, CROSSOVER trials, PREHYPERTENSION, SURVIVAL analysis (Biometry), NULL hypothesis, PREVENTION, DIAGNOSIS, HYPERTENSION, THERAPEUTICS, BLOOD pressure, EXPERIMENTAL design, TREATMENT effectiveness, PASSIVE euthanasia
- Abstract
Background: Trials have recently been published that attempt to measure carryover - the impact that treatment has on an outcome after cessation - in settings where diagnosis occurs when a noisy measurement exceeds a threshold (such as diabetes and hypertension). The design of these trials has been criticised, and simulations have been conducted which suggest that the parallel designs used are not adequate to test this hypothesis; two proposed solutions are that either a different parallel design or a cross-over design could allow for diagnosis of carryover. Methods: We undertook a systematic simulation study to determine the ability of a cross-over or a parallel-group trial design to detect carryover effects on incident hypertension in a population with prehypertension. We simulated blood pressure and focused on varying the criteria used to diagnose systolic hypertension. Results: Using the difference in cumulative incidence of hypertension to analyse parallel-group or cross-over trials resulted in none of the designs having an acceptable Type I error rate. Under the null hypothesis of no carryover, the rejection rate is well above the nominal 5% error rate. Conclusions: When a treatment is effective during the intervention period, reliable testing for a carryover effect is difficult. Neither parallel-group nor cross-over designs using the difference in cumulative incidence appear to be a feasible approach. Future trials should ensure their design and analysis are validated by simulation. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
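The kind of simulation-based validation item 22 recommends can be sketched in a few lines: generate threshold-diagnosed outcomes under a null of no carryover and check how often the difference in cumulative incidence rejects. The blood pressure model and parameters below are invented, but the qualitative result (inflated Type I error) mirrors the abstract's finding:

```python
import numpy as np

rng = np.random.default_rng(2)

def one_trial(n=500, weeks=52, stop=26, effect=-10.0):
    """Parallel-group trial: treatment lowers SBP while taken (weeks < stop)
    and has NO carryover; diagnosis = first weekly SBP above 140."""
    sbp = 132 + rng.normal(0, 4, (2 * n, 1)) + rng.normal(0, 3, (2 * n, weeks))
    arm = np.repeat([0, 1], n)
    sbp[arm == 1, :stop] += effect
    first = np.argmax(sbp > 140, axis=1).astype(float)
    first[~(sbp > 140).any(axis=1)] = np.inf
    # "Carryover test": difference in cumulative incidence at end of study.
    inc = [(first[arm == g] < weeks).mean() for g in (0, 1)]
    se = np.sqrt(sum(p * (1 - p) / n for p in inc))
    return abs(inc[0] - inc[1]) / se > 1.96

rejection_rate = np.mean([one_trial() for _ in range(500)])
print("Type I error under no carryover:", rejection_rate)
```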
23. Time to publication among completed diagnostic accuracy studies: associated with reported accuracy estimates.
- Author
Korevaar, Daniël A., van Es, Nick, Zwinderman, Aeilko H., Cohen, Jérémie F., and Bossuyt, Patrick M. M.
- Subjects
DIAGNOSIS, META-analysis, MEDICAL research, RANDOMIZED controlled trials, ACCURACY, DIAGNOSTIC errors, LITERATURE, TIME, PROPORTIONAL hazards models
- Abstract
Background: Previous evaluations have documented that studies evaluating the effectiveness of therapeutic interventions are not always reported, and that those with statistically significant results are published more rapidly than those without. This can lead to reporting bias in systematic reviews and other literature syntheses. We evaluated whether diagnostic accuracy studies that report promising results about the performance of medical tests are also published more rapidly. Methods: We obtained all primary diagnostic accuracy studies included in meta-analyses of Medline-indexed systematic reviews that were published between September 2011 and January 2012. For each primary study, we extracted estimates of diagnostic accuracy (sensitivity, specificity, Youden's index), the completion date of participant recruitment, and the publication date. We calculated the time from completion to publication and assessed associations with reported accuracy estimates. Results: Forty-nine systematic reviews were identified, containing 92 meta-analyses and 924 unique primary studies, of which 756 could be included. Study completion dates were missing for 285 (38%) of these. Median time from completion to publication in the remaining 471 studies was 24 months (IQR 16 to 35). Primary studies that reported higher estimates of sensitivity (Spearman's rho = -0.14; p = 0.003), specificity (rho = -0.17; p < 0.001), and Youden's index (rho = -0.22; p < 0.001) had significantly shorter times to publication. When comparing time to publication in studies reporting accuracy estimates above versus below the median, the median number of months was 23 versus 25 for sensitivity (p = 0.046), 22 versus 27 for specificity (p = 0.001), and 22 versus 27 for Youden's index (p < 0.001). These differential time lags remained significant in multivariable Cox regression analyses with adjustment for other study characteristics, with hazard ratios of publication of 1.06 (95% CI 1.02 to 1.11; p = 0.007) for logit-transformed estimates of sensitivity, 1.09 (95% CI 1.04 to 1.14; p < 0.001) for logit-transformed estimates of specificity, and 1.09 (95% CI 1.03 to 1.14; p = 0.001) for logit-transformed estimates of Youden's index. Conclusions: Time to publication was significantly shorter for studies reporting higher estimates of diagnostic accuracy compared to those reporting lower estimates. This suggests that searching and analyzing the published literature, rather than all completed studies, can produce a biased view of the performance of medical tests. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
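Item 23's two analysis steps, a Spearman correlation and a Cox model for the hazard of publication with logit-transformed accuracy, can be mimicked on simulated data as below (effect sizes are planted to resemble the reported direction, not estimates from the actual studies):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from scipy.stats import spearmanr

rng = np.random.default_rng(17)

# Hypothetical completed studies: higher Youden's index -> faster publication.
n = 471
youden = rng.uniform(0.2, 0.9, n)
logit_youden = np.log(youden / (1 - youden))
months = rng.exponential(28 / np.exp(0.09 * logit_youden))
published = (months < 60).astype(int)  # administrative censoring at 60 months
months = np.minimum(months, 60)

rho, p = spearmanr(youden, months)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")

# Hazard of publication versus logit-transformed accuracy (covariates omitted).
df = pd.DataFrame({"months": months, "published": published,
                   "logit_youden": logit_youden})
cph = CoxPHFitter().fit(df, duration_col="months", event_col="published")
print(cph.summary[["exp(coef)", "p"]])
```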
24. Mechanisms and mediation in survival analysis: towards an integrated analytical framework.
- Author
Pratschke, Jonathan, Haase, Trutz, Comber, Harry, Sharp, Linda, de Camargo Cancela, Marianna, and Johnson, Howard
- Subjects
COLON cancer patients, STRUCTURAL equation modeling, CANCER patient care, MEDIATION (Statistics), MEDIATION -- Social aspects, DISCRETE-time systems, AGE distribution, COMPARATIVE studies, CAUSES of death, COLON tumors, RESEARCH methodology, MEDICAL cooperation, NEGOTIATION, PSYCHOLOGICAL tests, RESEARCH, SEX distribution, SOCIAL participation, SURVIVAL analysis (Biometry), EVALUATION research, PROPORTIONAL hazards models, STATISTICAL models, INTEGRATED Advanced Information Management Systems (National Library of Medicine), DIAGNOSIS, TUMOR treatment
- Abstract
Background: A wide-ranging debate has taken place in recent years on mediation analysis and causal modelling, raising profound theoretical, philosophical and methodological questions. The authors build on the results of these discussions to work towards an integrated approach to the analysis of research questions that situate survival outcomes in relation to complex causal pathways with multiple mediators. The background to this contribution is the increasingly urgent need for policy-relevant research on the nature of inequalities in health and healthcare. Methods: The authors begin by summarising debates on causal inference, mediated effects and statistical models, showing that these three strands of research have powerful synergies. They review a range of approaches which seek to extend existing survival models to obtain valid estimates of mediation effects. They then argue for an alternative strategy, which involves integrating survival outcomes within Structural Equation Models via the discrete-time survival model. This approach can provide an integrated framework for studying mediation effects in relation to survival outcomes, an issue of great relevance in applied health research. The authors provide an example of how these techniques can be used to explore whether the social class position of patients has a significant indirect effect on the hazard of death from colon cancer. Results: The results suggest that the indirect effects of social class on survival are substantial and negative (-0.23 overall). In addition to the substantial direct effect of this variable (-0.60), its indirect effects account for more than one quarter of the total effect. The two main pathways for this indirect effect, via emergency admission (-0.12) and via hospital caseload (-0.10), are of similar size. Conclusions: The discrete-time survival model provides an attractive way of integrating time-to-event data within the field of Structural Equation Modelling. The authors demonstrate the efficacy of this approach in identifying complex causal pathways that mediate the effects of a socio-economic baseline covariate on the hazard of death from colon cancer. The results show that this approach has the potential to shed light on a class of research questions which is of particular relevance in health research. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
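The discrete-time survival model that item 24 embeds in an SEM reduces, on its own, to a logistic regression on a person-period file. The sketch below shows that core step with a single mediator; the full SEM machinery (simultaneous equations, indirect-effect decomposition) is not reproduced, and the data are simulated:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)

# Hypothetical data: social class raises the odds of emergency admission
# (the mediator), and both raise the per-period hazard of death.
n = 3000
low_class = rng.binomial(1, 0.4, n)
emergency = rng.binomial(1, 0.2 + 0.2 * low_class)
base_hazard = 0.10

rows = []
for i in range(n):
    h = base_hazard * np.exp(0.3 * low_class[i] + 0.5 * emergency[i])
    for period in range(1, 11):  # up to ten discrete follow-up periods
        died = rng.random() < h
        rows.append((i, period, low_class[i], emergency[i], int(died)))
        if died:
            break
pp = pd.DataFrame(rows, columns=["id", "period", "low_class", "emergency", "died"])

# The discrete-time survival model: logistic regression on the person-period
# file with a per-period baseline hazard. In the SEM framework this becomes
# one equation among those linking class -> emergency admission -> hazard.
model = smf.logit("died ~ C(period) + low_class + emergency", data=pp).fit(disp=0)
print(model.params[["low_class", "emergency"]])
```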
25. Head to head comparison of the propensity score and the high-dimensional propensity score matching methods.
- Author
Guertin, Jason R., Rahme, Elham, Dormuth, Colin R., and LeLorier, Jacques
- Subjects
DIABETES risk factors, CONFOUNDING variables, MEDICAL care, PHARMACEUTICAL industry, PHARMACEUTICAL services insurance, DRUG therapy for hyperlipidemia, AGE distribution, ANTILIPEMIC agents, COMPARATIVE studies, CONFIDENCE intervals, DATABASES, HYPERLIPIDEMIA, RESEARCH methodology, MEDICAL care research, MEDICAL cooperation, PROBABILITY theory, REFERENCE values, RESEARCH, SEX distribution, LOGISTIC regression analysis, EVALUATION research, CASE-control method, ODDS ratio, DIAGNOSIS
- Abstract
Background: The comparative performance of the traditional propensity score (PS) and high-dimensional propensity score (hdPS) methods in the adjustment for confounding by indication remains unclear. We aimed to identify which method provided the best adjustment for confounding by indication within the context of the risk of diabetes among patients exposed to moderate versus high potency statins. Method: A cohort of diabetes-free incident statin users was identified from Quebec's publicly funded medico-administrative database (Full Cohort). We created two matched sub-cohorts by matching one patient initiated on a lower-potency statin to one patient initiated on a high-potency statin, on either the patients' PS or hdPS. The performance of both methods was compared by means of the absolute standardized differences (ASDD) of relevant characteristics and by means of the obtained measures of association. Results: Eight of the 18 examined characteristics were shown to be unbalanced within the Full Cohort. Although matching on either method achieved balance within all examined characteristics, matching on patients' hdPS created the most balanced sub-cohort. Measures of association and confidence intervals obtained within the two matched sub-cohorts overlapped. Conclusion: Although ASDD suggest better matching with the hdPS than with the PS, measures of association were almost identical when adjusted for either method. Use of the hdPS method in adjusting for confounding by indication within future studies should be recommended due to its ability to identify confounding variables which may be unknown to the investigators. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
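A bare-bones version of item 25's workflow, ordinary PS estimation, greedy 1:1 matching, and ASDD checks before and after matching, is sketched below. The hdPS variable-selection step is omitted, and the cohort is simulated:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Hypothetical cohort: two confounders drive high- vs lower-potency initiation.
n = 4000
age = rng.normal(65, 10, n)
male = rng.binomial(1, 0.5, n)
high = rng.binomial(1, 1 / (1 + np.exp(-(-3 + 0.04 * age + 0.3 * male))))
X = np.column_stack([age, male])

ps = LogisticRegression(max_iter=1000).fit(X, high).predict_proba(X)[:, 1]

# Greedy 1:1 nearest-neighbour matching on the PS without replacement.
treated = np.where(high == 1)[0]
controls = list(np.where(high == 0)[0])
pairs = []
for t in treated[np.argsort(ps[treated])]:
    j = int(np.argmin(np.abs(ps[controls] - ps[t])))
    pairs.append((t, controls.pop(j)))
matched = np.array(pairs).ravel()

def asdd(x, g):
    """Absolute standardized difference between groups g == 1 and g == 0."""
    m1, m0 = x[g == 1].mean(), x[g == 0].mean()
    v1, v0 = x[g == 1].var(ddof=1), x[g == 0].var(ddof=1)
    return abs(m1 - m0) / np.sqrt((v1 + v0) / 2)

for name, col in [("age", age), ("male", male)]:
    print(name, "ASDD full cohort:", round(asdd(col, high), 3),
          "matched:", round(asdd(col[matched], high[matched]), 3))
```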
26. Propensity score to detect baseline imbalance in cluster randomized trials: the role of the c-statistic.
- Author
Leyrat, Clémence, Caille, Agnès, Foucher, Yohann, and Giraudeau, Bruno
- Subjects
INFERENTIAL statistics, CONFOUNDING variables, STATISTICS, CLUSTER randomized controlled trials, SIMULATION methods & models, KNEE diseases, OSTEOARTHRITIS diagnosis, OSTEOARTHRITIS treatment, ALGORITHMS, CLINICAL trials, COMPUTER simulation, EXPERIMENTAL design, PROBABILITY theory, RESEARCH evaluation, RESEARCH funding, BIOINFORMATICS, PAIN measurement, STANDARDS, DIAGNOSIS, THERAPEUTICS
- Abstract
Background: Despite randomization, baseline imbalance and confounding bias may occur in cluster randomized trials (CRTs). Covariate imbalance may jeopardize the validity of statistical inferences if it occurs on prognostic factors. Thus, the diagnosis of such imbalance is essential so that the statistical analysis can be adjusted if required. Methods: We developed a tool based on the c-statistic of the propensity score (PS) model to detect global baseline covariate imbalance in CRTs and assess the risk of confounding bias. We performed a simulation study to assess the performance of the proposed tool and applied this method to analyze the data from 2 published CRTs. Results: The proposed method had good performance for large sample sizes (n = 500 per arm) and when the number of unbalanced covariates was not too small as compared with the total number of baseline covariates (≥40% of unbalanced covariates). We also provide a strategy for preselection of the covariates needed to be included in the PS model to enhance imbalance detection. Conclusion: The proposed tool could be useful in deciding whether covariate adjustment is required before performing statistical analyses of CRTs. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
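Item 26's diagnostic is compact: fit a PS model for arm membership on baseline covariates and read off its c-statistic, with values near 0.5 suggesting global balance. A minimal sketch on a simulated CRT (cluster count, sizes and the imbalance are invented; the in-sample AUC shown here is optimistic compared with a cross-validated one):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)

# Hypothetical CRT: 20 clusters of 50, with a mild imbalance between arms
# in one baseline covariate and none in the other.
clusters, size = 20, 50
arm = np.repeat(rng.permutation([0, 1] * (clusters // 2)), size)
cluster_effect = np.repeat(rng.normal(0, 0.3, clusters), size)
x1 = rng.normal(0.15 * arm + cluster_effect, 1)  # unbalanced covariate
x2 = rng.normal(0, 1, clusters * size)           # balanced covariate
X = np.column_stack([x1, x2])

# c-statistic of the PS model (arm ~ baseline covariates): values near 0.5
# suggest balance, values clearly above 0.5 flag global imbalance.
ps = LogisticRegression(max_iter=1000).fit(X, arm).predict_proba(X)[:, 1]
print("PS model c-statistic:", round(roc_auc_score(arm, ps), 3))
```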
27. Subgroup identification for treatment selection in biomarker adaptive design.
- Author
Lu, Tzu-Pin and Chen, James J.
- Subjects
BIOMARKERS, CANCER genetics, CANCER treatment, DRUG development, DISCRIMINANT analysis, ADENOCARCINOMA, LUNG tumors, ALGORITHMS, BIOLOGICAL assay, COMPUTER simulation, EXPERIMENTAL design, PROGNOSIS, PATIENT selection, STATISTICAL models, DIAGNOSIS
- Abstract
Background: Advances in molecular technology have shifted new drug development toward targeted therapy for treatments expected to benefit subpopulations of patients. Adaptive signature design (ASD) has been proposed to identify the most suitable target patient subgroup to enhance the efficacy of the treatment effect. There are two essential aspects in the development of biomarker adaptive designs: 1) an accurate classifier to identify the most appropriate treatment for patients, and 2) statistical tests to detect treatment effect in the relevant population and subpopulations. We propose the utilization of classification methods to identify patient subgroups and present a statistical testing strategy to detect treatment effects. Methods: The diagonal linear discriminant analysis (DLDA) is used to identify targeted and non-targeted subgroups. For binary endpoints, DLDA is directly applied to classify patients into two subgroups; for continuous endpoints, a two-step procedure involving model fitting and determination of a cutoff point is used for subgroup classification. The proposed strategy includes tests for treatment effect in all patients and in a marker-positive subgroup, with a possible follow-up estimation of treatment effect in the marker-negative subgroup. The proposed method is compared to the ASD classification method using simulated datasets and two publicly available cancer datasets. Results: The DLDA-based classifier performs well in terms of sensitivity, specificity, positive and negative predictive values, and accuracy in the simulation data and the two cancer datasets, with superior accuracy compared to the ASD method. The subgroup testing strategy is shown to be useful in detecting treatment effect in terms of power and control of study-wise error. Conclusion: Accuracy of a classifier is essential for adaptive designs. A poor classifier not only assigns patients to inappropriate treatments, but also reduces the power of the test, resulting in incorrect conclusions. The proposed procedure provides an effective approach for subgroup identification and subgroup analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
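DLDA, the classifier at the heart of item 27, is just linear discriminant analysis with a pooled diagonal covariance. A small self-contained sketch on simulated expression data (the paper's cutoff-finding step for continuous endpoints and the subgroup testing strategy are not shown):

```python
import numpy as np

rng = np.random.default_rng(10)

class DLDA:
    """Diagonal linear discriminant analysis: class means plus a pooled,
    diagonal covariance (LDA with between-feature correlations ignored)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        resid = np.concatenate([X[y == c] - X[y == c].mean(axis=0)
                                for c in self.classes_])
        self.var_ = resid.var(axis=0, ddof=len(self.classes_))  # pooled variances
        return self

    def predict(self, X):
        # Assign to the class with the smallest variance-standardized
        # distance to the class mean (equal priors assumed).
        d = ((X[:, None, :] - self.means_) ** 2 / self.var_).sum(axis=2)
        return self.classes_[np.argmin(d, axis=1)]

# Hypothetical expression data: 40 genes, responders differ on the first 5.
n, p = 200, 40
y = rng.binomial(1, 0.5, n)
X = rng.normal(0, 1, (n, p))
X[:, :5] += 0.9 * y[:, None]

model = DLDA().fit(X[:150], y[:150])
print("held-out accuracy:", round((model.predict(X[150:]) == y[150:]).mean(), 3))
```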
28. The Ottawa SAH search algorithms: protocol for a multi-centre validation study of primary subarachnoid hemorrhage prediction models using health administrative data (the SAHepi prediction study protocol)
- Author
English, S. W., McIntyre, L., Saigle, V., Chassé, M., Fergusson, D. A., Turgeon, A. F., Lauzier, F., Griesdale, D., Garland, A., Zarychanski, R., Algird, A., and van Walraven, C.
- Published
- 2018
- Full Text
- View/download PDF