Dwan, Kerry; Devane, Declan; Griffin, James; Lall, Ranjit; Patel, Smitaa; Rhodes, Sarah; Smith, Valerie; Williamson, Paula; Kirkham, Jamie
Introduction

Study publication bias arises when studies are published, or not, depending on their results [1,2]. Empirical research consistently suggests that published work is more likely to include statistically significant findings than unpublished research [3]. Study publication bias may lead to overestimation of treatment effects; it has been recognised as a threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. There is evidence that research without statistically significant results takes longer to achieve publication than research with significant results, further biasing the available evidence base over time [4–7]. This "time-lag bias" (or "pipeline bias") compounds the problem, since effect estimates from the earliest available evidence tend to be inflated and exaggerated [8,9].

Within-study selective reporting bias relates to studies that have been published. It is defined as the selection, on the basis of the results, of a subset of the original variables recorded for inclusion in a publication [10]. Several different types of selective reporting may occur within a study. For example, selective reporting of analyses may include intention-to-treat versus per-protocol analyses, endpoint values versus change from baseline, or different time points or subgroups [11]. Here we focus on the selective reporting of outcomes from those that were intended to be measured within a study (as pre-specified in the trial protocol, for example): outcome reporting bias (ORB).

Randomised controlled trials (RCTs) are planned experiments involving the random assignment of participants to interventions. They are seen as the gold standard of study designs for evaluating the effectiveness of a treatment in health research in humans [12]. Selective outcome reporting is likely to lead to overestimation of the effect of the experimental treatment. Studies comparing trial publications to protocols and/or trial registry entries are also accumulating evidence on the proportion of studies in which at least one primary outcome was changed, introduced or omitted [13].

Thus, the bias from missing outcome data that may affect a meta-analysis operates on two levels: non-publication due to lack of submission or rejection of study reports (a study-level problem) and the selective non-reporting of outcomes within published studies based on the results (an outcome-level problem). While much effort has been invested in trying to identify the former [1,2], it is equally important to understand the nature and frequency of missing data at the latter level.

The most recent version of this systematic review [14] summarised the empirical evidence for the existence of study publication bias and outcome reporting bias. It found that 12 of the 20 included empirical studies provided consistent evidence of an association between positive or statistically significant results and publication, and that statistically significant outcomes had higher odds of being reported fully. The ORBIT (Outcome Reporting Bias In Trials) study, conducted by authors of this review, found that a third of Cochrane reviews identified at least one trial with a high suspicion of outcome reporting bias for a single, reviewer-selected primary outcome of the review [15]. Work has also been published showing how to identify outcome reporting bias within a review and the relevant trial reports [16].
The objective of this review is to update the previous version [14] and summarise the evidence from empirical cohort studies that have assessed study publication bias and/or outcome reporting bias in inception cohorts of RCTs (a sample of clinical trials registered at onset or on a roster, e.g. approved by an ethics committee, during a specified period of time).

Methods

Study Inclusion Criteria

We will include research that assesses an inception cohort of RCTs for study publication bias and/or outcome reporting bias. We will focus on inception cohorts in which study protocols were registered before the start of the study, as this type of prospective design is deemed more reliable. We will exclude cohorts based on prevalence archives, in which a protocol is registered after a study is launched or completed, since such cohorts can already be affected by publication and selection bias. Cohorts containing RCTs exclusively and cohorts containing a mix of RCTs and non-RCTs will both be eligible. For studies where it is not possible to identify the study type (i.e. whether any included studies were RCTs), we will attempt to contact the authors to resolve this; where it cannot be resolved, the study will be excluded. Studies containing exclusively non-RCTs will be excluded. The assessment of RCTs in the included studies must involve a comparison of the protocol against all publications (for outcome reporting bias) or information from trialists (for study publication bias).

Search Strategy

Two authors will independently assess titles and abstracts and then full-text articles. No masking of author details will be used during the screening of abstracts. MEDLINE (2012 to 2020) and Scopus (2012 to 2020) will be searched without language restrictions (see Appendix for all search strategies). EMBASE will not be searched because Scopus is a much larger database than EMBASE: it offers more coverage of the scientific, technical, medical and social science literature than any other database, and over 90% of the sources indexed by EMBASE are also indexed by Scopus, along with many other sources. Additional steps will be taken to complement the electronic database searches: the lead or contact authors of all identified studies will be asked to identify further studies, and the references of included studies will be checked for further eligible studies. The Cochrane Methodology Register will not be searched as it has not been updated since it was last searched in 2012.

Quality Assessment

To assess the methodological quality of the included studies, the same criteria will be applied as in the original version of this review [14], which were based on the quality assessment in a review by Hopewell et al. [17]:

1. Was there complete follow-up (after data analysis) of all the trials in the cohort? Yes = ≥90%; No = <90%; Unclear.
2. Was publication ascertained through personal contact with the investigators? Yes = personal contact with investigators, or searching the literature and personal contact with the investigators; No = searching the literature only; Unclear.
3. Were positive and negative findings clearly defined? Yes = clearly defined; No = not clearly defined; Unclear.
4. Were protocols compared to publications (outcome reporting bias only)? Yes = protocols were compared to publications; No = protocols were not considered in the study; Unclear.
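As an illustration only (the protocol does not specify any software), the following minimal Python sketch shows one way the four quality-assessment judgements could be recorded for each included study. The `Judgement` and `QualityAssessment` names, the helper function and the example values are assumptions made for this sketch, not part of the review methods.

```python
# Illustrative sketch: field names and example values are hypothetical.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Judgement(Enum):
    YES = "Yes"
    NO = "No"
    UNCLEAR = "Unclear"


def follow_up_judgement(percent_followed_up: Optional[float]) -> Judgement:
    """Criterion 1: 'Yes' if >=90% of trials were followed up, 'No' below 90%, 'Unclear' if unknown."""
    if percent_followed_up is None:
        return Judgement.UNCLEAR
    return Judgement.YES if percent_followed_up >= 90 else Judgement.NO


@dataclass
class QualityAssessment:
    study_id: str
    complete_follow_up: Judgement         # criterion 1
    personal_contact: Judgement           # criterion 2
    findings_clearly_defined: Judgement   # criterion 3
    protocols_compared: Judgement         # criterion 4 (outcome reporting bias only)


# Hypothetical example: a cohort with 95% follow-up, publication ascertained
# by literature search only, and no protocol comparison reported.
example = QualityAssessment(
    study_id="hypothetical cohort",
    complete_follow_up=follow_up_judgement(95.0),
    personal_contact=Judgement.NO,
    findings_clearly_defined=Judgement.YES,
    protocols_compared=Judgement.UNCLEAR,
)
print(example)
```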
Data Extraction

A flow diagram showing the status of approved protocols will be completed for each empirical study by two authors independently, using information available in the publication or in further publications. Disagreements will be resolved through discussion. Lead or contact authors of the empirical studies will then be contacted by email and sent the flow diagram for their study so that they can check the extracted data, along with requests for further information or clarification of definitions where required. No masking will be used. Characteristics of the cohorts will be extracted independently by two authors. We will record the definition of ‘published’ employed in each empirical study. We will also record how the significance of the results of the studies in each cohort was assessed (i.e. the direction of results, whether the study defined significance as a p-value ≤ 0.05 and, where no statistical tests were performed, whether the results were categorised as negative, positive, important or unimportant). Two authors will independently extract data on the number of positive, negative or null trials that were published in each cohort. We will also extract all information on the main objectives of each empirical study and classify these according to whether they relate to study-level or outcome-level bias.

Data Analysis

This review will provide a descriptive summary of the included empirical studies. If possible, we will conduct a random-effects meta-analysis combining results from the different cohorts. If this is not possible due to methodological heterogeneity, we will display the results on a forest plot without combining the studies. The aims are to calculate:

• the odds ratio comparing publication of positive versus null or negative trials;
• the odds ratio comparing full reporting of statistically significant versus non-significant outcomes; and
• the proportion of studies with at least one primary outcome that was changed, introduced or omitted.

We will also consider how reporting has changed over time.
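To make the planned analysis concrete, the sketch below works through the first aim on hypothetical data: for each cohort it computes the odds ratio comparing publication of positive versus null or negative trials, then pools the log odds ratios with a DerSimonian-Laird random-effects model. The counts, function names and the 0.5 continuity correction are assumptions made for this illustration only; the review does not prescribe particular software, and the actual analysis could equally be run in a dedicated meta-analysis package.

```python
# Illustrative sketch with hypothetical counts; standard inverse-variance and
# DerSimonian-Laird formulas, not results from this review.
import math


def log_odds_ratio(pub_pos, unpub_pos, pub_neg, unpub_neg):
    """Log odds ratio (and its variance) for publication of positive versus
    null/negative trials, with a 0.5 continuity correction added to each cell."""
    a, b, c, d = (x + 0.5 for x in (pub_pos, unpub_pos, pub_neg, unpub_neg))
    return math.log((a * d) / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d


def dersimonian_laird(effects, variances):
    """Pool per-cohort effects (here, log odds ratios) with a random-effects model."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c) if c > 0 else 0.0
    w_re = [1 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2


# Hypothetical cohorts: (published positive, unpublished positive,
#                        published null/negative, unpublished null/negative)
cohorts = [(40, 10, 25, 30), (55, 15, 20, 35), (30, 20, 18, 22)]
effects, variances = zip(*(log_odds_ratio(*counts) for counts in cohorts))
pooled, (low, high), tau2 = dersimonian_laird(effects, variances)
print(f"Pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(low):.2f} to {math.exp(high):.2f}); tau^2 = {tau2:.3f}")
```

If methodological heterogeneity rules out pooling, the same per-cohort log odds ratios and variances would simply be displayed on a forest plot without a combined estimate, as stated above.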
References

1. Song F, Parekh S, Hooper L, Loke YK, Ryder J, et al. (2010) Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess 14.
2. Rothstein HR, Sutton AJ, Borenstein M (2005) Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. Wiley.
3. Dickersin K, Min YI (1993) NIH clinical trials and publication bias. Online J Curr Clin Trials Doc No 50.
4. Stern JM, Simes RJ (1997) Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ 315: 640–645.
5. Ioannidis JP (1998) Effect of the statistical significance of results on the time to completion and publication of randomized efficacy trials. JAMA 279: 281–286.
6. Scherer RW, Langenberg P, von Elm E (2007) Full publication of results initially presented in abstracts. Cochrane Database Syst Rev: MR000005.
7. Decullier E, Lheritier V, Chapuis F (2005) Fate of biomedical research protocols and publication bias in France: retrospective cohort study. BMJ 331: 19.
8. Ioannidis J, Lau J (2001) Evolution of treatment effects over time: empirical insight from recursive cumulative metaanalyses. Proc Natl Acad Sci U S A 98: 831–836.
9. Trikalinos TA, Churchill R, Ferri M, Leucht S, Tuunainen A, et al. (2004) Effect sizes in cumulative meta-analyses of mental health randomized trials evolved over time. J Clin Epidemiol 57: 1124–1130.
10. Hutton JL, Williamson PR (2000) Bias in meta-analysis due to outcome variable selection within studies. Applied Statistics 49: 359–370.
11. Williamson PR, Gamble C, Altman DG, Hutton JL (2005) Outcome selection bias in meta-analysis. Stat Methods Med Res 14: 515–524.
12. Kane RL, Wang J, Garrard J (2007) Reporting in randomized clinical trials improved after adoption of the CONSORT statement. J Clin Epidemiol 60: 241–249.
13. Dwan K, Altman DG, Cresswell L, Blundell M, Gamble C, et al. (2011) Comparison of protocols and registry entries to published reports for randomised controlled trials. Cochrane Database Syst Rev: MR000031.
14. Dwan K, Gamble C, Williamson PR, Kirkham JJ, for the Reporting Bias Group (2013) Systematic review of the empirical evidence of study publication bias and outcome reporting bias: an updated review. PLoS ONE 8(7): e66844.
15. Kirkham JJ, Dwan K, Altman DG, Gamble C, Dodd S, et al. (2010) The impact of outcome reporting bias in a cohort of systematic reviews. BMJ 340: c365.
16. Dwan K, Gamble C, Kolamunnage-Dona R, Mohammed S, Powell C, et al. (2010) Assessing the potential for outcome reporting bias in a review: a tutorial. Trials 11.
17. Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K (2009) Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database Syst Rev: MR000006.

Appendix

Appendix 1: Search strategies

MEDLINE search strategy (2012 to 2020)

1. publication$.tw.
2. exp Publications/
3. publish$.tw.
4. exp Publishing/
5. 1 or 2 or 3 or 4
6. bias.tw.
7. exp "Bias (Epidemiology)"/
8. 6 or 7
9. 5 and 8
10. exp Publication Bias/
11. 9 or 10
12. selective report$.tw.
13. selective non report$.tw.
14. selective non-report$.tw.
15. outcome report$ bias.tw.
16. 12 or 13 or 14 or 15
17. 11 or 16
18. cohort.tw.
19. exp Cohort Studies/
20. randomized controlled trial$.tw.
21. randomised controlled trial$.tw.
22. 18 or 19 or 20 or 21
23. 17 and 22

Scopus search strategy (2012 to 2020)

1. TITLE-ABS-KEY(publications$)
2. TITLE-ABS-KEY(publish$)
3. (TITLE-ABS-KEY(publications$)) OR (TITLE-ABS-KEY(publish$))
4. TITLE-ABS-KEY(bias)
5. ((TITLE-ABS-KEY(publications$)) OR (TITLE-ABS-KEY(publish$))) AND (TITLE-ABS-KEY(bias))
6. TITLE-ABS-KEY(selective report$)
7. TITLE-ABS-KEY(selective non report$)
8. TITLE-ABS-KEY(selective non-report$)
9. TITLE-ABS-KEY(outcome report$ bias)
10. (TITLE-ABS-KEY(selective report$)) OR (TITLE-ABS-KEY(selective non report$)) OR (TITLE-ABS-KEY(selective non-report$)) OR (TITLE-ABS-KEY(outcome report$ bias))
11. (((TITLE-ABS-KEY(publications$)) OR (TITLE-ABS-KEY(publish$))) AND (TITLE-ABS-KEY(bias))) OR ((TITLE-ABS-KEY(selective report$)) OR (TITLE-ABS-KEY(selective non report$)) OR (TITLE-ABS-KEY(selective non-report$)) OR (TITLE-ABS-KEY(outcome report$ bias)))
12. TITLE-ABS-KEY(cohort)
13. TITLE-ABS-KEY(randomized controlled trial$)
14. TITLE-ABS-KEY(randomised controlled trial$)
15. (TITLE-ABS-KEY(cohort)) OR (TITLE-ABS-KEY(randomized controlled trial$)) OR (TITLE-ABS-KEY(randomised controlled trial$))
16. ((((TITLE-ABS-KEY(publications$)) OR (TITLE-ABS-KEY(publish$))) AND (TITLE-ABS-KEY(bias))) OR ((TITLE-ABS-KEY(selective report$)) OR (TITLE-ABS-KEY(selective non report$)) OR (TITLE-ABS-KEY(selective non-report$)) OR (TITLE-ABS-KEY(outcome report$ bias)))) AND ((TITLE-ABS-KEY(cohort)) OR (TITLE-ABS-KEY(randomized controlled trial$)) OR (TITLE-ABS-KEY(randomised controlled trial$)))