Peerspectives: peer review training initiative for the biomedical sciences
- Author
Rohmann, Jessica, Wülk, Nadja, Piccininni, Marco, Grillmaier, Hannah, Abdikarim, Iman, Kurth, Tobias, and Glatz, Toivo
- Subjects
doctoral student training, transparency, Scholarly Publishing, educational initiative, Epidemiology, review quality, Medicine and Health Sciences, pre/post assessment, Public Health, Other Medicine and Health Sciences, Social and Behavioral Sciences, Library and Information Science
- Abstract
Background & Rationale

Scientific journals publish scholarly articles and provide an important platform for the transparent presentation, exchange, and discussion of new scientific developments. Peer review, though often criticized, plays an integral role in ensuring integrity and quality in this scientific process. Given its importance, it is surprising that scientific peer review and editorial processes generally remain absent from the curricula of advanced academic programs. Indeed, in Publons’ 2018 report on the global state of peer review, 88% of survey respondents indicated that peer review training is important or extremely important for ensuring high-quality peer review (Publons, 2018). Furthermore, a 2016 Wiley study of 170,000 researchers found that 77% of reviewers expressed interest in receiving further training (Warne, 2016). Nevertheless, many scientists report lacking guidance on how to review a scientific paper (Mulligan, Hall and Raphael, 2013). As a result, the first peer reviews performed by early-career researchers (ECRs) are often conducted in a self-guided, “learning-by-doing” setting, which can jeopardize quality and timeliness.

Due to the steadily growing number of articles submitted every day and the lack of incentives to peer review, journals report increasing difficulties in finding high-quality reviewers willing to accept review invitations (Heinemann, 2015; Publons, 2018). This was exacerbated during the COVID-19 pandemic (Kurth et al., 2020). Illustrating a further challenge, a 2020 study found that 12% of reviews included unprofessional comments, while 41% of reviews were incomplete, inaccurate, or contained unsubstantiated critiques (Gerwing et al., 2020). Although several peer review training resources are available (EQUATOR network, 2021), it remains unclear to what extent new reviewers use these (largely online) tools and whether they are effective. The few published studies on the topic suggest that short-duration training (Schroter et al., 2004), written feedback from editors (Callaham, Knopp and Gallagher, 2002), and simply matching new reviewers with experienced ones (Houry, Green and Callaham, 2012; Wong et al., 2017) are of limited value for improving review quality. To date, we have identified only two rather informal efforts to explicitly engage students in peer review; both were descriptive in nature, lacked formal assessment, and were small in scope (Xu et al., 2016; Podder et al., 2018). In fact, most published studies describing peer review training interventions lack rigorous evaluation, transparency in reporting, sufficient sample size, and hands-on, “real-world” application.

Project Conceptualization

At the Charité - Universitätsmedizin Berlin, doctoral students increasingly pursue cumulative, publication-based dissertation projects instead of monographs, while generally having little or no prior publication experience. Unsurprisingly, these students seek coursework that goes beyond basic scientific writing and introductions to statistics: they seek exposure to best practices in modern study design and to the data analysis strategies employed in cutting-edge biomedical research.
To address all the aforementioned gaps and engage students in a meaningful, hands-on way, we created an elective course for students in the Health Data Sciences (HDS) PhD Program in 2019: https://iph.charite.de/en/academic_programs/phd_in_health_data_sciences/peerspectives/

The basic structure was a series of four interactive lectures with take-home assignments, followed by four hands-on workshops. In the workshops, groups of four students were paired with a mentor with journal editing experience (workshop leaders) to produce four peer review reports for manuscripts currently under consideration at a journal partner (see the “Intervention” section for details). Our pilot study of Peerspectives with four participants indicated that it provided relevant training and was well received by the students, editor-mentors, and the partner journal’s staff. As a next step, after increasing the program’s capacity to accommodate larger groups over several semesters, we seek to gain insights into the effectiveness of the program.

Setting and participants

Following a pilot in the summer of 2019, we began offering Peerspectives as a recurring, semester-long elective course at the Charité - Universitätsmedizin Berlin (Germany). The course was led by instructors affiliated with the Health Data Sciences PhD program, and students could earn 4 credit points towards their studies upon successful completion. In the first semester run of the course (October 2020 - March 2021), due to limited capacity and high demand, spots were initially offered to doctoral students enrolled in the Health Data Sciences PhD program; the remaining spots were then made available, through an application process, to other doctoral students in the biomedical sciences at the Charité and at other national and international institutions. Interested students were asked to provide details about prior training in epidemiology and (bio)statistics and to describe their motivation to participate; this information was used for participant selection. Students not selected were encouraged to re-apply for future runs of the course. In the second, third, and fourth runs of the course (April 2021 - August 2021, October 2021 - March 2022, and April 2022 - August 2022), recent postdocs as well as Master’s students in higher semesters, nearing completion of their graduate programs, were also invited to apply. During these recruitment periods, we also advertised the course more intensively outside our institution in a targeted effort to reach interested students from international universities and those with more diverse academic backgrounds (e.g., fields adjacent to the health data sciences). Course instructors, coordinators, former students, and workshop leaders were encouraged to spread the word in their networks and on social media. The maximum course capacity was contingent on the number of workshop leaders available each semester; in the four runs of the course, we enrolled approximately 20 students per semester. Once students were offered a spot in the Peerspectives course, they were asked whether they were interested in participating in the scientific evaluation study (see next section).

Recruitment/Enrollment

All students selected to take part in the Peerspectives course were asked whether they would like to participate in our scientific evaluation study. If so, they were asked to provide written informed consent after reviewing the detailed, written participant information materials.
Students were informed that their choice to participate in the scientific study would in no way influence their ability to successfully pass the course and receive the 4 credit points. During the course, neither instructors nor workshop leaders were aware of whether a student was participating in the evaluation study. We continued enrollment until the minimum sample size target was exceeded (see below).

Ethical Considerations

The evaluation study of Peerspectives received approval from the ethics committee of the Charité - Universitätsmedizin Berlin on 17.11.2020 (EA4/190/20).

Intervention

The semester-long Peerspectives course provides peer review training in a hybrid structure. Due to the circumstances of the COVID-19 pandemic and to accommodate interested students outside Berlin, all runs of the course after the pilot were held fully online via Zoom. The first half of the course consists of four 180-minute interactive lectures led by faculty of the Health Data Sciences PhD Program at the Charité - Universitätsmedizin Berlin, focused on (1) the roles of scientific journals, editors, peer reviewers, and authors in scientific publishing; (2) sex- and gender-related aspects of peer review, ethical guidelines for peer review, and open science; (3) the conduct of peer review, including step-by-step guidance on how to write a constructive peer review report; and (4) a live demonstration of drafting a peer review report for a “real” scientific manuscript currently under review at the partner journal. Following each lecture, students are given a reflection assignment to be completed and submitted before a large-group discussion at the start of the next session.

In the second half of the course, students work together in assigned small groups to produce a peer review report for each of four “live” manuscripts provided by the partnering scientific journal. For every workshop group, four course participants are paired with one workshop leader who has prior peer review and editing experience at a scientific journal (“editor-mentor”). These editor-mentors are recruited from a growing personal network of the course creators; they participate on a voluntary basis without remuneration; and they are in no way involved in the handling of, or decision-making about, the manuscripts under review at the partner journal. Prior to each workshop meeting, the students draft the peer review report together, with a different student taking the lead organizational role each week. The draft report is then discussed and revised together with the workshop leader in a 180-minute workshop meeting. Once all workshop group members and the workshop leader approve the final review report, the workshop leader submits it to the journal on behalf of the trainee group (crediting all group members by name). Upon receipt of the journal’s decision on the paper, the workshop leader disseminates the comments from the editors and the other peer reviewers to all workshop group members, and the group has a chance to discuss these together.

Attendance at all lectures and workshops, submission of homework assignments, and active participation in the workshops are required to receive course credit. In extenuating circumstances, make-up assignments are provided to compensate for missed sessions; when this is not possible, only a certificate of attendance (without credit points) is issued.
Assessments and procedures

All course participants, regardless of whether they were also participants in the evaluation study, were required to sign a confidentiality agreement with the partnering journal developed for the purposes of Peerspectives, since the manuscripts used in the course are “live” and contemporaneously under review at the partnering journal. In addition to providing written informed consent, all evaluation study participants were asked to provide information about their age, gender, educational background and prior methods training, and any prior reviewing experience on a short questionnaire. Before starting the course, all study participants are asked to complete an online pre-course survey to self-assess their levels of knowledge and relevant skills. The same 8-question survey is administered again after the conclusion of the course, with additional room for students to provide feedback about the course to the instructors.

To assess the effectiveness of the Peerspectives course as part of a semester of doctoral studies, we will evaluate the program using a pre-/post-assessment comparison. For this purpose, all study participants are asked to draft a peer review report of a manuscript on their own, once before (“pre-course assessment”) and once after completing the course (“post-course assessment”), under simulated real-world conditions. Participants are told that they may use any resources available to them (“open-book”); however, they are explicitly instructed to work on these review reports alone and not in consultation with others. To mimic real-world peer review conditions, participants are given two weeks to complete the assessment task. Reminders are sent to any participant who has not yet submitted a report one week before the deadline, one day before the deadline, and on the day of the deadline. Participants may request a one-week extension of the deadline, in which case they are again sent reminders at the same intervals leading up to the new, extended deadline. For non-responding participants, to mimic the chasing mechanisms for unfinished peer reviews used by many journals’ manuscript submission systems, up to three additional reminders are sent until the peer review report is received.

Following the conclusion of a sufficient number of runs of the course to reach the sample size needed for the scientific evaluation, all submitted pre- and post-course assessments will be sent under a pseudonym to trained assessors (experienced editors at a partnering scientific journal) and scored using the validated Review Quality Instrument (RQI), version 3.2 (van Rooyen, Black and Godlee, 1999).
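For illustration, a paired pre-/post comparison of the resulting RQI scores could proceed along the following lines. This is a minimal Python sketch only: the file name, column names, and the choice of a Wilcoxon signed-rank test are illustrative assumptions and are not specified by this protocol.

    # Illustrative sketch: paired pre-/post-course comparison of review quality.
    # Assumes mean RQI item scores (items rated on a 1-5 scale) have been
    # compiled into a hypothetical CSV with one row per study participant.
    import pandas as pd
    from scipy.stats import wilcoxon

    # Hypothetical columns: participant_id, rqi_pre, rqi_post
    scores = pd.read_csv("rqi_scores.csv")

    # Each participant contributes both a pre- and a post-course report,
    # so the natural comparison is within-participant (paired differences).
    diff = scores["rqi_post"] - scores["rqi_pre"]
    print(f"Mean change in RQI score: {diff.mean():.2f} (SD {diff.std():.2f})")

    # A paired, non-parametric test is one reasonable choice here, given the
    # modest sample size and the ordinal nature of the RQI item ratings.
    stat, p = wilcoxon(scores["rqi_pre"], scores["rqi_post"])
    print(f"Wilcoxon signed-rank test: W={stat:.1f}, p={p:.3f}")

Any such analysis would, of course, be run only on the pseudonymized, assessor-scored reports described above.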
References:

Callaham, M.L., Knopp, R.K. and Gallagher, E.J. (2002) ‘Effect of written feedback by editors on quality of reviews: two randomized trials’, JAMA: The Journal of the American Medical Association, 287(21), pp. 2781–2783.
EQUATOR network (2021) Peer review training and resources. Available at: https://www.equator-network.org/toolkits/peer-reviewing-research/peer-review-training-and-resources/#PRTraining (Accessed: 4 March 2021).
Gerwing, T.G. et al. (2020) ‘Quantifying professionalism in peer review’, Research Integrity and Peer Review, 5, p. 9.
Heinemann, L. (2015) ‘Reviewer: an endangered species?!’, Journal of Diabetes Science and Technology, 9(2), pp. 167–168.
Houry, D., Green, S. and Callaham, M. (2012) ‘Does mentoring new peer reviewers improve review quality? A randomized trial’, BMC Medical Education, 12, p. 83.
Kurth, T. et al. (2020) ‘Parallel pandemic: the crush of covid-19 publications tests the capacity of scientific publishing’, BMJ [Preprint].
Mulligan, A., Hall, L. and Raphael, E. (2013) ‘Peer review in a changing world: an international study measuring the attitudes of researchers’, Journal of the American Society for Information Science and Technology, 64(1), pp. 132–161.
Podder, V. et al. (2018) ‘Collective conversational peer review of journal submission: a tool to integrate medical education and practice’, Annals of Neurosciences, 25(2), pp. 112–119.
Publons (2018) ‘Publons’ Global State of Peer Review 2018’. doi:10.14322/publons.gspr2018.
van Rooyen, S. et al. (1999) ‘Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial’, BMJ, 318(7175), pp. 23–27.
van Rooyen, S., Black, N. and Godlee, F. (1999) ‘Development of the review quality instrument (RQI) for assessing peer reviews of manuscripts’, Journal of Clinical Epidemiology, 52(7), pp. 625–629.
Schroter, S. et al. (2004) ‘Effects of training on quality of peer review: randomised controlled trial’, BMJ, p. 673. doi:10.1136/bmj.38023.700775.ae.
Warne, V. (2016) ‘Rewarding reviewers - sense or sensibility? A Wiley study explained’, Learned Publishing, pp. 41–50. doi:10.1002/leap.1002.
Wong, V.S.S. et al. (2017) ‘Mentored peer review of standardized manuscripts as a teaching tool for residents: a pilot randomized controlled multi-center study’, Research Integrity and Peer Review. doi:10.1186/s41073-017-0032-0.
Xu, J. et al. (2016) ‘Mentored peer reviewing for PhD faculty and students’, Nurse Education Today, 37, pp. 1–2.
- Published
- 2022