231 results for "Oxman, Andrew D"
Search Results
2. Contextualizing critical thinking about health using digital technology in secondary schools in Kenya: a qualitative analysis
- Author: Chesire, Faith, Ochieng, Marlyn, Mugisha, Michael, Ssenyonga, Ronald, Oxman, Matt, Nsangi, Allen, Semakula, Daniel, Nyirazinyoye, Laetitia, Lewin, Simon, Sewankambo, Nelson K., Kaseje, Margaret, Oxman, Andrew D., and Rosenbaum, Sarah
- Published: 2022
3. Health communication in and out of public health emergencies: to persuade or to inform?
- Author: Oxman, Andrew D., Fretheim, Atle, Lewin, Simon, Flottorp, Signe, Glenton, Claire, Helleve, Arnfinn, Vestrheim, Didrik Frimann, Iversen, Bjørn Gunnar, and Rosenbaum, Sarah E.
- Published: 2022
4. Effects of the Informed Health Choices podcast on the ability of parents of primary school children in Uganda to assess the trustworthiness of claims about treatment effects: one-year follow up of a randomised trial
- Author: Semakula, Daniel, Nsangi, Allen, Oxman, Andrew D., Oxman, Matt, Austvoll-Dahlgren, Astrid, Rosenbaum, Sarah, Morelli, Angela, Glenton, Claire, Lewin, Simon, Nyirazinyoye, Laetitia, Kaseje, Margaret, Chalmers, Iain, Fretheim, Atle, Rose, Christopher J., and Sewankambo, Nelson K.
- Published: 2020
5. Effects of the Informed Health Choices primary school intervention on the ability of children in Uganda to assess the reliability of claims about treatment effects, 1-year follow-up: a cluster-randomised trial
- Author: Nsangi, Allen, Semakula, Daniel, Oxman, Andrew D., Austvoll-Dahlgren, Astrid, Oxman, Matt, Rosenbaum, Sarah, Morelli, Angela, Glenton, Claire, Lewin, Simon, Kaseje, Margaret, Chalmers, Iain, Fretheim, Atle, Ding, Yunpeng, and Sewankambo, Nelson K.
- Published: 2020
6. Who can you trust? A review of free online sources of “trustworthy” information about treatment effects for patients and the public
- Author: Oxman, Andrew D. and Paulsen, Elizabeth J.
- Published: 2019
7. The GRADE Evidence to Decision (EtD) framework for health system and public health decisions
- Author: Moberg, Jenny, Oxman, Andrew D., Rosenbaum, Sarah, Schünemann, Holger J., Guyatt, Gordon, Flottorp, Signe, Glenton, Claire, Lewin, Simon, Morelli, Angela, Rada, Gabriel, Alonso-Coello, Pablo, and for the GRADE Working Group
- Published: 2018
8. A comparative evaluation of PDQ-Evidence
- Author: Johansen, Marit, Rada, Gabriel, Rosenbaum, Sarah, Paulsen, Elizabeth, Motaze, Nkengafac Villyen, Opiyo, Newton, Wiysonge, Charles S., Ding, Yunpeng, Mukinda, Fidele K., and Oxman, Andrew D.
- Published: 2018
9. Does the use of the Informed Healthcare Choices (IHC) primary school resources improve the ability of grade-5 children in Uganda to assess the trustworthiness of claims about the effects of treatments: protocol for a cluster-randomised trial.
- Author: Nsangi, Allen, Semakula, Daniel, Oxman, Andrew D., Oxman, Matthew, Rosenbaum, Sarah, Austvoll-Dahlgren, Astrid, Nyirazinyoye, Laetitia, Kaseje, Margaret, Chalmers, Iain, Fretheim, Atle, and Sewankambo, Nelson K.
- Subjects: MEDICAL decision making, HEALTH of school children, CHILDREN, CRITICAL thinking, HEALTH education, RANDOMIZED controlled trials, CHILD behavior, DECISION making, CURRICULUM, EXPERIMENTAL design, HEALTH attitudes, HEALTH behavior, JUDGMENT (Psychology), SCHOOL health services, THOUGHT & thinking, INFORMATION literacy
- Abstract
Background: The ability to appraise claims about the benefits and harms of treatments is crucial for informed health care decision-making. This research aims to enable children in East African primary schools (the clusters) to acquire and retain skills that can help them make informed health care choices by improving their ability to obtain, process and understand health information. The trial will evaluate (at the individual participant level) whether specially designed learning resources can teach children some of the key concepts relevant to appraising claims about the benefits and harms of health care interventions (treatments). Methods: This is a two-arm, cluster-randomised trial with stratified random allocation. We will recruit 120 primary schools (the clusters) between April and May 2016 in the central region of Uganda. We will stratify participating schools by geographical setting (rural, semi-urban, or urban) and ownership (public or private). The Informed Healthcare Choices (IHC) primary school resources consist of a textbook and a teachers' guide. Each of the students in the intervention arm will receive a textbook and attend nine lessons delivered by their teachers during a school term, with each lesson lasting 80 min. The lessons cover 12 key concepts that are relevant to assessing claims about treatments and making informed health care choices. The second arm will carry on with the current primary school curriculum. We have designed the Claim Evaluation Tools to measure people's ability to apply key concepts related to assessing claims about the effects of treatments and making informed health care choices. The Claim Evaluation Tools use multiple choice questions addressing each of the 12 concepts covered by the IHC school resources. Using the Claim Evaluation Tools we will measure two primary outcomes: (1) the proportion of children who 'pass', based on an absolute standard and (2) their average scores. Discussion: As far as we are aware this is the first randomised trial to assess whether key concepts needed to judge claims about the effects of treatment can be taught to primary school children. Whatever the results, they will be relevant to learning how to promote critical thinking about treatment claims. Trial status: the recruitment of study participants was ongoing at the time of manuscript submission. Trial Registration: Pan African Clinical Trial Registry, trial identifier: PACTR201606001679337. Registered on 13 June 2016. [ABSTRACT FROM AUTHOR]
- Published: 2017
10. Policymaker experiences with rapid response briefs to address health-system and technology questions in Uganda.
- Author: Mijumbi-Deve, Rhona, Rosenbaum, Sarah E., Oxman, Andrew D., Lavis, John N., and Sewankambo, Nelson K.
- Subjects: MEDICAL care, HEALTH policy, EVIDENCE-based medicine, DECISION making in clinical medicine, PUBLIC health, DECISION making, MEDICAL research, POLICY sciences, TECHNOLOGY
- Abstract
Background: Health service and systems researchers have developed knowledge translation strategies to facilitate the use of reliable evidence for policy, including rapid response briefs as timely and responsive tools supporting decision making. However, little is known about users' experience with these newer formats for presenting evidence. We sought to explore Ugandan policymakers' experience with rapid response briefs in order to develop a format acceptable for policymakers. Methods: We used existing research regarding evidence formats for policymakers to inform the initial version of rapid response brief format. We conducted user testing with healthcare policymakers at various levels of decision making in Uganda, employing a concurrent think-aloud method, collecting data on elements including usability, usefulness, understandability, desirability, credibility and value of the document. We modified the rapid response briefs format based on the results of the user testing and sought feedback on the new format. Results: The participants generally found the format of the rapid response briefs usable, credible, desirable and of value. Participants expressed frustrations regarding several aspects of the document, including the absence of recommendations, lack of clarity about the type of document and its potential uses (especially for first time users), and a crowded front page. Participants offered conflicting feedback on preferred length of the briefs and use and placement of partner logos. Users had divided preferences for the older and newer formats. Conclusion: Although the rapid response briefs were generally found to be of value, there are major and minor frustrations impeding an optimal user experience. Areas requiring further research include how to address policymakers' expectations of recommendations in these briefs and their optimal length. [ABSTRACT FROM AUTHOR]
- Published: 2017
11. Assessing the complexity of interventions within systematic reviews: development, content and use of a new tool (iCAT_SR).
- Author: Lewin, Simon, Hendry, Maggie, Chandler, Jackie, Oxman, Andrew D., Michie, Susan, Shepperd, Sasha, Reeves, Barnaby C., Tugwell, Peter, Hannes, Karin, Rehfuess, Eva A., Welch, Vivien, Mckenzie, Joanne E., Burford, Belinda, Petkovic, Jennifer, Anderson, Laurie M., Harris, Janet, and Noyes, Jane
- Subjects: SYSTEMATIC reviews, MEDICAL care, COMPUTER software, TECHNOLOGICAL innovations, LOGIC, EVIDENCE
- Abstract
Background: Health interventions fall along a spectrum from simple to more complex. There is wide interest in methods for reviewing 'complex interventions', but few transparent approaches for assessing intervention complexity in systematic reviews. Such assessments may assist review authors in, for example, systematically describing interventions and developing logic models. This paper describes the development and application of the intervention Complexity Assessment Tool for Systematic Reviews (iCAT_SR), a new tool to assess and categorise levels of intervention complexity in systematic reviews. Methods: We developed the iCAT_SR by adapting and extending an existing complexity assessment tool for randomized trials. We undertook this adaptation using a consensus approach in which possible complexity dimensions were circulated for feedback to a panel of methodologists with expertise in complex interventions and systematic reviews. Based on these inputs, we developed a draft version of the tool. We then invited a second round of feedback from the panel and a wider group of systematic reviewers. This informed further refinement of the tool. Results: The tool comprises ten dimensions: (1) the number of active components in the intervention; (2) the number of behaviours of recipients to which the intervention is directed; (3) the range and number of organizational levels targeted by the intervention; (4) the degree of tailoring intended or flexibility permitted across sites or individuals in applying or implementing the intervention; (5) the level of skill required by those delivering the intervention; (6) the level of skill required by those receiving the intervention; (7) the degree of interaction between intervention components; (8) the degree to which the effects of the intervention are context dependent; (9) the degree to which the effects of the interventions are changed by recipient or provider factors; and (10) the nature of the causal pathway between intervention and outcome. Dimensions 1-6 are considered 'core' dimensions. Dimensions 7-10 are optional and may not be useful for all interventions. Conclusions: The iCAT_SR tool facilitates more in-depth, systematic assessment of the complexity of interventions in systematic reviews and can assist in undertaking reviews and interpreting review findings. Further testing of the tool is now needed. [ABSTRACT FROM AUTHOR]
- Published: 2017
12. Developing and evaluating communication strategies to support informed decisions and practice based on evidence (DECIDE): protocol and preliminary results
- Author: Treweek, Shaun, Oxman, Andrew D, Alderson, Philip, Bossuyt, Patrick M, Brandt, Linn, Brożek, Jan, Davoli, Marina, Flottorp, Signe, Harbour, Robin, Hill, Suzanne, Liberati, Alessandro, Liira, Helena, Schünemann, Holger J, Rosenbaum, Sarah, Thornton, Judith, Vandvik, Per Olav, Alonso-Coello, Pablo, and DECIDE Consortium
- Abstract
BACKGROUND Healthcare decision makers face challenges when using guidelines, including understanding the quality of the evidence or the values and preferences upon which recommendations are made, which are often not clear. METHODS GRADE is a systematic approach towards assessing the quality of evidence and the strength of recommendations in healthcare. GRADE also gives advice on how to go from evidence to decisions. It has been developed to address the weaknesses of other grading systems and is now widely used internationally. The Developing and Evaluating Communication Strategies to Support Informed Decisions and Practice Based on Evidence (DECIDE) consortium (http://www.decide-collaboration.eu/), which includes members of the GRADE Working Group and other partners, will explore methods to ensure effective communication of evidence-based recommendations targeted at key stakeholders: healthcare professionals, policymakers, and managers, as well as patients and the general public. Surveys and interviews with guideline producers and other stakeholders will explore how presentation of the evidence could be improved to better meet their information needs. We will collect further stakeholder input from advisory groups, via consultations and user testing; this will be done across a wide range of healthcare systems in Europe, North America, and other countries. Targeted communication strategies will be developed, evaluated in randomized trials, refined, and assessed during the development of real guidelines. DISCUSSION Results of the DECIDE project will improve the communication of evidence-based healthcare recommendations. Building on the work of the GRADE Working Group, DECIDE will develop and evaluate methods that address communication needs of guideline users. The project will produce strategies for communicating recommendations that have been rigorously evaluated in diverse settings, and it will support the transfer of research into practice in healthcare systems globally.
- Published: 2013
13. Can an educational podcast improve the ability of parents of primary school children to assess the reliability of claims made about the benefits and harms of treatments: study protocol for a randomised controlled trial.
- Author: Semakula, Daniel, Nsangi, Allen, Oxman, Matt, Austvoll-Dahlgren, Astrid, Rosenbaum, Sarah, Kaseje, Margaret, Nyirazinyoye, Laetitia, Fretheim, Atle, Chalmers, Iain, Oxman, Andrew D., and Sewankambo, Nelson K.
- Subjects: PODCASTING, HEALTH of school children, PARENT-child relationships, PARENT-child caregiver relationships, THERAPEUTICS, RANDOMIZED controlled trials, EDUCATION of parents, COMPARATIVE studies, DECISION making, EXPERIMENTAL design, HEALTH attitudes, HEALTH education, INCOME, MASS media, RESEARCH methodology, MEDICAL cooperation, PSYCHOLOGICAL tests, READABILITY (Literary style), RESEARCH, RISK assessment, SCHOOLS, THOUGHT & thinking, EVIDENCE-based medicine, INFORMATION literacy, EVALUATION research, EDUCATIONAL attainment
- Abstract
Background: Claims made about the effects of treatments are very common in the media and in the population more generally. The ability of individuals to understand and assess such claims can affect their decisions and health outcomes. Many people in both low- and high-income countries have inadequate aptitude to assess information about the effects of treatments. As part of the Informed Healthcare Choices project, we have prepared a series of podcast episodes to help improve people's ability to assess claims made about treatment effects. We will evaluate the effect of the Informed Healthcare Choices podcast on people's ability to assess claims made about the benefits and harms of treatments. Our study population will be parents of primary school children in schools with limited educational and financial resources in Uganda. Methods: This will be a two-arm, parallel-group, individual-randomised trial. We will randomly allocate consenting participants who meet the inclusion criteria for the trial to either listen to nine episodes of the Informed Healthcare Choices podcast (intervention) or to listen to nine typical public service announcements about health issues (control). Each podcast includes a story about a treatment claim, a message about one key concept that we believe is important for people to be able to understand to assess treatment claims, an explanation of how that concept applies to the claim, and a second example illustrating the concept. We designed the Claim Evaluation Tools to measure people's ability to apply key concepts related to assessing claims made about the effects of treatments and making informed health care choices. The Claim Evaluation Tools that we will use include multiple-choice questions addressing each of the nine concepts covered by the podcast. Using the Claim Evaluation Tools, we will measure two primary outcomes: (1) the proportion that 'pass', based on an absolute standard and (2) the average score. Discussion: As far as we are aware this is the first randomised trial to assess the use of mass media to promote understanding of the key concepts needed to judge claims made about the effects of treatments. Trial Registration: Pan African Clinical Trials Registry, PACTR201606001676150. Registered on 12 June 2016. http://www.pactr.org/ATMWeb/appmanager/atm/atmregistry?dar=true&tNo=PACTR201606001676150 . [ABSTRACT FROM AUTHOR]
- Published: 2017
14. A tailored intervention to implement guideline recommendations for elderly patients with depression in primary care: a pragmatic cluster randomised trial.
- Author: Aakhus, Eivind, Granlund, Ingeborg, Odgaard-Jensen, Jan, Oxman, Andrew D., and Flottorp, Signe A.
- Subjects: DEPRESSION in old age, OLDER patients, GERIATRIC psychiatry, DEPRESSED persons, CHRONIC disease treatment, PRIMARY care, MENTAL health, MEDICAL care, THERAPEUTICS, MENTAL depression, CLUSTER analysis (Statistics), COMPARATIVE studies, INTERVIEWING, RESEARCH methodology, MEDICAL cooperation, MEDICAL protocols, PRIMARY health care, RESEARCH, EVALUATION research, RANDOMIZED controlled trials
- Abstract
Background: Elderly patients with depression are underdiagnosed, undertreated and run a high risk of a chronic course. General practitioners adhere to clinical practice guidelines to a limited degree. In the international research project Tailored Implementation for Chronic Diseases, we tested the effectiveness of tailored interventions to improve care for patients with chronic diseases. In Norway, we examined this approach to improve adherence to six guideline recommendations for elderly patients with depression targeting healthcare professionals, patients and administrators. Methods: We conducted a cluster randomised trial in 80 Norwegian municipalities. We identified determinants of practice for six recommendations and subsequently tailored interventions to address these determinants. The interventions targeted healthcare professionals, administrators and patients and consisted of outreach visits, a website presenting the recommendations and the underlying evidence, tools to manage depression in the elderly and other web-based resources, including a continuous medical education course for general practitioners. The primary outcome was mean adherence to the recommendations. Secondary outcomes were improvement in depression symptoms as measured by patients and general practitioners. We offered outreach visits to all general practitioners and practice staff in the intervention municipalities. We used electronic software that extracted eligible patients from the general practitioners' lists. We collected data by interviewing general practitioners or sending them a questionnaire about their practice for four patients on their list and by sending a questionnaire to the patients. Results: One hundred twenty-four of the 900 general practitioners (14 %) participated in the data collection, 51 in the intervention group and 73 in the control group. We interviewed 77 general practitioners, 47 general practitioners completed the questionnaire, and 134 patients responded to the questionnaire. Amongst the general practitioners who provided data, adherence to the recommendations was 1.6 percentage points higher in the intervention group than in the control group (95 % CI -6 to 9). Conclusions: The effectiveness of our tailored intervention to implement recommendations for elderly patients with depression in primary care is uncertain, due to the low response rate in the data collection. However, it is unlikely that the effect was large. It remains uncertain how best to improve adherence to evidence-based recommendations and thereby improve the quality of care for these patients. Trial Registration: ClinicalTrials.gov: NCT01913236. [ABSTRACT FROM AUTHOR]
- Published: 2016
15. Policymakers’ and other stakeholders’ perceptions of key considerations for health system decisions and the presentation of evidence to inform those considerations: an international survey
- Author: Vogel, Joshua P, Oxman, Andrew D, Glenton, Claire, Rosenbaum, Sarah, Lewin, Simon, Gülmezoglu, A., and Souza, João
- Published: 2013
16. Tailoring interventions to implement recommendations for the treatment of elderly patients with depression: a qualitative study.
- Author: Aakhus, Eivind, Granlund, Ingeborg, Oxman, Andrew D., and Flottorp, Signe A.
- Subjects: GERIATRIC psychology, PRIMARY health care, COMMUNITY health services, MENTAL health services
- Abstract
Background: To improve adherence to evidence-based recommendations, it is logical to identify determinants of practice and tailor interventions to address these. We have previously prioritised six recommendations to improve treatment of elderly patients with depression, and identified determinants of adherence to these recommendations. The aim of this article is to describe how we tailored interventions to address the determinants for the implementation of the recommendations. Methods: We drafted an intervention plan, based on the determinants we had identified in a previous study. We conducted six group interviews with representatives of health professionals (GPs and nurses), implementation researchers, quality improvement officers, professional and voluntary organisations and relatives of elderly patients with depression. We informed about the gap between evidence and practice for elderly patients with depression and presented the prioritised determinants that applied to each recommendation. Participants brainstormed individually and then in groups, suggesting interventions to address the determinants. We then presented evidence on the effectiveness of strategies for implementing depression guidelines. We asked the groups to prioritise the suggested interventions considering the perceived impact of determinants and of interventions, the research evidence underlying the interventions, feasibility and cost. We audiotaped and transcribed the interviews and applied a five step framework for our analysis. We created a logic model with links between the determinants, the interventions, and the targeted improvements in adherence. Results: Six groups with 29 individuals provided 379 suggestions for interventions. Most suggestions could be fit within the drafted plan, but the groups provided important amendments or additions. We sorted the interventions into six categories: resources for municipalities to develop a collaborative care plan, resources for health professionals, resources for patients and their relatives, outreach visits, educational and web-based tools. Some interventions addressed one determinant, while other interventions addressed several determinants. Conclusions: It was feasible and helpful to use group interviews and combine open and structured approaches to identify interventions that addressed prioritised determinants to adherence to the recommendations. This approach generated a large number of suggested interventions. We had to prioritise to tailor the interventions strategies. [ABSTRACT FROM AUTHOR]
- Published: 2015
17. Feasibility of a rapid response mechanism to meet policymakers' urgent needs for research evidence about health systems in a low income country: a case study.
- Author: Mijumbi, Rhona M., Oxman, Andrew D., Panisset, Ulysses, and Sewankambo, Nelson K.
- Subjects: HEALTH policy, MEDICAL care, EVIDENCE-based medicine, LOW-income countries, UGANDA. Ministry of Health
- Abstract
Objectives Despite the recognition of the importance of evidence-informed health policy and practice, there are still barriers to translating research findings into policy and practice. The present study aimed to establish the feasibility of a rapid response mechanism, a knowledge translation strategy designed to meet policymakers' urgent needs for evidence about health systems in a low income country, Uganda. Rapid response mechanisms aim to address the barriers of timeliness and relevance of evidence at the time it is needed. Methods A rapid response mechanism (service) designed a priori was offered to policymakers in the health sector in Uganda. In the form of a case study, data were collected about the profile of users of the service, the kinds of requests for evidence, changes in answers, and courses of action influenced by the mechanism and their satisfaction with responses and the mechanism in general. Results We found that in the first 28 months, the service received 65 requests for evidence from 30 policymakers and stakeholders, the majority of whom were from the Ministry of Health. The most common requests for evidence were about governance and organization of health systems. It was noted that regular contact between the policymakers and the researchers at the response service was an important factor in response to, and uptake of the service. The service seemed to increase confidence for policymakers involved in the policymaking process. Conclusion Rapid response mechanisms designed to meet policymakers' urgent needs for research evidence about health systems are feasible and acceptable to policymakers in low income countries. [ABSTRACT FROM AUTHOR]
- Published: 2014
18. Tailored interventions to implement recommendations for elderly patients with depression in primary care: a study protocol for a pragmatic cluster randomised controlled trial.
- Author: Aakhus, Eivind, Granlund, Ingeborg, Odgaard-Jensen, Jan, Wensing, Michel, Oxman, Andrew D., and Flottorp, Signe A.
- Subjects: DEPRESSION in old age, OLDER patients, DEPRESSED persons, PRIMARY care, RANDOMIZED controlled trials, EDUCATION
- Abstract
Background The prevalence of depression is high and the elderly have an increased risk of developing chronic course. International data suggest that depression in the elderly is under-recognised, the latency before clinicians provide a treatment plan is longer and elderly patients with depression are not offered psychotherapy to the same degree as younger patients. Although recommendations for the treatment of elderly patients with depression exist, health-care professionals adhere to these recommendations to a limited degree only. We conducted a systematic review to identify recommendations for managing depression in the elderly and prioritised six recommendations. We identified and prioritised the determinants of practice related to the implementation of these recommendations in primary care, and subsequently discussed and prioritised interventions to address the identified determinants. The objective of this study is to evaluate the effectiveness of these tailored interventions for the six recommendations for the management of elderly patients with depression in primary care. Methods/design We will conduct a pragmatic cluster randomised trial comparing the implementation of the six recommendations using tailored interventions with usual care. We will randomise 80 municipalities into one of two groups: an intervention group, to which we will deliver tailored interventions to implement the six recommendations, and a control group, to which we will not deliver any intervention. We will randomise municipalities rather than patients, individual clinicians or practices, because we will deliver the intervention for the first three recommendations at the municipal level and we want to minimise the risk of contamination across GP practices for the other three recommendations. The primary outcome is the proportion of actions taken by GPs that are consistent with the recommendations. Discussion This trial will investigate whether a tailored implementation approach is an effective strategy for improving collaborative care in the municipalities and health-care professionals' practice towards elderly patients with depression in primary care. The effectiveness evaluation described in this protocol will be accompanied with a process evaluation exploring why and how the interventions were effective or ineffective. Trial registration ClinicalTrials.gov: NCT01913236 [ABSTRACT FROM AUTHOR]
- Published: 2014
19. A checklist for identifying determinants of practice: A systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice.
- Author: Flottorp, Signe A., Oxman, Andrew D., Krause, Jane, Musila, Nyokabi R., Wensing, Michel, Godycki-Cwirko, Maciek, Baker, Richard, and Eccles, Martin P.
- Subjects: MEDICAL practice, MEDICAL personnel, ORGANIZATIONAL change, ELECTRONIC spreadsheets, PROFESSIONAL practice, PROFESSIONAL relationships
- Abstract
Background: Determinants of practice are factors that might prevent or enable improvements. Several checklists, frameworks, taxonomies, and classifications of determinants of healthcare professional practice have been published. In this paper, we describe the development of a comprehensive, integrated checklist of determinants of practice (the TICD checklist). Methods: We performed a systematic review of frameworks of determinants of practice followed by a consensus process. We searched electronic databases and screened the reference lists of key background documents. Two authors independently assessed titles and abstracts, and potentially relevant full text articles. We compiled a list of attributes that a checklist should have: comprehensiveness, relevance, applicability, simplicity, logic, clarity, usability, suitability, and usefulness. We assessed included articles using these criteria and collected information about the theory, model, or logic underlying how the factors (determinants) were selected, described, and grouped, the strengths and weaknesses of the checklist, and the determinants and the domains in each checklist. We drafted a preliminary checklist based on an aggregated list of determinants from the included checklists, and finalized the checklist by a consensus process among implementation researchers. Results: We screened 5,778 titles and abstracts and retrieved 87 potentially relevant papers in full text. Several of these papers had references to papers that we also retrieved in full text. We also checked potentially relevant papers we had on file that were not retrieved by the searches. We included 12 checklists. None of these were completely comprehensive when compared to the aggregated list of determinants and domains. We developed a checklist with 57 potential determinants of practice grouped in seven domains: guideline factors, individual health professional factors, patient factors, professional interactions, incentives and resources, capacity for organisational change, and social, political, and legal factors. We also developed five worksheets to facilitate the use of the checklist. Conclusions: Based on a systematic review and a consensus process we developed a checklist that aims to be comprehensive and to build on the strengths of each of the 12 included checklists. The checklist is accompanied with five worksheets to facilitate its use in implementation research and quality improvement projects. [ABSTRACT FROM AUTHOR]
- Published: 2013
20. SUPPORT Tools for evidence-informed health Policymaking (STP) 9: Assessing the applicability of the findings of a systematic review.
- Author: Lavis, John N., Oxman, Andrew D., Souza, Nathan M., Lewin, Simon, Gruen, Russell L., and Fretheim, Atle
- Subjects: HEALTH policy, RESEARCH, HUMAN biology, MEDICAL care, PHARMACEUTICAL policy
- Abstract
Differences between health systems may often result in a policy or programme option that is used in one setting not being feasible or acceptable in another. Or these differences may result in an option not working in the same way in another setting, or even achieving different impacts in another setting. A key challenge that policymakers and those supporting them must face is therefore the need to understand whether research evidence about an option can be applied to their setting. Systematic reviews make this task easier by summarising the evidence from studies conducted in a variety of different settings. Many systematic reviews, however, do not provide adequate descriptions of the features of the actual settings in which the original studies were conducted. In this article, we suggest questions to guide those assessing the applicability of the findings of a systematic review to a specific setting. These are: 1. Were the studies included in a systematic review conducted in the same setting or were the findings consistent across settings or time periods? 2. Are there important differences in on-the-ground realities and constraints that might substantially alter the feasibility and acceptability of an option? 3. Are there important differences in health system arrangements that may mean an option could not work in the same way? 4. Are there important differences in the baseline conditions that might yield different absolute effects even if the relative effectiveness was the same? 5. What insights can be drawn about options, implementation, and monitoring and evaluation? Even if there are reasonable grounds for concluding that the impacts of an option might differ in a specific setting, insights can almost always be drawn from a systematic review about possible options, as well as approaches to the implementation of options and to monitoring and evaluation. [ABSTRACT FROM AUTHOR]
- Published: 2009
21. SUPPORT Tools for evidence-informed health Policymaking (STP) 7: Finding systematic reviews.
- Author: Lavis, John N., Oxman, Andrew D., Grimshaw, Jeremy, Johansen, Marit, Boyko, Jennifer A., Lewin, Simon, and Fretheim, Atle
- Subjects: HEALTH policy, STAKEHOLDERS, PUBLIC interest, RESEARCH, META-analysis
- Abstract
Systematic reviews are increasingly seen as a key source of information in policymaking, particularly in terms of assisting with descriptions of the impacts of options. Relative to single studies they offer a number of advantages related to understanding impacts and are also seen as a key source of information for clarifying problems and providing complementary perspectives on options. Systematic reviews can be undertaken to place problems in comparative perspective and to describe the likely harms of an option. They also assist with understanding the meanings that individuals or groups attach to a problem, how and why options work, and stakeholder views and experiences related to particular options. A number of constraints have hindered the wider use of systematic reviews in policymaking. These include a lack of awareness of their value and a mismatch between the terms employed by policymakers, when attempting to retrieve systematic reviews, and the terms used by the original authors of those reviews. Mismatches between the types of information that policymakers are seeking, and the way in which authors fail to highlight (or make obvious) such information within systematic reviews have also proved problematic. In this article, we suggest three questions that can be used to guide those searching for systematic reviews, particularly reviews about the impacts of options being considered. These are: 1. Is a systematic review really what is needed? 2. What databases and search strategies can be used to find relevant systematic reviews? 3. What alternatives are available when no relevant review can be found? [ABSTRACT FROM AUTHOR]
- Published: 2009
22. SUPPORT Tools for evidence-informed health Policymaking (STP) 8: Deciding how much confidence to place in a systematic review.
- Author: Lewin, Simon, Oxman, Andrew D., Lavis, John N., and Fretheim, Atle
- Subjects: HEALTH policy, MATHEMATICAL variables, DECISION making, MEDICAL care, RANDOMIZED controlled trials
- Abstract
The reliability of systematic reviews of the effects of health interventions is variable. Consequently, policymakers and others need to assess how much confidence can be placed in such evidence. The use of systematic and transparent processes to determine such decisions can help to prevent the introduction of errors and bias in these judgements. In this article, we suggest five questions that can be considered when deciding how much confidence to place in the findings of a systematic review of the effects of an intervention. These are: 1. Did the review explicitly address an appropriate policy or management question? 2. Were appropriate criteria used when considering studies for the review? 3. Was the search for relevant studies detailed and reasonably comprehensive? 4. Were assessments of the studies' relevance to the review topic and of their risk of bias reproducible? 5. Were the results similar from study to study? [ABSTRACT FROM AUTHOR]
- Published: 2009
23. SUPPORT Tools for evidence-informed health Policymaking (STP) 3: Setting priorities for supporting evidence-informed policymaking.
- Author: Lavis, John N., Oxman, Andrew D., Lewin, Simon, and Fretheim, Atle
- Subjects: HEALTH policy, RESEARCH, STAKEHOLDERS, FINANCE, COMMUNICATION, MEDICAL care
- Abstract
Policymakers have limited resources for developing -- or supporting the development of -- evidence-informed policies and programmes. These required resources include staff time, staff infrastructural needs (such as access to a librarian or journal article purchasing), and ongoing professional development. They may therefore prefer instead to contract out such work to independent units with more suitably skilled staff and appropriate infrastructure. However, policymakers may only have limited financial resources to do so. Regardless of whether the support for evidence-informed policymaking is provided in-house or contracted out, or whether it is centralised or decentralised, resources always need to be used wisely in order to maximise their impact. Examples of undesirable practices in a priority-setting approach include timelines to support evidence-informed policymaking being negotiated on a case-by-case basis (instead of having clear norms about the level of support that can be provided for each timeline), implicit (rather than explicit) criteria for setting priorities, ad hoc (rather than systematic and explicit) priority-setting process, and the absence of both a communications plan and a monitoring and evaluation plan. In this article, we suggest questions that can guide those setting priorities for finding and using research evidence to support evidence-informed policymaking. These are: 1. Does the approach to prioritisation make clear the timelines that have been set for addressing high-priority issues in different ways? 2. Does the approach incorporate explicit criteria for determining priorities? 3. Does the approach incorporate an explicit process for determining priorities? 4. Does the approach incorporate a communications strategy and a monitoring and evaluation plan? [ABSTRACT FROM AUTHOR]
- Published: 2009
24. SUPPORT Tools for evidence-informed health Policymaking (STP) 2: Improving how your organisation supports the use of research evidence to inform policymaking.
- Author: Oxman, Andrew D., Vandvik, Per Olav, Lavis, John N., Fretheim, Atle, and Lewin, Simon
- Subjects: HEALTH policy, RESEARCH, DECISION making, CONFLICT management, ORGANIZATION
- Abstract
In this article, we address ways of organising efforts to support evidence-informed health policymaking. Efforts to link research to action may include a range of activities related to the production of research that is both highly relevant to -- and appropriately synthesised for -- policymakers. Such activities may include a mix of efforts used to link research to action, as well as the evaluation of such efforts. Little is known about how best to organise the range of activity options available and, until recently, there have been relatively few organisations responsible for supporting the use of research evidence in developing health policy. We suggest five questions that can help guide considerations of how to improve organisational arrangements to support the use of research evidence to inform health policy decision making. These are: 1. What is the capacity of your organisation to use research evidence to inform decision making? 2. What strategies should be used to ensure collaboration between policymakers, researchers and stakeholders? 3. What strategies should be used to ensure independence as well as the effective management of conflicts of interest? 4. What strategies should be used to ensure the use of systematic and transparent methods for accessing, appraising and using research evidence? 5. What strategies should be used to ensure adequate capacity to employ these methods? [ABSTRACT FROM AUTHOR]
- Published: 2009
25. SUPPORT Tools for Evidence-informed policymaking in health 18: Planning monitoring and evaluation of policies.
- Author: Fretheim, Atle, Oxman, Andrew D., Lavis, John N., and Lewin, Simon
- Subjects: HEALTH policy, PATIENT monitoring, DECISION making, STAKEHOLDERS, FINANCE, VACCINES
- Abstract
The term monitoring is commonly used to describe the process of systematically collecting data to inform policymakers, managers and other stakeholders whether a new policy or programme is being implemented in accordance with their expectations. Indicators are used for monitoring purposes to judge, for example, if objectives are being achieved, or if allocated funds are being spent appropriately. Sometimes the term evaluation is used interchangeably with the term monitoring, but the former usually suggests a stronger focus on the achievement of results. When the term impact evaluation is used, this usually implies that there is a specific attempt to try to determine whether the observed changes in outcomes can be attributed to a particular policy or programme. In this article, we suggest four questions that can be used to guide the monitoring and evaluation of policy or programme options. These are: 1. Is monitoring necessary? 2. What should be measured? 3. Should an impact evaluation be conducted? 4. How should the impact evaluation be done? [ABSTRACT FROM AUTHOR]
- Published: 2009
26. SUPPORT Tools for evidence-informed health Policymaking (STP) 17: Dealing with insufficient research evidence.
- Author: Oxman, Andrew D., Lavis, John N., Fretheim, Atle, and Lewin, Simon
- Subjects: HEALTH policy, DECISION making, HEALTH care reform, MEDICAL personnel, PERSONNEL management, METHODOLOGY
- Abstract
In this article, we address the issue of decision making in situations in which there is insufficient evidence at hand. Policymakers often have insufficient evidence to know with certainty what the impacts of a health policy or programme option will be, but they must still make decisions. We suggest four questions that can be considered when there may be insufficient evidence to be confident about the impacts of implementing an option. These are: 1. Is there a systematic review of the impacts of the option? 2. Has inconclusive evidence been misinterpreted as evidence of no effect? 3. Is it possible to be confident about a decision despite a lack of evidence? 4. Is the option potentially harmful, ineffective or not worth the cost? [ABSTRACT FROM AUTHOR]
- Published: 2009
27. SUPPORT Tools for evidence-informed health Policymaking (STP) 16: Using research evidence in balancing the pros and cons of policies.
- Author: Oxman, Andrew D., Lavis, John N., Fretheim, Atle, and Lewin, Simon
- Subjects: HEALTH policy, DECISION making, ECONOMIC models, FINANCIAL statements, MEDICAL care costs
- Abstract
In this article, we address the use of evidence to inform judgements about the balance between the pros and cons of policy and programme options. We suggest five questions that can be considered when making these judgements. These are: 1. What are the options that are being compared? 2. What are the most important potential outcomes of the options being compared? 3. What is the best estimate of the impact of the options being compared for each important outcome? 4. How confident can policymakers and others be in the estimated impacts? 5. Is a formal economic model likely to facilitate decision making? [ABSTRACT FROM AUTHOR]
- Published: 2009
28. SUPPORT Tools for evidence-informed health Policymaking (STP) 15: Engaging the public in evidence-informed policymaking.
- Author: Oxman, Andrew D., Lewin, Simon, Lavis, John N., and Fretheim, Atle
- Subjects: HEALTH policy, GOVERNMENT policy, CIVIL society, CONSUMERS, DEMOCRACY, MEDICAL care
- Abstract
In this article, we address strategies to inform and engage the public in policy development and implementation. The importance of engaging the public (both patients and citizens) at all levels of health systems is widely recognised. They are the ultimate recipients of the desirable and undesirable impacts of public policies, and many governments and organisations have acknowledged the value of engaging them in evidence-informed policy development. The potential benefits of doing this include the establishment of policies that include their ideas and address their concerns, the improved implementation of policies, improved health services, and better health. Public engagement can also be viewed as a goal in itself by encouraging participative democracy, public accountability and transparency. We suggest three questions that can be considered with regard to public participation strategies. These are: 1. What strategies can be used when working with the mass media to inform the public about policy development and implementation? 2. What strategies can be used when working with civil society groups to inform and engage them in policy development and implementation? 3. What methods can be used to involve consumers in policy development and implementation? [ABSTRACT FROM AUTHOR]
- Published: 2009
29. SUPPORT Tools for evidence-informed health Policymaking (STP) 12: Finding and using research evidence about resource use and costs.
- Author: Oxman, Andrew D., Fretheim, Atle, Lavis, John N., and Lewin, Simon
- Subjects: HEALTH policy, MEDICAL care, COST effectiveness, MEDICAL care costs, STAKEHOLDERS
- Abstract
In this article, we address considerations about resource use and costs. The consequences of a policy or programme option for resource use differ from other impacts (both in terms of benefits and harms) in several ways. However, considerations of the consequences of options for resource use are similar to considerations related to other impacts in that policymakers and their staff need to identify important impacts on resource use, acquire and appraise the best available evidence regarding those impacts, and ensure that appropriate monetary values have been applied. We suggest four questions that can be considered when assessing resource use and the cost consequences of an option. These are: 1. What are the most important impacts on resource use? 2. What evidence is there for important impacts on resource use? 3. How confident is it possible to be in the evidence for impacts on resource use? 4. Have the impacts on resource use been valued appropriately in terms of their true costs? [ABSTRACT FROM AUTHOR]
- Published: 2009
30. SUPPORT Tools for evidence-informed health Policymaking (STP) 10: Taking equity into consideration when assessing the findings of a systematic review.
- Author: Oxman, Andrew D., Lavis, John N., Lewin, Simon, and Fretheim, Atle
- Subjects: HEALTH policy, EQUITY (Law), MEDICAL care, POPULATION, SOCIAL classes
- Abstract
In this article we address considerations of equity. Inequities can be defined as "differences in health which are not only unnecessary and avoidable but, in addition, are considered unfair and unjust". These have been well documented in relation to social and economic factors. Policies or programmes that are effective can improve the overall health of a population. However, the impact of such policies and programmes on inequities may vary: they may have no impact on inequities, they may reduce inequities, or they may exacerbate them, regardless of their overall effects on population health. We suggest four questions that can be considered when using research evidence to inform considerations of the potential impact a policy or programme option is likely to have on disadvantaged groups, and on equity in a specific setting. These are: 1. Which groups or settings are likely to be disadvantaged in relation to the option being considered? 2. Are there plausible reasons for anticipating differences in the relative effectiveness of the option for disadvantaged groups or settings? 3. Are there likely to be different baseline conditions across groups or settings such that that the absolute effectiveness of the option would be different, and the problem more or less important, for disadvantaged groups or settings? 4. Are there important considerations that should be made when implementing the option in order to ensure that inequities are reduced, if possible, and that they are not increased? [ABSTRACT FROM AUTHOR]
- Published: 2009
31. SUPPORT Tools for evidence-informed Policymaking in health 11: Finding and using evidence about local conditions.
- Author: Lewin, Simon, Oxman, Andrew D., Lavis, John N., Fretheim, Atle, Marti, Sebastian Garcia, and Munabi-Babigumira, Susan
- Subjects: HEALTH policy, DECISION making, MEDICAL care, DISEASE prevalence, CONSUMERS
- Abstract
Evidence about local conditions is evidence that is available from the specific setting(s) in which a decision or action on a policy or programme option will be taken. Such evidence is always needed, together with other forms of evidence, in order to inform decisions about options. Global evidence is the best starting point for judgements about effects, factors that modify those effects, and insights into ways to approach and address problems. But local evidence is needed for most other judgements about what decisions and actions should be taken. In this article, we suggest five questions that can help to identify and appraise the local evidence that is needed to inform a decision about policy or programme options. These are: 1. What local evidence is needed to inform a decision about options? 2. How can the necessary local evidence be found? 3. How should the quality of the available local evidence be assessed? 4. Are there important variations in the availability, quality or results of local evidence? 5. How should local evidence be incorporated with other information? [ABSTRACT FROM AUTHOR]
- Published: 2009
32. SUPPORT Tools for evidence-informed health Policymaking (STP) 1: What is evidence-informed policymaking?
- Author: Oxman, Andrew D., Lavis, John N., Lewin, Simon, and Fretheim, Atle
- Subjects: HEALTH policy, DECISION making, HEALTH status indicators, MEDICAL care
- Abstract
In this article, we discuss the following three questions: What is evidence? What is the role of research evidence in informing health policy decisions? What is evidence-informed policymaking? Evidence-informed health policymaking is an approach to policy decisions that aims to ensure that decision making is well-informed by the best available research evidence. It is characterised by the systematic and transparent access to, and appraisal of, evidence as an input into the policymaking process. The overall process of policymaking is not assumed to be systematic and transparent. However, within the overall process of policymaking, systematic processes are used to ensure that relevant research is identified, appraised and used appropriately. These processes are transparent in order to ensure that others can examine what research evidence was used to inform policy decisions, as well as the judgements made about the evidence and its implications. Evidence-informed policymaking helps policymakers gain an understanding of these processes. [ABSTRACT FROM AUTHOR]
- Published: 2009
33. SUPPORT Tools for Evidence-informed policymaking in health 6: Using research evidence to address how an option will be implemented.
- Author: Fretheim, Atle, Munabi-Babigumira, Susan, Oxman, Andrew D., Lavis, John N., and Lewin, Simon
- Subjects: HEALTH policy, RESEARCH, MEDICAL care, MEDICAL personnel, ORGANIZATIONAL change
- Abstract
After a policy decision has been made, the next key challenge is transforming this stated policy position into practical actions. What strategies, for instance, are available to facilitate effective implementation, and what is known about the effectiveness of such strategies? We suggest five questions that can be considered by policymakers when implementing a health policy or programme. These are: 1. What are the potential barriers to the successful implementation of a new policy? 2. What strategies should be considered in planning the implementation of a new policy in order to facilitate the necessary behavioural changes among healthcare recipients and citizens? 3. What strategies should be considered in planning the implementation of a new policy in order to facilitate the necessary behavioural changes in healthcare professionals? 4. What strategies should be considered in planning the implementation of a new policy in order to facilitate the necessary organisational changes? 5. What strategies should be considered in planning the implementation of a new policy in order to facilitate the necessary systems change [ABSTRACT FROM AUTHOR]
- Published: 2009
34. SUPPORT Tools for evidence-informed health Policymaking (STP) 5: Using research evidence to frame options to address a problem.
- Author: Lavis, John N., Wilson, Michael G., Oxman, Andrew D., Grimshaw, Jeremy, Lewin, Simon, and Fretheim, Atle
- Subjects: HEALTH policy, PATIENT monitoring, RESEARCH, BUDGET, COST control
- Abstract
Policymakers and those supporting them may find themselves in one or more of the following three situations that will require them to characterise the costs and consequences of options to address a problem. These are: 1. A decision has already been taken and their role is to maximise the benefits of an option, minimise its harms, optimise the impacts achieved for the money spent, and (if there is substantial uncertainty about the likely costs and consequences of the option) to design a monitoring and evaluation plan, 2. A policymaking process is already underway and their role is to assess the options presented to them, or 3. A policymaking process has not yet begun and their role is therefore to identify options, characterise the costs and consequences of these options, and look for windows of opportunity in which to act. In situations like these, research evidence, particularly about benefits, harms, and costs, can help to inform whether an option can be considered viable. In this article, we suggest six questions that can be used to guide those involved in identifying policy and programme options to address a high-priority problem, and to characterise the costs and consequences of these options. These are: 1. Has an appropriate set of options been identified to address a problem? 2. What benefits are important to those who will be affected and which benefits are likely to be achieved with each option? 3. What harms are important to those who will be affected and which harms are likely to arise with each option? 4. What are the local costs of each option and is there local evidence about their cost-effectiveness? 5. What adaptations might be made to any given option and could they alter its benefits, harms and costs? 6. Which stakeholder views and experiences might influence an option's acceptability and its benefits, harms, and costs? [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
35. SUPPORT Tools for evidence-informed health Policymaking (STP) 4: Using research evidence to clarify a problem.
- Author
-
Lavis, John N., Wilson, Michael G., Oxman, Andrew D., Lewin, Simon, and Fretheim, Atle
- Subjects
HEALTH policy ,PERIODICALS ,PUBLICATIONS ,MEDICAL care ,DECISION making - Abstract
Policymakers and those supporting them often find themselves in situations that spur them on to work out how best to define a problem. These situations may range from being asked an awkward or challenging question in the legislature, through to finding a problem highlighted on the front page of a newspaper. The motivations for policymakers wanting to clarify a problem are diverse. These may range from deciding whether to pay serious attention to a particular problem that others claim is important, through to wondering how to convince others to agree that a problem is important. Debates and struggles over how to define a problem are a critically important part of the policymaking process. The outcome of these debates and struggles will influence whether and, in part, how policymakers take action to address a problem. Efforts at problem clarification that are informed by an appreciation of concurrent developments are more likely to generate actions. These concurrent developments can relate to policy and programme options (e.g. the publication of a report demonstrating the effectiveness of a particular option) or to political events (e.g. the appointment of a new Minister of Health with a personal interest in a particular issue). In this article, we suggest questions that can be used to guide those involved in identifying a problem and characterising its features. These are: 1. What is the problem? 2. How did the problem come to attention and has this process influenced the prospect of it being addressed? 3. What indicators can be used, or collected, to establish the magnitude of the problem and to measure progress in addressing it? 4. What comparisons can be made to establish the magnitude of the problem and to measure progress in addressing it? 5. How can the problem be framed (or described) in a way that will motivate different groups? [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
36. SUPPORT Tools for evidence-informed health Policymaking (STP) 14: Organising and using policy dialogues to support evidence-informed policymaking.
- Author
-
Lavis, John N., Boyko, Jennifer A., Oxman, Andrew D., Lewin, Simon, and Fretheim, Atle
- Subjects
HEALTH policy ,DECISION making ,STAKEHOLDERS ,MALARIA treatment - Abstract
Policy dialogues allow research evidence to be considered together with the views, experiences and tacit knowledge of those who will be involved in, or affected by, future decisions about a high-priority issue. Increasing interest in the use of policy dialogues has been fuelled by a number of factors: 1. The recognition of the need for locally contextualised 'decision support' for policymakers and other stakeholders 2. The recognition that research evidence is only one input into the decision-making processes of policymakers and other stakeholders 3. The recognition that many stakeholders can add significant value to these processes, and 4. The recognition that many stakeholders can take action to address high-priority issues, and not just policymakers. In this article, we suggest questions to guide those organising and using policy dialogues to support evidence-informed policymaking. These are: 1. Does the dialogue address a high-priority issue? 2. Does the dialogue provide opportunities to discuss the problem, options to address the problem, and key implementation considerations? 3. Is the dialogue informed by a pre-circulated policy brief and by a discussion about the full range of factors that can influence the policymaking process? 4. Does the dialogue ensure fair representation among those who will be involved in, or affected by, future decisions related to the issue? 5. Does the dialogue engage a facilitator, follow a rule about not attributing comments to individuals, and not aim for consensus? 6. Are outputs produced and follow-up activities undertaken to support action? [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
37. SUPPORT Tools for evidence-informed health Policymaking (STP) 13: Preparing and using policy briefs to support evidence-informed policymaking.
- Author
-
Lavis, John N., Permanand, Govin, Oxman, Andrew D., Lewin, Simon, and Fretheim, Atle
- Subjects
HEALTH policy ,STAKEHOLDERS ,MEDICAL care ,DECISION making ,PRIMARY health care - Abstract
Policy briefs are a relatively new approach to packaging research evidence for policymakers. The first step in a policy brief is to prioritise a policy issue. Once an issue is prioritised, the focus then turns to mobilising the full range of research evidence relevant to the various features of the issue. Drawing on available systematic reviews makes the process of mobilising evidence feasible in a way that would not otherwise be possible if individual relevant studies had to be identified and synthesised for every feature of the issue under consideration. In this article, we suggest questions that can be used to guide those preparing and using policy briefs to support evidence-informed policymaking. These are: 1. Does the policy brief address a high-priority issue and describe the relevant context of the issue being addressed? 2. Does the policy brief describe the problem, costs and consequences of options to address the problem, and the key implementation considerations? 3. Does the policy brief employ systematic and transparent methods to identify, select, and assess synthesised research evidence? 4. Does the policy brief take quality, local applicability, and equity considerations into account when discussing the synthesised research evidence? 5. Does the policy brief employ a graded-entry format? 6. Was the policy brief reviewed for both scientific quality and system relevance? [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
38. Translating research into policy: lessons learned from eclampsia treatment and malaria control in three southern African countries.
- Author
-
Woelk, Godfrey, Daniels, Karen, Cliff, Julie, Lewin, Simon, Sevene, Esperança, Fernandes, Benedita, Mariano, Alda, Matinhure, Sheillah, Oxman, Andrew D., Lavis, John N., and Lundborg, Cecilia Stålsby
- Abstract
Background: Little is known about the process of knowledge translation in low- and middle-income countries. We studied policymaking processes in Mozambique, South Africa and Zimbabwe to understand the factors affecting the use of research evidence in national policy development, with a particular focus on the findings from randomised controlled trials (RCTs). We examined two cases: the use of magnesium sulphate (MgSO4) in the treatment of eclampsia in pregnancy (a clinical case); and the use of insecticide-treated bed nets and indoor residual household spraying for malaria vector control (a public health case). Methods: We used a qualitative case-study methodology to explore the policymaking process. We carried out key informant interviews with a range of research and policy stakeholders in each country, reviewed documents and developed timelines of key events. Using an iterative approach, we undertook a thematic analysis of the data. Findings: Prior experience of particular interventions, local champions, stakeholders and international networks, and the involvement of researchers in policy development were important in knowledge translation for both case studies. Key differences across the two case studies included the nature of the evidence, with clear evidence of efficacy for MgSO4 and ongoing debate regarding the efficacy of bed nets compared with spraying; local researcher involvement in international evidence production, which was stronger for MgSO4 than for malaria vector control; and a long-standing culture of evidence-based health care within obstetrics. Other differences were the importance of bureaucratic processes for clinical regulatory approval of MgSO4, and regional networks and political interests for malaria control. In contrast to treatment policies for eclampsia, a diverse group of stakeholders with varied interests, differing in their use and interpretation of evidence, was involved in malaria policy decisions in the three countries. Conclusion: Translating research knowledge into policy is a complex and context-sensitive process. Researchers aiming to enhance knowledge translation need to be aware of factors influencing the demand for different types of research; interact and work closely with key policy stakeholders, networks and local champions; and acknowledge the roles of important interest groups. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
39. NorthStar, a support tool for the design and evaluation of quality improvement interventions in healthcare.
- Author
-
Akl, Elie A., Treweek, Shaun, Foy, Robbie, Francis, Jill, and Oxman, Andrew D.
- Subjects
COMPUTER software ,ELECTRONIC systems ,JOB performance ,MEDICAL care ,CLINICAL trials ,INTERVENTION (Social services) - Abstract
Background: The Research-Based Education and Quality Improvement (ReBEQI) European partnership aims to establish a framework and provide practical tools for the selection, implementation, and evaluation of quality improvement (QI) interventions. We describe the development and preliminary evaluation of the software tool NorthStar, a major product of the ReBEQI project. Methods: We focused the content of NorthStar on the design and evaluation of QI interventions. A lead individual from the ReBEQI group drafted each section, and at least two other group members reviewed it. The content is based on published literature, as well as material developed by the ReBEQI group. We developed the software in both a Microsoft Windows HTML help system version and a web-based version. In a preliminary evaluation, we surveyed 33 potential users about the acceptability and perceived utility of NorthStar. Results: NorthStar consists of 18 sections covering the design and evaluation of QI interventions. The major focus of the intervention design sections is on how to identify determinants of practice (factors affecting practice patterns), while the major focus of the intervention evaluation sections is on how to design a cluster randomised trial. The two versions of the software can be transferred by email or CD, and are available for download from the internet. The software offers easy navigation and various functions to access the content. Potential users (55% response rate) reported above-moderate levels of confidence in carrying out QI research-related tasks if using NorthStar, particularly when developing a protocol for a cluster randomised trial. Conclusion: NorthStar is an integrated, accessible, practical, and acceptable tool to assist developers and evaluators of QI interventions. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
40. Educational outreach to general practitioners reduces children's asthma symptoms: a cluster randomised controlled trial.
- Author
-
Zwarenstein, Merrick, Bheekie, Angeni, Lombard, Carl, Swingler, George, Ehrlich, Rodney, Eccles, Martin, Sladden, Michael, Pather, Sandra, Grimshaw, Jeremy, and Oxman, Andrew D.
- Subjects
ASTHMA in children ,SYMPTOMS ,CHILDREN'S health ,HEALTH promotion - Abstract
Background: Childhood asthma is common in Cape Town, South Africa, but is underdiagnosed by general practitioners. Medications are often prescribed inappropriately, and care is episodic. The objective of this study is to assess the impact of educational outreach to general practitioners on asthma symptoms of children in their practice. Methods: This is a cluster randomised trial with general practices as the unit of intervention, randomisation, and analysis. The setting is Mitchells Plain (population 300,000), a dormitory town near Cape Town. Solo general practitioners, without nurse support, operate from storefront practices. Caregiver-reported symptom data were collected for 318 eligible children (2 to 17 years) with moderate to severe asthma, who were attending general practitioners in Mitchells Plain. One year post-intervention follow-up data were collected for 271 (85%) of these children in all 43 practices. Practices randomised to intervention (21) received two 30-minute educational outreach visits by a trained pharmacist who left materials describing key interventions to improve asthma care. Intervention and control practices received the national childhood asthma guideline. Asthma severity was measured in a parent-completed survey administered through schools using a symptom frequency and severity scale. We compared intervention and control group children on the change in score from pre- to one-year post-intervention. Results: Symptom scores declined an additional 0.84 points in the intervention vs. control group (on a nine-point scale; p = 0.03). For every 12 children with asthma exposed to a doctor allocated to the intervention, one extra child will have substantially reduced symptoms. Conclusion: Educational outreach was accepted by general practitioners and was effective. It could be applied to other health care quality problems in this setting. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
41. Improving the use of research evidence in guideline development: 16. Evaluation.
- Author
-
Oxman, Andrew D., Schünemann, Holger J., and Fretheim, Atle
- Subjects
*PUBLIC health research , *MEDICAL care , *DOCUMENTARY evidence , *EVALUATION , *DECISION making - Abstract
Background: The World Health Organization (WHO), like many other organisations around the world, has recognised the need to use more rigorous processes to ensure that health care recommendations are informed by the best available research evidence. This is the last of a series of 16 reviews that have been prepared as background for advice from the WHO Advisory Committee on Health Research to WHO on how to achieve this. Objectives: We reviewed the literature on evaluating guidelines and recommendations, including their quality, whether they are likely to be up-to-date, and their implementation. We also considered the role of guideline developers in undertaking evaluations that are needed to inform recommendations. Methods: We searched PubMed and three databases of methodological studies for existing systematic reviews and relevant methodological research. We did not conduct systematic reviews ourselves. Our conclusions are based on the available evidence, consideration of what WHO and other organisations are doing and logical arguments. Key questions and answers: Our answers to these questions were informed by a review of instruments for evaluating guidelines, several studies of the need for updating guidelines, discussions of the pros and cons of different research designs for evaluating the implementation of guidelines, and consideration of the use of uncertainties identified in systematic reviews to set research priorities. How should the quality of guidelines or recommendations be appraised? • WHO should put into place processes to ensure that both internal and external review of guidelines is undertaken routinely. • A checklist, such as the AGREE instrument, should be used. • The checklist should be adapted and tested to ensure that it is suitable to the broad range of recommendations that WHO produces, including public health and health policy recommendations, and that it includes questions about equity and other items that are particularly important for WHO guidelines. When should guidelines or recommendations be updated? • Processes should be put into place to ensure that guidelines are monitored routinely to determine if they are in need of updating. • People who are familiar with the topic, such as Cochrane review groups, should do focused, routine searches for new research that would require revision of the guideline. • Periodic review of guidelines by experts not involved in developing the guidelines should also be considered. • Consideration should be given to establishing guideline panels that are ongoing, to facilitate routine updating, with members serving fixed periods with a rotating membership. How should the impact of guidelines or recommendations be evaluated? • WHO headquarters and regional offices should support member states and those responsible for policy decisions and implementation to evaluate the impact of their decisions and actions by providing advice regarding impact assessment, practical support and coordination of efforts. • Before-after evaluations should be used cautiously and when there are important uncertainties regarding the effects of a policy or its implementation, randomised evaluations should be used when possible. What responsibility should WHO take for ensuring that important uncertainties are addressed by future research when the evidence needed to inform recommendations is lacking? • Guideline panels should routinely identify important uncertainties and research priorities. 
This source of potential priorities for research should be used systematically to inform priority-setting processes for global research. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
42. Improving the use of research evidence in guideline development: 14. Reporting guidelines.
- Author
-
Oxman, Andrew D., Schünemann, Holger J., and Fretheim, Atle
- Subjects
*PUBLIC health research , *MEDICAL care , *DOCUMENTARY evidence , *PUBLIC health , *STANDARDIZATION - Abstract
Background: The World Health Organization (WHO), like many other organisations around the world, has recognised the need to use more rigorous processes to ensure that health care recommendations are informed by the best available research evidence. This is the 14th of a series of 16 reviews that have been prepared as background for advice from the WHO Advisory Committee on Health Research to WHO on how to achieve this. Objectives: We reviewed the literature on reporting guidelines and recommendations. Methods: We searched PubMed and three databases of methodological studies for existing systematic reviews and relevant methodological research. We did not conduct systematic reviews ourselves. Our conclusions are based on the available evidence, consideration of what WHO and other organisations are doing and logical arguments. Key questions and answers: There is little empirical evidence that addresses these questions. Our answers are based on logical arguments and standards put forward by other groups. What standard types of recommendations or reports should WHO use? • WHO should develop standard formats for reporting recommendations to facilitate recognition and use by decision makers for whom the recommendations are intended, and to ensure that all the information needed to judge the quality of a guideline, determine its applicability and, if needed, adapt it, is reported. • WHO should develop standard formats for full systematically developed guidelines that are sponsored by WHO, rapid assessments, and guidelines that are endorsed by WHO. • All three formats should include the same information as full guidelines, indicating explicitly what the group preparing the guideline did not do, as well as the methods that were used. • These formats should be used across clinical, public health and health systems recommendations. How should recommendations be formulated and reported? • Reports should be structured, using headings that correspond to those suggested by the Conference on Guideline Standardization or similar headings. • The quality of evidence and strength of recommendations should be reported explicitly using a standard approach. • The way in which recommendations are formulated should be adapted to the specific characteristics of a specific guideline. • Urgent attention should be given to developing a template that provides decision makers with the relevant global evidence that is needed to inform a decision and offers practical methods for incorporating the context specific evidence and judgements that are needed. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
43. Improving the use of research evidence in guideline development: 12. Incorporating considerations of equity.
- Author
-
Oxman, Andrew D., Schünemann, Holger J., and Fretheim, Atle
- Subjects
*PUBLIC health research , *MEDICAL care , *DOCUMENTARY evidence , *EQUITY (Law) , *ECONOMIC status - Abstract
Background: The World Health Organization (WHO), like many other organisations around the world, has recognised the need to use more rigorous processes to ensure that health care recommendations are informed by the best available research evidence. This is the 12th of a series of 16 reviews that have been prepared as background for advice from the WHO Advisory Committee on Health Research to WHO on how to achieve this. Objectives: We reviewed the literature on incorporating considerations of equity in guidelines and recommendations. Methods: We searched PubMed and three databases of methodological studies for existing systematic reviews and relevant methodological research. We did not conduct systematic reviews ourselves. Our conclusions are based on the available evidence, consideration of what WHO and other organisations are doing and logical arguments. Key questions and answers: We found few directly relevant empirical methodological studies. These answers are based largely on logical arguments. When and how should inequities be addressed in systematic reviews that are used as background documents for recommendations? • The following question should routinely be considered: Are there plausible reasons for anticipating differential relative effects across disadvantaged and advantaged populations? • If there are plausible reasons for anticipating differential effects, additional evidence should be included in a review to inform judgments about the likelihood of differential effects. What questions about equity should routinely be addressed by those making recommendations on behalf of WHO? • The following additional questions should routinely be considered: • How likely is it that the results of available research are applicable to disadvantaged populations and settings? • How likely are differences in baseline risk that would result in differential absolute effects across disadvantaged and advantaged populations? • How likely is it that there are important differences in trade-offs between the expected benefits and harms across disadvantaged and advantaged populations? • Are there different implications for disadvantaged and advantaged populations, or implications for addressing inequities? What context specific information is needed to inform adaptation and decision making in a specific setting with regard to impacts on equity? • Those making recommendations on behalf of WHO should routinely consider and offer advice about the importance of the following types of context specific data that might be needed to inform adaptation and decision making in a specific setting: • Effect modifiers for disadvantaged populations and for the likelihood of differential effects • Baseline risk in relationship to social and economic status • Utilization and access to care in relationship to social and economic status • Costs in relationship to social and economic status • Ethics and laws that may impact on strategies for addressing inequities • Availability of resources to address inequities What implementation strategies are likely be needed to ensure that recommendations are implemented equitably? • Organisational changes are likely to be important to address inequities. While it may only be possible to consider these in relationship to specific settings, consideration should be given to how best to provide support for identifying and addressing needs for organisational changes. In countries with pervasive inequities institutional, cultural and political changes may first be needed. 
• Appropriate indicators of social and economic status should be used to monitor the effects of implementing recommendations on disadvantaged populations and on changes in social and economic status. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
44. Improving the use of research evidence in guideline development: 8. Synthesis and presentation of evidence.
- Author
-
Oxman, Andrew D., Schünemann, Holger J., and Fretheim, Atle
- Subjects
*PUBLIC health research , *MEDICAL care , *DOCUMENTARY evidence , *DECISION making - Abstract
Background: The World Health Organization (WHO), like many other organisations around the world, has recognised the need to use more rigorous processes to ensure that health care recommendations are informed by the best available research evidence. This is the eighth of a series of 16 reviews that have been prepared as background for advice from the WHO Advisory Committee on Health Research to WHO on how to achieve this. Objectives: We reviewed the literature on the synthesis and presentation of research evidence, focusing on four key questions. Methods: We searched PubMed and three databases of methodological studies for existing systematic reviews and relevant methodological research. We did not conduct systematic reviews ourselves. Our conclusions are based on the available evidence, consideration of what WHO and other organisations are doing and logical arguments. Key questions and answers: We found two reviews of instruments for critically appraising systematic reviews, several studies of the importance of using extensive searches for reviews and determining when it is important to update reviews, and consensus statements about the reporting of reviews that informed our answers to the following questions. How should existing systematic reviews be critically appraised? • Because preparing systematic reviews can take over a year and require capacity and resources, existing reviews should be used when possible and updated, if needed. • Standard criteria, such as A MeaSurement Tool to Assess Reviews (AMSTAR), should be used to critically appraise existing systematic reviews, together with an assessment of the relevance of the review to the questions being asked. When and how should WHO undertake or commission new reviews? • Consideration should be given to undertaking or commissioning a new review whenever a relevant, up-to-date review of good quality is not available. • When time or resources are limited it may be necessary to undertake rapid assessments. The methods that are used to do these assessments should be reported, including important limitations and uncertainties and explicit consideration of the need and urgency of undertaking a full systematic review. • Because WHO has limited capacity for undertaking systematic reviews, reviews will often need to be commissioned when a new review is needed. Consideration should be given to establishing collaborating centres to undertake or support this work, similar to what some national organisations have done. How should the findings of systematic reviews be summarised and presented to committees responsible for making recommendations? • Concise summaries (evidence tables) of the best available evidence for each important outcome, including benefits, harms and costs, should be presented to the groups responsible for making recommendations. These should include an assessment of the quality of the evidence and a summary of the findings for each outcome. • The full systematic reviews, on which the summaries are based, should also be available to both those making recommendations and users of the recommendations. What additional information is needed to inform recommendations and how should this information be synthesised with information about effects and presented to committees? • Additional information that is needed to inform recommendations includes factors that might modify the expected effects, need (prevalence, baseline risk or status), values (the relative importance of key outcomes), costs and the availability of resources. 
• Any assumptions that are made about values or other factors that may vary from setting to setting should be made explicit.… [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
45. Improving the use of research evidence in guideline development: 7. Deciding what evidence to include.
- Author
-
Oxman, Andrew D., Schünemann, Holger J., and Fretheim, Atle
- Subjects
*MEDICAL care , *PUBLIC health research , *SCIENTIFIC observation , *DECISION making - Abstract
Background: The World Health Organization (WHO), like many other organisations around the world, has recognised the need to use more rigorous processes to ensure that health care recommendations are informed by the best available research evidence. This is the seventh of a series of 16 reviews that have been prepared as background for advice from the WHO Advisory Committee on Health Research to WHO on how to achieve this. Objectives: We reviewed the literature on what constitutes "evidence" in guidelines and recommendations. Methods: We searched PubMed and three databases of methodological studies for existing systematic reviews and relevant methodological research. We did not conduct systematic reviews ourselves. Our conclusions are based on the available evidence, consideration of what WHO and other organisations are doing and logical arguments. Key question and answers: We found several systematic reviews that compared the findings of observational studies with randomised trials, a systematic review of methods for evaluating bias in non-randomised trials and several descriptive studies of methods used in systematic reviews of population interventions and harmful effects. What types of evidence should be used to address different types of questions? • The most important type of evidence for informing global recommendations is evidence of the effects of the options (interventions or actions) that are considered in a recommendation. This evidence is essential, but not sufficient for making recommendations about what to do. Other types of required evidence are largely context specific. • The study designs to be included in a review should be dictated by the interventions and outcomes being considered. A decision about how broad a range of study designs to consider should be made in relationship to the characteristics of the interventions being considered, what evidence is available, and the time and resources available. • There is uncertainty regarding what study designs to include for some specific types of questions, particularly for questions regarding population interventions, harmful effects and interventions where there is only limited human evidence. • Decisions about the range of study designs to include should be made explicitly. • Great caution should be taken to avoid confusing a lack of evidence with evidence of no effect, and to acknowledge uncertainty. • Expert opinion is not a type of study design and should not be used as evidence. The evidence (experience or observations) that is the basis of expert opinions should be identified and appraised in a systematic and transparent way. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
46. Improving the use of research evidence in guideline development: 6. Determining which outcomes are important.
- Author
-
Schünemann, Holger J., Oxman, Andrew D., and Fretheim, Atle
- Subjects
*PUBLIC health research , *MEDICAL care , *MULTICULTURALISM , *HEALTH outcome assessment , *DOCUMENTATION - Abstract
Background: The World Health Organization (WHO), like many other organisations around the world, has recognised the need to use more rigorous processes to ensure that health care recommendations are informed by the best available research evidence. This is the sixth of a series of 16 reviews that have been prepared as background for advice from the WHO Advisory Committee on Health Research to WHO on how to achieve this. Objectives: We reviewed the literature on determining which outcomes are important for the development of guidelines. Methods: We searched five databases of methodological studies for existing systematic reviews and relevant methodological research. We did not conduct a complete systematic review ourselves. Our conclusions are based on the available evidence, consideration of what WHO and other organisations are doing and logical arguments. Key questions and answers: We did not find a systematic review that addresses any of the following key questions and we found limited relevant research evidence. What methods should WHO use to identify important outcomes? • Methods of outcome identification should be transparent and explicit. • The consultation process should start with identification of all relevant outcomes associated with an intervention. • Those affected, including consumers, should be involved in the selection of outcomes. • A question driven approach (what is important?) is preferable to a data driven approach (what data are at hand?) to identify important outcomes. What type of outcomes should WHO consider and how should cultural diversity be taken account of in the selection of outcomes? • Desirable (benefits, less burden and savings) and undesirable effects should be considered in all guidelines. • Undesirable effects include harms (including the possibility of unanticipated adverse effects), greater burden (e.g. having to go to the doctor) and costs (including opportunity costs). • Important outcomes (e.g. mortality, morbidity, quality of life) should be preferred over surrogate, indirect outcomes (e.g. cholesterol levels, lung function) that may or may not correlate with patient important outcomes. • Ethical considerations should be part of the evaluation of important outcomes (e.g. impacts on autonomy). • If the importance of outcomes is likely to vary across cultures, stakeholders from diverse cultures should be consulted and involved in the selection of outcomes. How should the importance of outcomes be ranked? • Outcomes should be ranked by relative importance, separated into benefits and downsides. • Information from research on values and preferences should inform the ranking of outcomes whenever possible. • If the importance of outcomes is likely to vary across cultures, ranking of outcomes should be done in specific settings. • If evidence is lacking for an important outcome, this should be acknowledged, rather than ignoring the outcome. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
47. Improving the use of research evidence in guideline development: 2. Priority setting.
- Author
-
Oxman, Andrew D., Schünemann, Holger J., and Fretheim, Atle
- Subjects
*PUBLIC health research , *MEDICAL care , *DOCUMENTATION , *EQUITY (Law) , *INFORMATION resources - Abstract
Background: The World Health Organization (WHO), like many other organisations around the world, has recognised the need to use more rigorous processes to ensure that health care recommendations are informed by the best available research evidence. This is the second of a series of 16 reviews that have been prepared as background for advice from the WHO Advisory Committee on Health Research to WHO on how to achieve this. Objectives: We reviewed the literature on priority setting for health care guidelines, recommendations and technology assessments. Methods: We searched PubMed and three databases of methodological studies for existing systematic reviews and relevant methodological research. We did not conduct systematic reviews ourselves. Our conclusions are based on the available evidence, consideration of what WHO and other organisations are doing and logical arguments. Key questions and answers: There is little empirical evidence to guide the choice of criteria and processes for establishing priorities, but there are broad similarities in the criteria that are used by various organisations and practical arguments for setting priorities explicitly rather than implicitly. What criteria should be used to establish priorities?: • WHO has limited resources and capacity to develop recommendations. It should use these resources where it has the greatest chance of improving health, equity, and efficient use of healthcare resources. • We suggest the following criteria for establishing priorities for developing recommendations based on WHO's aims and strategic advantages: • Problems associated with a high burden of illness in low- and middle-income countries, or new and emerging diseases. • No existing recommendations of good quality. • The feasibility of developing recommendations that will improve health outcomes, reduce inequities or reduce unnecessary costs if they are implemented. • Implementation is feasible, will not exhaustively use available resources, and barriers to change are not likely to be so high that they cannot be overcome. • Additional priorities for WHO include interventions that will likely require system changes and interventions where there might be a conflict in choices between individual and societal perspectives. What processes should be used to agree on priorities?: • The allocation of resources to the development of recommendations should be part of the routine budgeting process rather than a separate exercise. • Criteria for establishing priorities should be applied using a systematic and transparent process. • Because data to inform judgements are often lacking, unmeasured factors should also be considered - explicitly and transparently. • The process should include consultation with potential end users and other stakeholders, including the public, using well-constructed questions, and possibly using Delphi-like procedures. • Groups that include stakeholders and people with relevant types of expertise should make decisions. Group processes should ensure full participation by all members of the group. • The process used to select topics should be documented and open to inspection. Should WHO have a centralised or decentralised process?: • Both centralised and decentralised processes should be used. Decentralised processes can be considered as separate "tracks". • Separate tracks should be used for considering issues for specific areas, populations, conditions or concerns. The rationales for designating special tracks should be defined clearly; i.e. 
why they warrant special consideration. • Updating of guidelines could also be considered as a separate "track", taking account of issues such as the need for corrections and the availability of new evidence. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
48. Improving the use of research evidence in guideline development: introduction.
- Author
-
Oxman, Andrew D., Fretheim, Atle, and Schünemann, Holger J.
- Subjects
*PUBLIC health research , *ASSOCIATIONS, institutions, etc. , *DOCUMENTARY evidence , *HEALTH policy , *DOCUMENTATION - Abstract
In 2005 the World Health Organisation (WHO) asked its Advisory Committee on Health Research (ACHR) for advice on ways in which WHO can improve the use of research evidence in the development of recommendations, including guidelines and policies. The ACHR established the Subcommittee on the Use of Research Evidence (SURE) to collect background documentation and consult widely among WHO staff, international experts and end users of WHO recommendations to inform its advice to WHO. We have prepared a series of reviews of methods that are used in the development of guidelines as part of this background documentation. We describe here the background and methods of these reviews, which are being published in Health Research Policy and Systems together with this introduction. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
49. Rational Prescribing in Primary care (RaPP): process evaluation of an intervention to improve prescribing of antihypertensive and cholesterol-lowering drugs.
- Author
-
Fretheim, Atle, Håvelsrud, Kari, and Oxman, Andrew D.
- Subjects
HYPERTENSION ,HYPERCHOLESTEREMIA ,DISEASE management ,DECISION making in clinical medicine ,EVIDENCE-based medicine ,MEDICAL practice - Abstract
Background: A randomised trial of a multifaceted intervention for improving adherence to clinical practice guidelines for the pharmacological management of hypertension and hypercholesterolemia increased prescribing of thiazides, but detected no impact on the use of cardiovascular risk assessment tools or achievement of treatment targets. We carried out a predominantly quantitative process evaluation to help explain and interpret the trial findings. Methods: Several data sources were used, including: questionnaires completed by pharmacists immediately after educational outreach visits, semi-structured interviews with physicians subjected to the intervention, and data extracted from their electronic medical records. Multivariate regression analyses were conducted to explore the association between possible explanatory variables and the observed variation across practices for the three main outcomes. Results: The attendance rate during the educational sessions in each practice was high; few problems were reported, and the physicians were perceived as being largely supportive of the recommendations we promoted, except for some scepticism regarding the use of thiazides as first-line antihypertensive medication. Multivariate regression models could explain only a small part of the observed variation across practices and across trial outcomes, and key factors that might explain the observed variation in adherence to the recommendations across practices were not identified. Conclusion: This study did not provide compelling explanations for the trial results. Possible reasons for this include a lack of statistical power and failure to include potential explanatory variables in our analyses, particularly organisational factors. More use of qualitative research methods in the course of the trial could have improved our understanding. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
50. Improving the use of research evidence in guideline development: 9. Grading evidence and recommendations.
- Author
-
Schünemann, Holger J., Fretheim, Atle, and Oxman, Andrew D.
- Subjects
PUBLIC health research ,MEDICAL care ,STANDARDIZATION ,DOCUMENTARY evidence - Abstract
Background: The World Health Organization (WHO), like many other organisations around the world, has recognised the need to use more rigorous processes to ensure that health care recommendations are informed by the best available research evidence. This is the ninth of a series of 16 reviews that have been prepared as background for advice from the WHO Advisory Committee on Health Research to WHO on how to achieve this. Objectives: We reviewed the literature on grading evidence and recommendations in guidelines. Methods: We searched PubMed and three databases of methodological studies for existing systematic reviews and relevant methodological research. We did not conduct a full systematic review ourselves. Our conclusions are based on the available evidence, consideration of what WHO and other organisations are doing and logical arguments. Key questions and answers: Should WHO grade the quality of evidence and the strength of recommendations? • Users of recommendations need to know how much confidence they can place in the underlying evidence and the recommendations. The degree of confidence depends on a number of factors and requires complex judgments. These judgments should be made explicitly in WHO recommendations. A systematic and explicit approach to making judgments about the quality of evidence and the strength of recommendations can help to prevent errors, facilitate critical appraisal of these judgments, and can help to improve communication of this information. What criteria should be used to grade evidence and recommendations? • Both the quality of evidence and the strength of recommendations should be graded. The criteria used to grade the strength of recommendations should include the quality of the underlying evidence, but should not be limited to that. • The approach to grading should be one that has wide international support and is suitable for a wide range of different types of recommendations. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach, which is currently suggested in the Guidelines for WHO Guidelines, is being used by an increasing number of other organizations internationally. It should be used more consistently by WHO. Further developments of this approach should ensure its wide applicability. Should WHO use the same grading system for all of its recommendations? • Although there are arguments for and against using the same grading system across a wide range of different types of recommendations, WHO should use a uniform grading system to prevent confusion for developers and users of recommendations. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF