Evidence-informed policy making: The role of monitoring and evaluation

A summary of

UNICEF (2008). Bridging the gap: The role of monitoring and evaluation in evidence-based policy making. Geneva, Switzerland: UNICEF Regional Office for CEE/CIS.

How to cite this NCCMT summary:

National Collaborating Centre for Methods and Tools (2011). Evidence-informed policy making: The role of monitoring and evaluation. Hamilton, ON: McMaster University. (Updated 29 April, 2011) Retrieved from http://www.nccmt.ca/resources/search/82.

Categories: Method, Evaluate, Knowledge management, Policy development

These summaries are written by the NCCMT to provide a condensed overview of the resources listed in the Registry of Methods and Tools and to suggest how they can be used in a public health context. For more information on individual methods and tools included in the review, please consult the authors/developers of the original resources.

Relevance for Public Health
This method discusses how monitoring and evaluation are used in the process of developing evidence-based policies within the aid and development sector. Specific sections of this report are relevant for public health decision-makers and policy-makers. For example, this method would be of particular interest to policy-makers who are using research and evaluation findings to examine different policy options for improving health through changes in the built environment.
Description

Evidence-based policy making has been gaining currency as one way to demonstrate accountability in policy decisions. Using strong evidence can make a difference to policy making by:

  • achieving recognition of a policy issue (the first step in the policy-making process);
  • informing the design and choice of policy (to analyze the policy issue);
  • forecasting the future (forecasting models to assess how a policy can influence both short- and long-term outcomes);
  • monitoring policy implementation (to assess the expected results of a policy decision); and
  • evaluating policy impact (to measure the impact of a policy).

Monitoring and evaluation play a strategic role in the policy-making process by seeking to improve the relevance, efficiency and effectiveness of policy decisions.

This resource consists of three sections:
Part 1: Evidence-based policy making
Part 2: The strategic intent of evaluations, studies and research
Part 3: The strategic intent of data collection and dissemination

The focus of this summary statement will be on specific chapters within Parts 1 and 2 that are applicable to public health decision-making. Information outlining application within specific contexts is not included.

Implementing the Method/Tool
Steps for Using Method/Tool

Part 1: Evidence-based policy making

A. Evidence-based policy making and the role of monitoring and evaluation
Some of the shifts in evaluation that can facilitate the use of evaluation for evidence-based policy making include the following:

  • Focus on the evaluation of regional or large programs or policies, rather than small projects, to enable evaluation to have a strategic role within your organization.
  • Use a systemic approach to evaluation where policy decisions are informed by relevant, integrated monitoring and evaluation systems institutionalized within the organization.
  • Nurture participatory approaches to evaluation that provide a forum for greater dialogue among stakeholders.
  • Strengthen evaluation capacity within your organization.
  • Institutionalize monitoring and evaluation systems with quality standards.

What is evidence-based policy? (p. 27)
The degree to which the policy-making process uses evidence lies on a continuum.

  • Evidence-based policy: uses the best available evidence to help planners make better-informed decisions. Evidence may include information from integrated monitoring and evaluation systems, academic research, historical experience and 'good practice' information. It is recognized that not all sources of evidence are sound enough to inform policy making.
  • Opinion-based policy: relies heavily on the selective use of research evidence or the views of individuals or groups based on particular ideological viewpoints, values, etc.
  • Evidence-influenced policy: recognizes that policy making is an inherently political process, and that decision-makers may not be able to translate evidence into policy options according to quality standards due to constraints.

The nature of evidence (p. 29-32)
Different types of evidence are used in the policy-making process: systematic reviews, single studies and evaluations, pilot and case studies, expert advice and Internet information.

The following types of research and evaluation are used in policy making (p. 32-33):

  • Instrumental use: Research feeds directly into decision making for policy and practice.
  • Conceptual use: Research and evaluation influence the understanding of a situation and provide new ways of thinking and insights into different policy options.
  • Mobilization of support: Research and evaluation are used to justify a particular course of action or inaction.
  • Wider influence: Evidence has an influence beyond the organization or events currently taking place. Although this form of influence is rare, knowledge influencing civil society this way can lead to large-scale shifts in thinking and action.

Knowledge as power? The need for evidence-based policy options

Various factors influence the policy-making process (see Figure 5 on p. 33). Different people use evidence in different ways in the policy process. Policy-makers use information to:

  • show the effectiveness of policy and the relationship between the risks and benefits;
  • ascertain the acceptability of a policy to stakeholders; and
  • develop consensus among divergent interests and stakeholders.

The following factors influence policy making and policy implementation:

  • practice of political life
  • resources
  • experience

Evidence for policy has three components:

  • hard data (research, evaluation)
  • analytical argumentation (which places hard data within a wider context)
  • stakeholder opinion

Factors that facilitate the use of evidence in policy include (p. 36):

  • timely, relevant and clear research and evaluation with sound methodology
  • results that are congruent with existing ideologies, and that are convenient and feasible
  • policy-makers who believe evidence can act as an important counterbalance to expert opinion
  • strong advocates for research and evaluation findings
  • partnerships between policy-makers, decision-makers and researchers in generating evidence
  • strong implementation findings
  • implementation that is reversible if needed


Evidence into practice: increasing the uptake of evidence in both policy and practice
Key issues to address in developing an integrated monitoring and evaluation strategy include:

  • What research and evaluation designs are appropriate for specific research questions? How do you know when the methodology is sound?
  • What is an appropriate balance between primary (new studies) and secondary research (analysis of existing data)?
  • How can you balance the need for rigour with the need for timely findings of practical relevance?
  • What approaches can you use to identify knowledge gaps? How should you prioritize the gaps?
  • How should research and evaluation be commissioned and managed to fill identified knowledge gaps?
  • How can research and evaluation capacity be developed to increase the availability of research-based information?
  • How can you manage the tension between the desirability of independent, neutral researchers and evaluators and the need for close partnerships between decision-makers and evaluators?
  • How should you communicate evidence? How can policy-makers and decision-makers be involved in research and evaluation to ensure that findings are more readily applied?

Other suggested strategies include the following:

  • securing appropriate 'buy-in', so that policy-makers and decision-makers take ownership of an initiative
  • improving the dialogue between policy-makers and researchers and evaluators
  • matching strong demand with a good supply of appropriate evidence
  • improving the understandability of evidence
  • using effective evidence dissemination and incentives

B: The Relationship between Evaluation and Politics
Definitions of evaluation and politics
(p. 47-50)
Evaluation is the process of determining the merit, worth or value of something. It has two purposes: to provide information and to render judgment. Seeking information links evaluation to research, while providing value judgments or evaluative conclusions links it to politics. Both evaluation and politics are concerned with values, value judgments and value conflicts in public life. Politics is the authoritative allocation of values for a society, which involves making moral decisions about what is good and bad.

Evaluation researchers' views on the links between evaluation and politics (p. 50-54)
Evaluation and politics are linked in the following ways:

  1. The policies and programs that evaluation deals with are the result of political decisions.
  2. Evaluation feeds into decision making by providing information for political decisions (such as to assess or justify the need for a new program).
  3. Evaluation is used for policy implementation (to ensure a policy is being implemented in a cost-effective or effective way).
  4. Evaluation has a political stance by making implicit political statements (e.g., challenging the legitimacy of existing programs) and serves as a tool for critical inquiry.
  5. Evaluation supports accountability in decision making (to determine the effectiveness of a program and the need to sustain, change or end the program).
  6. Politics influences evaluation design, process and use of findings (see also p. 56).
  7. Evaluators can be collaborators or advocates of decision-makers, the program or policy itself, or the clients of the program. Evaluators can represent the voice of vulnerable stakeholders who are often not included in evaluation design or implementation. In this way, evaluation serves to build capacity within organizations and communities.

This document provides insight into different viewpoints of the evaluator—as a value-neutral evaluator, as a value-sensitive evaluator and as a value-critical evaluator (p. 55-63).

C: Monitoring and Evaluation and the Knowledge Function
This chapter situates monitoring and evaluation in the wider context of knowledge management as an element of organizational learning and performance strengthening.

The knowledge function
The knowledge function is concerned with the acquisition, organization, production, communication and use of knowledge within organizations and beyond their boundaries. Within the knowledge function, knowledge management refers to the management activities supporting all of these steps, which seek to enhance the organization, integration, sharing and delivery of knowledge. The knowledge function consists of these steps (see diagram on p. 76):

  • strategic knowledge planning
  • knowledge acquisition
  • organization and storage of information
  • generation of knowledge products
  • communication and exchange of knowledge
  • application and use of knowledge — information from evaluations needs to be adapted in order to be implemented

Linking knowledge and action (p. 84-86)
These approaches can be used to enhance the usefulness of knowledge from monitoring and evaluation sources within organizational knowledge systems:

  1. Adopting processes for critical review and quality assessment to ensure that users of monitoring and evaluation data can readily assess the relevance of information to different issue areas, the soundness of the underlying program, the validity or applicability of lessons learned, and the accessibility of data and reference materials.
  2. Increasing the use of pilot approaches to increase the validity and credibility of knowledge from monitoring and evaluation. Pilot projects (to test the effectiveness of a program or policy) usually involve a higher standard of monitoring and evaluation, including baseline measurement, periodic assessment and the use of control or comparison groups.
  3. Strengthening platforms for the organization, presentation and communication of knowledge, including monitoring and evaluation data. There are opportunities to integrate qualitative data with other information sources and to promote analysis of datasets and evaluation findings.
  4. Developing capacity for knowledge impact evaluation to increase understanding of the effectiveness of a knowledge translation strategy, and the impact of knowledge use both within and outside of an organization.

D: Ten Steps to a Results-based Monitoring and Evaluation System
There has been a shift in monitoring and evaluation away from implementation-based approaches to results-based strategies. Results-based systems answer the 'so what?' question. It is not enough to implement programs and assume successful implementation means that there have been actual improvements in health outcomes. This section is a summary of the book, Ten Steps to a Results-Based Monitoring and Evaluation System by Kusek and Rist (2004).

The ten steps to building a performance-based monitoring and evaluation system (p. 103-105)

  • Step 1 – Conduct a readiness assessment: determine the capacity and willingness within organization(s) to develop a results-based monitoring and evaluation system. The assessment addresses such issues as the presence or absence of champions, the barriers to building a system, etc.
  • Step 2 – Agree on outcomes to monitor and evaluate: ensure outcomes come from strategic priorities.
  • Step 3 – Develop key indicators to monitor outcomes: assess the degree to which outcomes are being achieved. Both political and methodological issues in creating credible indicators are critical.
  • Step 4 – Gather baseline data on indicators: assess initial conditions.
  • Step 5 – Plan for improvements; set realistic targets: set intermediate goals since most outcomes are long term, complex and not quickly achieved.
  • Step 6 – Monitor for results: establish data collection, analysis and reporting guidelines, establish means of quality control, etc.
  • Step 7 – Evaluate information to support decision making: use evaluation studies throughout this process to assess results and movement toward outcomes.
  • Step 8 – Analyze and report findings: determine what findings are to be reported, in what format and at what intervals.
  • Step 9 – Use the findings: get the information to the appropriate users in a timely way so that they can consider the findings in their management of a program or policy.
  • Step 10 – Sustain the monitoring and evaluation system: implement a long-term process including building and maintaining elements of a sustainable system.

Part 2: The strategic intent of the evaluation function

E: Enhancing the use of evaluations for evidence-based policy making
Although considerable resources are devoted to program evaluation, the use of evaluation findings remains low. There is widespread concern both about the inability of evaluation to influence decision making in a significant way and about the misuse of evaluation findings (e.g., reliance on poorly designed evaluation studies).

Defining evaluation use

Assessing evaluation use involves examining the following:

  • Evaluation use: how evaluation findings and recommendations are used by policy-makers, decision-makers, practitioners, etc.
  • Evaluation influence: how the evaluation has influenced decisions and actions.
  • The consequences of the evaluation: how the process of conducting the evaluation, its findings and its recommendations affect the organizations involved, the policy dialogue and the intended populations. The decision to conduct an evaluation and the choice of evaluation methodology can have important impacts.

A number of measurement issues need to be addressed in the assessment of evaluation use, influence or consequences:

  • The time period over which outcomes and impacts are measured: due to time constraints, evaluators are often required to assess outcomes and impacts at an early stage of program implementation, when it may be too early to assess these accurately. An evaluability assessment conducted beforehand can determine whether outcomes or impacts could be measured at that point in time.
  • Intensity of effects: assess the scope and intensity of the evaluation's influence on decision making.
  • Reporting bias: when organizations do not acknowledge how evaluation affected their decision making (for example, because funding agencies also influence decisions), it becomes more difficult to determine the influence of the evaluation.
  • Attribution: since decision-makers receive information from multiple sources, it is difficult to determine the extent to which evaluation influences a particular decision.

Assessing the influence of an evaluation (attribution analysis) (p. 125-126)
The figure on p. 126 illustrates an approach to attribution analysis used by the World Bank Operations Evaluation Department.

Examples of effective evaluation use
(p. 127-130)

Ways to strengthen evaluation use (p. 131-138)

  • Create ownership of the evaluation.
  • Use effective communication strategies.
  • Decide what to evaluate by focusing on a few critical questions.
  • Base the evaluation on a program theory and logic model.
  • Understand the political context.
  • Appropriately time the launch and completion of the evaluation.
  • Define the appropriate evaluation methodology.
  • Use process analysis and formative evaluation strategies.
  • Build evaluation capacity.
  • Communicate the findings of the evaluation.
  • Develop a follow-up action plan as a way to promote use of evaluation findings.

Annexes: Authors' Vitae (p. 210) and Abbreviations (p. 217)

Who is involved
Several individuals would be involved in using this method to inform their processes for policy making, evaluation and monitoring. These include program directors, program managers, policy analysts, health analysts, research and evaluation specialists, epidemiologists, project specialists, team leaders and others.
Conditions for Use
Not specified
Evaluation and Measurement Characteristics
Evaluation
Information not available
Validity
Not applicable
Reliability
Not applicable
Methodological Rating
Not applicable
Method/Tool Development
Developer(s)

Marco Segone (Editor)
Marie-Helene Adrien
Michael Bamberger
Ross F. Conner
Dragana Djokovic-Papic
Attila Hancioglu
Vladica Jankovic
Dennis Jobin
Ove Karlsson Vestman
Jody Zall Kusek
Keith Mackay
Debora McWhinney
David Parker
Oliver Petrovic
Nicolas Pron
Ray Rist
Mohammed Azzedine Salah
Daniel Vadnais
Vladen Vasic
Azzedina Vukovic

Evaluation Working Papers
UNICEF

Method of Development
This method is a compilation of essays from senior officers in institutions dealing with evidence-based policy making and the role of monitoring and evaluation. It brings together knowledge and lessons learned about evidence-based policy making from officials who are policy-makers, researchers and evaluators working for national and local governments, UNICEF, the World Bank and the International Development Evaluation Association.
Release Date
2008
Contact Person/Source
Marco Segone
Senior Regional Monitoring and Evaluation Advisor
UNICEF
email: msegone@unicef.org

Resources

Title of Primary Resource
Bridging the gap: The role of monitoring and evaluation in evidence-based policy making.
File Attachment
None
Web-link
Reference
UNICEF (2008). Bridging the gap: The role of monitoring and evaluation in evidence-based policy making. Geneva, Switzerland: UNICEF Regional Office for CEE/CIS.
Type of Material
Report
Format
On-line Access
Cost to Access
None.
Language
English
Conditions for Use
Not specified
