The 10 sections of the resource provide a number of useful tools to help users plan, implement and evaluate programs.
A) Needs and Resources: What are the underlying needs and conditions in the community?
Data Collection Plan Tool (p. 22)
A clear data collection plan helps ensure that data collection stays on track. The key components of a data collection plan include the following (a brief sketch follows the list):
- risk/protective factors to be assessed
- indicators used for measurement
- how/where to get the data
- persons responsible
- dates by which key tasks are to be completed
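To make these components concrete, here is a minimal sketch in Python of how such a plan might be tracked so that overdue tasks surface automatically. The field names, factors, dates and responsibilities are hypothetical illustrations, not prescribed by the resource.

```python
from datetime import date

# Each record mirrors the five components listed above.
# All factor names, sources, people and dates are hypothetical.
data_collection_plan = [
    {
        "risk_protective_factor": "youth substance use (risk)",
        "indicator": "past-30-day use, self-reported",
        "source": "school climate survey",
        "person_responsible": "evaluation coordinator",
        "due_date": date(2024, 10, 15),
    },
    {
        "risk_protective_factor": "family attachment (protective)",
        "indicator": "parent-child communication scale",
        "source": "caregiver interviews",
        "person_responsible": "program staff",
        "due_date": date(2024, 11, 30),
    },
]

# Flag any collection task whose completion date has passed.
overdue = [item for item in data_collection_plan if item["due_date"] < date.today()]
for item in overdue:
    print(f"Overdue: {item['indicator']} ({item['person_responsible']})")
```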
B) Goals: What are the goals, target populations and objectives (i.e., desired outcomes) that will address the needs and change the underlying conditions?
The ABCDE Method for Writing Objectives (p. 36)
- A - Audience (the population for whom the desired outcome is intended)
- B - Behaviour (a clear statement of expected behaviour change)
- C - Condition (the time frame for implementation and measurement)
- D - Degree (how much change is expected)
- E - Evidence (how the change will be measured)
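For example, a hypothetical objective written with this method might read: "By the end of the 12-week program (Condition), 75% (Degree) of participating parents (Audience) will report more frequent use of positive discipline strategies (Behaviour), as measured by a pre-post parenting survey (Evidence)."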
Goals Tool (p. 40)
This tool organizes program goals, objectives and intended populations, linking each goal to a specific objective and population.
C) Capacities: What organizational capacities are needed to implement the program?
Capacity Tools (pp. 75-76)
The developers outline a four-step process to assess organizational capacity across human, technical, fiscal and structural/linkage capacities:
- identifying the specific aspects of capacity to be assessed
- determining the requirements for each capacity assessment item
- determining whether capacity is sufficient
- choosing strategies to enhance capacity
D) Plan: What is the plan for this program?
Planning Tool (pp. 87-92)
The Planning Tool helps users link specific program components to identified objectives and identify specific tasks within each component. It outlines the following elements of a program plan:
- providing a program name and summary
- identifying program components
- identifying anticipated outputs
- planning each program component
- identifying target groups
- choosing collaboration partners
- maximizing program integration
- identifying implementation barriers
- completing summary checklists
E) Process: How will the quality of program implementation be assessed?
Process Evaluation Planning Tool (p. 96)
Several process evaluation questions are listed with their corresponding data collection activities:
- Did the program follow the basic plan for service delivery?
- What are the program characteristics?
- What are participants' characteristics?
- What is the participants' satisfaction with the program?
- What is the staff's perception of the program?
- What were the individual participants' dosages of the program? (see the sketch after this list)
- What were the program components' levels of quality?
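As one way to picture the dosage question, the following sketch (Python, with invented attendance figures) computes each participant's dosage as the share of offered sessions attended; the resource itself does not prescribe this calculation.

```python
# Hypothetical attendance records: sessions attended out of sessions offered.
sessions_offered = 10
attendance = {"participant_01": 9, "participant_02": 4, "participant_03": 10}

# Dosage: the proportion of the program each participant actually received.
for participant, attended in attendance.items():
    dosage = attended / sessions_offered
    print(f"{participant}: {attended}/{sessions_offered} sessions (dosage = {dosage:.0%})")
```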
Implementation Tool (pp. 110-112)
The Implementation Tool helps users determine whether the program was implemented according to the program plan. It elaborates on the questions posed in the Process Evaluation Planning Tool, providing information on the following:
- monitoring component outputs
- planning activities for each program component
- recruitment of target groups
- collaboration partners
- program integration
F) Outcomes: How well did the program work?
Comparisons of the Common Evaluation Designs (p. 123)
This table lays out the advantages and disadvantages of different outcome evaluation designs, useful when deciding which design is suited to your program. The developers recommend a pre-post design with a comparison group; if that is not feasible, they suggest a simple pre-post design (the comparison-group logic is sketched after the list below). The following evaluation designs are assessed:
- post only (measure outcomes after the program has been delivered)
- pre-post (measure change by comparing baseline levels to outcomes after the program has been delivered)
- retrospective pre-post (participants rate themselves at the end of the program on pre- and post-levels)
- pre-post with comparison group (to compare changes in participants relative to a similar group who did not receive the intervention)
- pre-post with control group (participants are randomly assigned to a control group or intervention group from the same target population)
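To show why the developers favour a comparison group, here is a minimal sketch (Python, with invented scores) of the design's core computation: the mean pre-post change in the program group minus the mean change in the comparison group. This illustrates the logic of the design only; the resource does not specify an analysis method here.

```python
from statistics import mean

# Hypothetical pre/post scores for a pre-post design with a comparison group.
program_pre, program_post = [12, 15, 11, 14, 13], [18, 20, 15, 19, 17]
comparison_pre, comparison_post = [13, 14, 12, 15, 12], [14, 15, 12, 16, 13]

# Mean pre-post change within each group.
program_change = mean(post - pre for pre, post in zip(program_pre, program_post))
comparison_change = mean(post - pre for pre, post in zip(comparison_pre, comparison_post))

# Subtracting the comparison group's change removes whatever change
# would have happened without the program.
print(f"Program group change:     {program_change:+.1f}")
print(f"Comparison group change:  {comparison_change:+.1f}")
print(f"Estimated program effect: {program_change - comparison_change:+.1f}")
```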
Data Collection Methods at a Glance (pp. 124-125)
This table highlights strengths and weaknesses of data collection methods, both quantitative and qualitative. While quantitative methods typically seek to answer who, what, where and how much, qualitative methods answer why and how.
Linking Design, Collection and Analysis at a Glance (p. 131)
This section links each evaluation design (post only, pre-post, and pre-post with a comparison or control group) to an appropriate data collection method, data analysis method and number of groups involved.
Outcome Evaluation Tool (pp. 133-135)
The Outcome Evaluation Tool is designed to organize the following information (a worked example follows the list):
- summary of the needs assessment findings including relevant risk and protective factors
- the target group (including numbers)
- measures used
- evaluation design selected
- sample size (the number of people assessed in the evaluation)
- data analysis method
- pre- and post-scores and their differences
- interpretation of the results
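The last three items lend themselves to a small worked example. The sketch below (Python with SciPy; the scores are invented) computes pre- and post-score differences and runs a paired t-test, one common analysis method for a simple pre-post design; the resource does not mandate any particular test.

```python
from statistics import mean
from scipy.stats import ttest_rel  # paired-samples t-test

# Hypothetical pre- and post-program scores for the same eight participants.
pre = [10, 13, 9, 12, 11, 14, 10, 12]
post = [14, 15, 12, 15, 13, 17, 12, 16]

differences = [b - a for a, b in zip(pre, post)]
print(f"Mean pre score:  {mean(pre):.1f}")
print(f"Mean post score: {mean(post):.1f}")
print(f"Mean difference: {mean(differences):+.1f}")

# Paired t-test: is the pre-post change larger than chance alone would suggest?
result = ttest_rel(post, pre)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```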
G) Continuous Quality Improvement: How will continuous quality improvement strategies be incorporated?
CQI Tool (p. 141)
Summarizing and applying what was learned from planning, implementing and evaluating the program forms the basis of a continuous quality improvement strategy. The CQI Tool summarizes findings from the earlier steps and helps determine how this information will inform changes to program implementation. The following questions are listed:
- Have the needs of the target group/resources in the community changed?
- Have the goals/desired outcomes/target population changed?
- Are new and improved evidence-based/best practice technologies available?
- Does the program continue to fit with your agency (both philosophically and logistically) and your community?
- Have the resources available to address the identified needs changed?
- How well did you plan? What suggestions do you have for improvement?
- How well was the program implemented? How well did you follow the plan you created? What were the main conclusions from the process evaluation?
- How well did the program reach its outcomes? What were the main conclusions from the outcome evaluation?