Center for the Application of Prevention Technologies (CAPT)

Process and Outcomes Evaluation

Prevention program evaluations must include details about how program processes were carried out, as well as positive and negative outcomes.

Process Evaluation

Process evaluation examines how prevention program activities are delivered. It helps practitioners determine how closely the intervention was implemented as planned and how well it reached the target population.

Process evaluation can also be used to monitor and document prevention program operations. It can answer questions such as:

  • Who received services?
  • What type of services did they receive?
  • How much or how long did they receive these services?
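As a minimal sketch of how these questions can be answered from program records, the tally below uses Python's standard library. The record fields and figures are hypothetical, not a standard data schema:

```python
from collections import Counter

# Hypothetical service records; field names are illustrative only.
records = [
    {"participant": "A", "service": "counseling", "sessions": 4},
    {"participant": "B", "service": "counseling", "sessions": 2},
    {"participant": "B", "service": "education", "sessions": 6},
    {"participant": "C", "service": "education", "sessions": 3},
]

# Who received services? (unique participants)
participants = {r["participant"] for r in records}

# What type of services did they receive? (count per service type)
service_counts = Counter(r["service"] for r in records)

# How much service did they receive? (total sessions per participant)
dosage = Counter()
for r in records:
    dosage[r["participant"]] += r["sessions"]

print(len(participants))   # number of people served
print(service_counts)      # services delivered, by type
print(dosage)              # sessions received, by participant
```

Even a simple tabulation like this goes beyond a single head count: it distinguishes reach (who was served), service mix (what was delivered), and dosage (how much each person received).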

Measuring Participation

Measuring participation is more than just counting people served by a program. Simple numbers don’t tell the whole story. Instead, consider documenting processes in ways that more readily describe how the intervention is, or is not, working. Appropriate methods to measure program participation include:

  • Interviews
  • Documentaries
  • Participatory photography
  • Digital storytelling (using media such as audio, music, narratives, visual images, and photographs to capture a unique story)

Measuring Fidelity

Fidelity refers to the degree to which a program is implemented as its developer intended. If a program closely adheres to the original strategy, it’s more likely to replicate the positive outcomes of the program’s initial implementation or testing.

To measure fidelity, consider the following:

  • Start with research. See who has implemented the strategy you’re interested in and where it’s been implemented most successfully. Then, from among these successful programs, select the one that is most similar to your own and use it as a model.
  • Talk to people. Find out which elements community members thought contributed most to the program’s success. Then replicate, to the best of your ability, what they did. Adopt the language they used. Work with the same group of stakeholders. Granted, you may not be able to implement the strategy in exactly the same way, but it’s better to model your approach on a similar (and successful) program than on one that is nothing like your own.

Learn more about fidelity and adaptation.

Outcome Evaluation

Outcome evaluation measures a program’s results and helps determine whether a program or strategy produced the changes it was intended to achieve. Because many environmental prevention strategies either rely on policy change or function like policies, a promising approach to evaluating their effectiveness is policy analysis: a broad approach that examines the overall impact of a policy on large populations, such as communities, states, or the nation. A comprehensive outcome evaluation will include:

  • An assessment of the impacts of each program component
  • Data from a population group
  • Choices of evaluation designs
  • A selection of comparison groups

Assessing the Impact of Individual Program Components

Unlike individual change strategies, community change strategies often include multiple interventions, each targeting its own set of risk factors or intervening variables. This can make it difficult to know which interventions are generating observed changes in a population.

It can be helpful to focus more on the success of the model, as a whole, than on individual variables. Consider the big picture: Is the model producing successful results? While it can be useful to understand how each component works, sometimes (especially within the confines of small, low-budget grants) it’s acceptable to just determine that the model works—without knowing exactly why.

Collecting Data From a Population Group

Collecting data from an entire population can be an overwhelming (or impossible) task, depending on the size of the population. That is why it’s important to use what data is already available. Look for existing data sources that describe behaviors of interest for your population group and sources that may capture changes in those behaviors. Learn more about finding epidemiological data and analyzing epidemiological data. Access SAMHSA data.

Choosing Evaluation Designs

Choosing an evaluation design for community change strategies is challenging because the unit of analysis is the population rather than the individual. Program evaluation designs typically involve assessing change among individuals before and after an intervention, or comparing an intervention/treatment group to a comparison group.

Implementing a community change strategy is often more fluid than implementing an individual change strategy. Policies, for example, are typically embedded in national, economic, political, cultural, and social structures. Often, multiple steps are involved in approval and implementation, which can sometimes take months or even years.

You may decide to use an interrupted time series design for evaluation. This design looks at trends over time, both before and after an intervention is implemented, and identifies the point at which the trend is interrupted (that is, where the observed change occurred). You can then determine whether the change in the trend came after the intervention was implemented. You could also pair an interrupted time series design with a control or comparison group, such as another, similar community. This allows you to capture any changes, over time, in the community that received the intervention versus the community that did not.
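A bare-bones version of this logic can be sketched in Python: fit the pre-intervention trend, project it into the post-intervention period, and compare the projection to what was actually observed. All numbers below are illustrative, not real surveillance data, and a real analysis would use a proper segmented regression with significance tests:

```python
def linear_fit(xs, ys):
    """Closed-form ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical monthly rate of the behavior of interest;
# the intervention begins at month 12.
pre = [50, 51, 49, 52, 51, 53, 52, 54, 53, 55, 54, 56]   # months 0-11
post = [48, 47, 46, 45, 44, 43]                          # months 12-17

a, b = linear_fit(list(range(12)), pre)

# What the pre-intervention trend alone predicts for the post period:
projected = [a + b * t for t in range(12, 18)]

# Average gap between observed and projected values = estimated change.
gap = sum(o - p for o, p in zip(post, projected)) / len(post)
print(round(gap, 1))  # a negative gap suggests a downward interruption
```

The key idea is that the pre-intervention trend serves as the counterfactual: if the observed post-intervention values fall well below (or above) the projection, the trend was interrupted around the time the intervention began.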

Adding a comparison group or community helps you determine whether your target population would have improved over time even if it had not experienced your intervention. The more similar the two groups are, the more confident you can be that your program contributed to any detected changes. But what does it mean to be similar?

No two communities are exactly alike. What matters is that the intervention and comparison communities are similar with respect to variables that may affect program outcomes, such as gender, race or ethnicity, socioeconomic status, and education. The more closely the two groups match on these variables, the more confident you can be that your intervention contributed to any detected change.

When selecting a comparison group or community, consider similarities in:

  • Demographic characteristics
  • Education systems, such as community colleges or universities
  • Economic climate (for example: agricultural community vs. industrial community)
  • Substance use problems

Of course, the comparison group or community acts as a control, so it should not receive the same environmental intervention as your community.
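One rough way to screen candidate comparison communities is to score how far each one sits from the intervention community on the variables you care about. The sketch below uses a simple mean of scaled absolute differences; the community names, variables, and figures are all hypothetical:

```python
# Profile of the intervention community on variables that may
# affect outcomes (values are illustrative).
intervention = {"median_age": 34.0, "pct_unemployed": 6.1, "pct_college": 28.0}

# Candidate comparison communities (also illustrative).
candidates = {
    "Community A": {"median_age": 35.5, "pct_unemployed": 5.8, "pct_college": 27.0},
    "Community B": {"median_age": 41.0, "pct_unemployed": 9.5, "pct_college": 15.0},
}

def dissimilarity(target, candidate):
    """Mean absolute difference, scaled by the target value of each variable."""
    return sum(abs(candidate[k] - v) / v for k, v in target.items()) / len(target)

scores = {name: dissimilarity(intervention, c) for name, c in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # the candidate most similar on the chosen variables
```

A screen like this only ranks candidates on the variables you chose to include; local knowledge (economic climate, substance use patterns, education systems) should still drive the final selection.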

Publications and Resources

Access more CAPT tools and other learning resources.

Last Updated: 04/05/2016