Using Process Evaluation to Monitor Program Implementation

Process evaluation involves analyzing how program activities are delivered. Prevention practitioners seek answers to these central questions (a brief tracking sketch follows the list):

  • Who delivers the program, and how often?
  • To what extent is the program implemented as planned?
  • How is the program received by the target group and program staff?
  • What are the barriers to program delivery?
  • Are data used to make program improvements or refinements? If so, what changes are made?
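
For teams that keep delivery records electronically, the sketch below (in Python) shows one way a simple session log could support these questions. The record fields, names, and figures are hypothetical, not a prescribed format.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class SessionRecord:
        facilitator: str            # who delivered the program
        session_date: date          # supports "how often" over time
        attendees: int              # reach for this session
        delivered_as_planned: bool  # implemented as planned?
        barriers_noted: list[str] = field(default_factory=list)

    # Hypothetical log entries
    log = [
        SessionRecord("J. Rivera", date(2015, 3, 2), 18, True),
        SessionRecord("J. Rivera", date(2015, 3, 9), 14, False,
                      ["room unavailable; session shortened"]),
    ]

    planned_sessions = 12  # from the hypothetical implementation plan
    as_planned = sum(r.delivered_as_planned for r in log) / len(log)
    print(f"{len(log)} of {planned_sessions} sessions delivered; "
          f"{as_planned:.0%} delivered as planned")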

Answering these questions also allows practitioners to assess the quality of implementation, which is critical to maximizing a program’s intended benefits and demonstrating strategy effectiveness. Process evaluation provides the information needed to adjust strategy implementation and strengthen its effectiveness.

Specifically, process evaluation can be used to:

  • Paint a clear and compelling picture of the population targeted with each strategy
  • Reach important stakeholder audiences
  • Provide data for program improvement efforts
  • Distribute information through as many channels as possible to reach the target audience

Why Not Just Look at Results?

Outcome evaluation looks at results. It measures the direct effects of program activities on targeted recipients, such as the degree to which a program increased knowledge about the use of alcohol, tobacco, and other drugs. But results don’t tell the whole story. An evaluation that focuses only on outcomes is sometimes called a “black box” evaluation because it shows what a program produced without examining how the program actually operated.

Disappointing outcome evaluation results can frequently be illuminated by examining how the program was implemented, the number of clients served, dropout rates, and how clients experienced the program. Those same kinds of questions can also explain positive evaluation results. (You can’t take credit for positive results if you can’t show what caused them.) Outcome evaluation alone, without a process evaluation component, won’t provide information about why a program did or didn’t work.
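
Two of the simplest process measures mentioned above, clients served and dropout rate, can be computed directly from enrollment records. Below is a minimal sketch in Python; the record format and numbers are hypothetical:

    # Hypothetical enrollment records: (participant_id, completed_program)
    records = [("p01", True), ("p02", True), ("p03", False),
               ("p04", True), ("p05", False)]

    clients_served = len(records)
    dropouts = sum(1 for _, completed in records if not completed)
    dropout_rate = dropouts / clients_served

    print(f"Clients served: {clients_served}")   # 5
    print(f"Dropout rate: {dropout_rate:.0%}")   # 40%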

Defining Quality

Quality implementation means that the implementers of each strategy have:

  • Ensured that the strategy matches the cultural, developmental, and gender characteristics of the population
  • Received training or technical assistance to support appropriate implementation of the intervention
  • Worked with the program developer, policy expert, or evaluator to understand the core components (the elements most responsible for demonstrated outcomes)
  • Assessed the need for any adaptations to the strategy, especially to its core components, to meet the particular needs of the target population
  • Sought input from the program developer about planned adaptations to ensure they are consistent with the core components
  • Planned any necessary adaptations to the target population, program content or materials, or delivery setting or timeframe to preserve the integrity of implementation
  • Sought to deliver the program’s core components with fidelity whenever possible
  • Tracked implementation, including all planned and unanticipated adaptations, through process evaluation to inform outcome evaluation findings
  • Used process evaluation data to inform and strengthen implementation when the outcome evaluation did not show the desired results

Process evaluation measures should be designed to assess how well the implementers adhered to those items.
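
One way to turn these items into a process evaluation measure is a simple adherence checklist, scored as the share of items the team can document. The sketch below is in Python; the item keys and responses are hypothetical, not a standard instrument:

    # Hypothetical checklist keyed to the quality-implementation items above
    checklist = {
        "population_match_assessed": True,
        "training_or_ta_received": True,
        "core_components_identified": True,
        "adaptation_needs_assessed": False,
        "developer_consulted_on_adaptations": False,
        "adaptations_planned": True,
        "core_components_delivered_with_fidelity": True,
        "adaptations_tracked": True,
        "process_data_used_for_improvement": False,
    }

    met = sum(checklist.values())
    print(f"Documented {met} of {len(checklist)} items ({met / len(checklist):.0%})")
    print("Follow up on:", ", ".join(k for k, v in checklist.items() if not v))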

What to Do with Disappointing Process Data

If process data suggest the program was not effective, practitioners should work with project stakeholders to:

  • Determine if the program was implemented with quality
  • Determine if the strategy was appropriate for the population’s needs (review needs assessment data and program theory)
  • Review evaluation strategies and measures to ensure they are appropriate and valid