Selecting an Appropriate Evaluation Design

The first step in any project is to develop a plan for getting the work done. The plan for an evaluation project is called the “design.”

All too often, prevention practitioners launch into their evaluation without coming up with a plan. They start thinking about how to collect data before determining what to collect. This is usually accompanied by the phrase, “Let’s do a survey!” But before choosing methods, practitioners need to back up.

Designing an evaluation is a process that starts out general and becomes progressively more specific. The first step is to clarify the purpose of the evaluation. The purpose leads to evaluation questions, the questions determine what information and data are needed, and the data requirements in turn point to appropriate methods. The methods come last, not first.

One of the primary purposes of evaluation is to determine whether the program or intervention had the desired effect. A classic series of research studies found that individuals’ behavior and workers’ performance can improve simply because they know they are part of a study. The “Hawthorne Effect,” as this phenomenon is known, is a thorn in the side of evaluators.

To account for the Hawthorne Effect, some type of comparison generally needs to be made to confirm that the change was the result of the intervention and not of the attention participants received. The different methods of comparison fall along a continuum of evaluation rigor.

Experimental Design

Typically, the most rigorous evaluation approach is an experimental design, in which participants are randomly assigned to a program group or a control group. Participants in the program group receive the intervention, while those in the control group receive either the existing program or, in some cases, no program. Some type of pre/post assessment is administered to both groups, and the results are compared to determine whether there were differences between the groups. This method provides the greatest support for ruling out plausible alternative explanations.
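To make the comparison concrete, the following is a minimal sketch (in Python) of the logic of an experimental design: random assignment to a program or control group, a pre/post assessment for both, and a comparison of the average change between groups. All participant data and the assumed program effect are hypothetical, invented purely for illustration.

```python
# Illustrative sketch only: hypothetical participants, scores, and effect size.
import random
from statistics import mean

random.seed(0)

# A hypothetical roster of participants, each with a baseline (pre) score.
participants = [{"id": i, "pre": random.gauss(50, 10)} for i in range(200)]

# Random assignment to groups is what makes the design experimental.
for p in participants:
    p["group"] = random.choice(["program", "control"])

# Hypothetical post scores; here the program is assumed to add a small benefit.
for p in participants:
    assumed_effect = 5 if p["group"] == "program" else 0
    p["post"] = p["pre"] + assumed_effect + random.gauss(0, 5)

# Compare the average pre-to-post change between the two groups.
change = {
    g: [p["post"] - p["pre"] for p in participants if p["group"] == g]
    for g in ("program", "control")
}
print("Mean change, program group:", round(mean(change["program"]), 2))
print("Mean change, control group:", round(mean(change["control"]), 2))
```

In a real evaluation, the comparison would also involve a statistical test to judge whether the difference between groups is larger than chance alone would explain.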

Quasi-experimental Design

Similar in structure to the experimental design, the quasi-experimental design does not use random assignment to place participants in the program or comparison group. A quasi-experimental design is frequently used when there are not enough participants available to assign randomly to both a program and a control group. As a result, a significant challenge in a quasi-experimental design is identifying an appropriate comparison group and then collecting data from it. To score well on most federal lists of evidence-based programs, it is important, at a minimum, to use a quasi-experimental design.
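As a rough illustration of how a quasi-experimental comparison works, here is a minimal sketch (in Python) in which one site receives the program and a similar site serves as the comparison group, with no random assignment; the pre-to-post change at the program site is compared against the change at the comparison site. All scores are hypothetical.

```python
# Illustrative sketch only: hypothetical pre/post scores from two sites.
from statistics import mean

# One site receives the program; a similar site serves as the comparison group.
program_site = {"pre": [48, 52, 47, 55, 50], "post": [56, 60, 54, 62, 58]}
comparison_site = {"pre": [49, 51, 50, 53, 48], "post": [51, 53, 52, 54, 50]}

def mean_change(site):
    """Average pre-to-post change in scores for one site."""
    return mean(site["post"]) - mean(site["pre"])

# The program's estimated effect is the change at the program site
# over and above the change observed at the comparison site.
estimated_effect = mean_change(program_site) - mean_change(comparison_site)
print("Program site change:     ", round(mean_change(program_site), 2))
print("Comparison site change:  ", round(mean_change(comparison_site), 2))
print("Estimated program effect:", round(estimated_effect, 2))
```

Because the groups were not randomly assigned, this kind of comparison is only as convincing as the match between the two sites, which is why identifying a suitable comparison group is such a significant challenge.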

Factors to Consider in Choosing an Evaluation Design

The Purpose of the Evaluation

The purpose of the evaluation varies greatly from program to program. As discussed previously, evaluation can be defined as the systematic collection of information about program activities, characteristics, and outcomes to reduce uncertainty, improve effectiveness, and make decisions. Most frequently, the emphasis is on whether or not the program had the intended effect. But information could also be collected that focuses only on the number of individuals served, or evaluation data may be collected for marketing purposes. Each of these purposes would require a different evaluation design and different evaluation skills.

What Will Be Evaluated

Is it the entire project or only certain components? For example, if you are part of a coalition, you may think about evaluating individual coalition initiatives like the provision of after-school activities. But don’t forget that you may also want to evaluate changes in the coalition over time—things like growth of coalition membership, formal agreements with key community organizations—and the impact of the coalition on the overall community.

Who Wants to Know What

Keep all of your stakeholders in mind. Program providers may want to know what’s working and what isn’t. Funders may want to know whether the program is cost-effective and supported in the community; they may also require specific measures. Communication among funders, project staff, and evaluators from the beginning is essential to ensure the necessary data are identified, collected, analyzed, and reported in a manner that everyone understands.

When Results Are Needed

An evaluation is often bound by schedules and deadlines that are beyond your control. Think about school calendars and funding cycles as examples. If your reporting needs are short-term, don’t ask questions that require long-term follow-up. Process information is generally needed quickly. Short-term outcome results often need to be reported back in a timely manner (usually within 6 to 12 months of program implementation), while more long-term results are typically not available until sometime after program completion (often 3 to 5 years).

What Will Be Done with the Evaluation Results

Think broadly about the utility of your results. You can use your data not only to meet funding requirements, but also to garner community and school support, inform future planning or programming, support community stakeholders, and so on. But keep in mind that the degree to which the findings can be attributed to the program depends largely on the research design used.

Available Resources for the Evaluation (Time, Money, People)

Don’t ask questions that you can’t afford to answer. Available resources can influence an evaluation plan more than any other single factor.

Learn about Step 5 of SAMHSA's Strategic Prevention Framework (SPF): Evaluation.
