Research and Evaluation

Learn about theories of change, logic models, evaluation plans, and other research tools that help IECMHC teams measure program fidelity and impact.

Vision Statement on Research and Evaluation

The Center’s vision is that solid research and evaluation will help move infant and early childhood mental health consultation (IECMHC) forward as a necessary and essential service for children, families, and programs across the nation. Research and evaluation on IECMHC will be equity-informed. Data will be gathered at multiple levels and will be used to sustain programs and monitor fidelity. Evaluators will work with their community and state partners to ensure that they are measuring policy-relevant outcomes.

Additional information on working with tribal communities.

Evaluation Planning

Program developers, implementers, evaluators, and practitioners must take several steps to demonstrate the fidelity, success, and impact of IECMHC programs. Learn about the steps of the program evaluation process and the evidence base for IECMHC:

Designing an IECMHC Evaluation Plan

Theory of Change

Every strong evaluation should begin with a theory of change. A theory of change explains why a program will succeed and describes how IECMHC services are expected to affect short- and long-term outcomes. In practical terms, the exercise of developing, refining, and promoting a theory of change helps program leadership and staff articulate the “what,” “why,” and “how” of IECMHC.

Theory of Change Toolbox Resources

Use these publications and resources to develop a theory of change for your team’s program:

Logic Models

A well-designed evaluation begins with the development of a program logic model, created in collaboration with stakeholders, program evaluators, and program designers. Logic models are visual depictions of the IECMHC program’s purpose, theory of change, processes, and outcomes. The logic model states basic assumptions, the intended target populations, specific IECMHC services, and the processes needed to support the IECMHC program. Logic models are also an essential tool for guiding measurement of the program’s effects.
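The components a logic model names, such as assumptions, target populations, services, processes, and outcomes, can be captured in a simple structured form. The sketch below is a hypothetical, minimal representation in Python; the field names and example entries are illustrative and not drawn from any specific program.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal logic-model skeleton: assumptions through long-term outcomes."""
    assumptions: list[str] = field(default_factory=list)
    target_populations: list[str] = field(default_factory=list)
    services: list[str] = field(default_factory=list)    # specific IECMHC services
    processes: list[str] = field(default_factory=list)   # supports needed to run the program
    short_term_outcomes: list[str] = field(default_factory=list)
    long_term_outcomes: list[str] = field(default_factory=list)

# Illustrative (hypothetical) entries:
model = LogicModel(
    assumptions=["Consultants have regular access to classrooms"],
    target_populations=["Children ages 0-5 in licensed child care"],
    services=["Classroom-focused consultation", "Child-focused consultation"],
    processes=["Consultant training and reflective supervision"],
    short_term_outcomes=["Improved teacher-child interactions"],
    long_term_outcomes=["Reduced expulsion rates"],
)
print(len(model.services))  # 2
```

Writing the model down in a form like this makes it easier to check later that every outcome your evaluation measures traces back to a stated service or process.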

Logic Model Toolbox Resources

The following documents can also help your team develop a program logic model:

Selecting Measures for IECMHC Evaluations

Best practice requires IECMHC evaluations to collect data at multiple levels. These include program characteristics; consultant demographics; teacher, classroom, and home visitor variables; and child- and family-level outcomes. If you’re starting to plan an evaluation, ensure that your team collects data that can be disaggregated by gender, race, ethnicity, and other characteristics of the service recipients.
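As a sketch of what “disaggregatable” data looks like in practice, the snippet below groups hypothetical service-recipient records by demographic fields using only the standard library. The record structure and field names are assumptions for illustration, not a prescribed data schema.

```python
from collections import Counter

# Hypothetical service-recipient records; field names are illustrative.
records = [
    {"gender": "F", "race": "Black", "received_iecmhc": True},
    {"gender": "M", "race": "White", "received_iecmhc": True},
    {"gender": "M", "race": "Black", "received_iecmhc": False},
    {"gender": "F", "race": "Latino", "received_iecmhc": True},
]

# Disaggregate: count service receipt within each gender-by-race cell.
by_group = Counter(
    (r["gender"], r["race"]) for r in records if r["received_iecmhc"]
)
for group, n in sorted(by_group.items()):
    print(group, n)
```

The key design point is collecting the demographic fields alongside the outcome in the first place; you cannot disaggregate afterward what was never recorded.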

It’s essential to examine the fidelity of your IECMHC model. Fidelity is defined as how closely a program follows the intended protocol or procedures. Data systems are also critical to conversations about fidelity, program results, and future funding. Data can highlight areas of the state where out-of-school discipline is disproportionate and may need to be addressed through better access to IECMHC services.
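One common way to quantify fidelity is an adherence checklist: the share of protocol components a consultant completed. A minimal sketch, assuming a simple yes/no checklist; the component names are hypothetical, not part of any published IECMHC protocol.

```python
def fidelity_score(checklist: dict) -> float:
    """Fraction of protocol components completed (0.0 to 1.0)."""
    return sum(checklist.values()) / len(checklist)

# Hypothetical checklist for one consultation cycle.
visit = {
    "relationship_building": True,
    "classroom_observation": True,
    "action_plan_developed": True,
    "follow_up_visit": False,
}
print(fidelity_score(visit))  # 0.75
```

Aggregating such scores across visits and sites gives the kind of fidelity data that feeds the funding and program-results conversations described above.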

IECMHC is a complex intervention that can impact multiple levels of outcomes, depending on the model and approach. Individual-level outcomes associated with IECMHC include improvements in children’s behaviors, teachers’ practices, and parental stress. Program-level outcomes may be measured by population reach, service provision across sites, expulsion rates, and disproportionality across target populations. Measures selected for IECMHC should align with the model’s theory of change and logic model.

Evaluations Toolbox Resources

Use the following documents and resources to help your team select program measurements:

Equity-informed IECMHC Evaluation

As demonstrated in data on school discipline gathered by the Department of Education’s Office for Civil Rights – 2014 (PDF | 2.1 MB), there are ongoing disparities in out-of-school discipline for preschool boys and African American preschoolers. While access to IECMHC has been associated with overall reductions in expulsions from early care and education settings, it remains to be seen whether IECMHC can directly address this disproportionality. Evaluators are encouraged to collect information on race and ethnicity, gender, suspensions, and expulsions so that this equity issue can be analyzed with better data.
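A standard metric for the kind of disproportionality described above is the relative risk ratio: one group’s expulsion rate divided by the rate for all other children. The sketch below uses made-up counts purely to show the calculation; none of the numbers come from the Office for Civil Rights data.

```python
def risk_ratio(group_events, group_n, other_events, other_n):
    """Relative risk: a group's event rate divided by everyone else's."""
    group_rate = group_events / group_n
    other_rate = other_events / other_n
    return group_rate / other_rate

# Hypothetical counts: 12 expulsions among 400 enrolled children in one group,
# versus 15 expulsions among 1,600 enrolled children in all other groups.
rr = risk_ratio(12, 400, 15, 1600)
print(round(rr, 2))  # 3.2
```

A ratio near 1.0 indicates proportionate discipline; values well above 1.0 flag a disparity worth examining, which is why collecting disaggregated suspension and expulsion counts matters.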

Learn more about promoting equity and reducing disparities when engaging in IECMHC.

State Snapshots: Equity in IECMHC Evaluations – 2017 (PDF | 259 KB) describes the purposes of equity-informed evaluation, offers key recommendations, and provides snapshots of equity-informed evaluation in five states:

  • Louisiana’s IECMHC program prioritizes equity by requiring all affiliated staff, including the evaluation team, to participate in “undoing racism” seminars and training.
  • Maryland’s detailed IECMHC program monitoring system allows administrators to disaggregate data by race, gender, community, and provider. This helps ensure that communities affected by disproportionality of suspension and/or expulsion rates are receiving IECMHC services.
  • Connecticut’s statewide IECMHC information system has built-in transparency in its data collection and program monitoring. This system includes a feedback loop where mental health consultants share outcomes with consultees and families to involve them in the decision-making process for moving forward with IECMHC services.
  • Arkansas’ commitment to equity in IECMHC involves new and stronger initiatives that strengthen non-expulsion policies throughout the childcare system and will track data to monitor adherence to these new policy requirements.
  • Arizona’s evaluation of its Smart Support IECMHC program allowed evaluators to disaggregate child-level outcomes for African American and Latino children to examine changes in expulsion and suspension patterns.

Using Data for Sustainability

Strong evaluation data are often a powerful tool for communicating with stakeholders. Securing even a small amount of funds for an external evaluation can help make the case for scaling up pilot sites and securing state general revenue after a federal grant or seed money ends. Evaluators should work with their community and state partners to ensure they are measuring policy-relevant outcomes.

The video Using Data to Show Effectiveness and Promote Sustainability (three minutes) can help your team use data for sustainability. In this video, key staff from Maryland explain how they used data to demonstrate IECMHC efforts and outcomes and to craft an effective message to share with decision-makers.

IECMHC Evaluation Examples

Evaluations of IECMHC programs take on many purposes. These purposes may include:

  • Tracking programmatic outcomes in a statewide data system
  • Documenting the reach of a targeted population, such as in foster care settings
  • Fostering the use of innovative evaluation tools to demonstrate social and emotional support for children in IECMHC programs

When evaluations are rigorous, they produce credible data that can help make the case for sustaining programs and securing additional funds from sponsors.

Use the following evaluation examples to inform your own evaluation plan:

Evidence Base for IECMHC

During a SAMHSA meeting of federal agencies and experts in the field of IECMHC in September 2014, researchers identified a substantial body of evidence for the effectiveness of IECMHC across a range of outcomes. In a variety of early care, education, and home visiting settings, IECMHC has been shown to reduce behavior problems in children and to increase adults’ awareness of the need to focus on social and emotional health.

While this is welcome news, there is still need for more equity-based evaluations that examine whether IECMHC can reduce differential treatment of children based on factors such as race and gender.

Access documents and tools on the evidence base for IECMHC.

Find additional resources in the IECMHC Toolbox, including guidance on:

Last Updated: 02/02/2018