Scoring and Profile Interpretation

Scoring Each DDCMHT Item

Each program element of the DDCMHT is rated on a 1-to-5 scale.

  • A score of 1 corresponds to a program focused on providing services to persons with mental health disorders. In ASAM language, this level is referred to on the DDCMHT as Mental Health Only Services (MHOS).

  • A score of 3 indicates a program that is capable of providing services to some individuals with co-occurring substance use and mental health disorders, but has greater capacity to serve individuals with mental health disorders. This level is referred to as being Dual Diagnosis Capable (DDC) by ASAM and the DDCMHT.

  • A score of 5 designates a program that is capable of providing services to any individual with co-occurring substance use and mental health disorders, and the program can address both types of disorders fully and equally. This level is referred to as being Dual Diagnosis Enhanced (DDE) on the DDCMHT.

  • Scores of 2 and 4 reflect intermediate levels between the standards established at the 1-MHOS, 3-DDC, and 5-DDE levels.

When rating a program on the DDCMHT, it is helpful to understand that the objective anchors on the scale for each program element are based on one of the following factors:

  1. The presence or absence of specific hierarchical or ordinal benchmarks: 1-MHOS sets the most basic benchmark, 3-DDC sets the mid-level benchmark, and 5-DDE sets the most advanced benchmark. For example, the first Index element, regarding the program’s mission statement, specifies the standards that must be met to score at each benchmark level (MHOS, DDC, or DDE).

    -or-

  2. The relative frequency of an element in the program, such as in the last Index element regarding clinical staff with advanced training in co-occurring disorders services. The rating 1-MHOS requires a lower percentage of staff with the required training, 3-DDC a moderate percentage, and 5-DDE the maximum percentage. Frequency may also be determined by the degree to which the process under assessment is clinician-driven and variable versus systematic and standardized. When processes are clinician-driven, they are less likely to occur consistently or to be incorporated into a program’s routine practices.

    -or-

  3. A combination of the presence of a hierarchical standard and the frequency with which that standard occurs. In other words, to score a 3 or 5 on a DDCMHT item, a program must meet a specific qualifying standard and must consistently maintain that standard for the majority of its clients (set at 80 percent). For example, the program element regarding co-occurring disorders assessment sets a qualifying standard for the type of assessment used and specifies the frequency with which the standard is routinely applied.

Scoring the DDCMHT Index

Scoring the DDCMHT will produce ratings on the seven dimensions and categorize the program as MHOS, DDC, or DDE. This is a simple way to indicate the co-occurring capacity of an agency’s program.

The total DDCMHT score and the program’s overall rating are arrived at by:

1. Tallying the number of 1s, 2s, 3s, 4s, and 5s that a program obtained.

2. Calculating the following percentages:

  1. Percentage of 5s (DDE) obtained
  2. Percentage of 3s, 4s, and 5s (scores of 3 or greater) obtained
  3. Percentage of 1s and 2s obtained

3. Applying the following cutoffs to determine the program’s DDCMHT category:

  1. Programs are DDE if at least 80 percent of scores (i.e., 28 of the 35) are 5s
  2. Programs are DDC if at least 80 percent of scores are 3s or greater
  3. Programs are MHOS if fewer than 80 percent of scores are 3s or greater
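The tally, percentage, and cutoff steps above can be sketched in code. This is an illustrative implementation of the rules as stated, not part of the toolkit itself; the function and variable names are hypothetical, while the 35-item count and 80 percent cutoffs come from the text.

```python
def categorize_ddcmht(item_scores):
    """Classify a program as DDE, DDC, or MHOS from its 35 DDCMHT item scores (1-5)."""
    if len(item_scores) != 35:
        raise ValueError("The DDCMHT has 35 items.")
    n = len(item_scores)
    pct_fives = sum(1 for s in item_scores if s == 5) / n       # percentage of 5s
    pct_three_plus = sum(1 for s in item_scores if s >= 3) / n  # percentage of 3s or greater
    if pct_fives >= 0.80:          # at least 28 of the 35 items scored 5
        return "DDE"
    if pct_three_plus >= 0.80:     # at least 80 percent scored 3 or greater
        return "DDC"
    return "MHOS"

# Example: 30 items rated 5 and 5 items rated 4 clears the DDE cutoff (30/35 > 80%).
print(categorize_ddcmht([5] * 30 + [4] * 5))  # → DDE
```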

Creating Scoring Profiles

The dimension scores are the averages of the item scores within each dimension. Dimension scores can be examined for relative highs and lows and may be connected with the agency’s readiness to address specific, if not all, areas. These averages can also be depicted on a line graph and presented as the program’s profile (see the References and Downloads page for an example). Horizontal lines can mark points above or below the benchmark criteria (e.g., DDC), serving as a visual aid that focuses the assessor and program leadership on dimensions that are strengths and on areas for potential development. This chart can be very useful for guiding feedback and targeting program enhancement efforts. Lastly, the visual depiction can be enlightening if DDCMHT assessments are conducted at two or more points in time: as a continuous quality improvement measure, the profile depicts change or stabilization by dimension.
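A minimal sketch of computing a dimension profile, assuming item scores have already been grouped by dimension; the dimension labels below are placeholders, not the DDCMHT’s actual dimension names.

```python
def dimension_profile(items_by_dimension):
    """Average the item scores within each dimension to form a profile."""
    return {dim: sum(scores) / len(scores)
            for dim, scores in items_by_dimension.items()}

# Placeholder dimension names and scores for illustration only.
profile = dimension_profile({
    "Dimension I": [5, 4, 3],
    "Dimension II": [3, 3, 4, 2],
})
# Dimensions averaging at or above 3.0 sit above the DDC benchmark line on the chart.
print(profile)  # → {'Dimension I': 4.0, 'Dimension II': 3.0}
```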

An Excel Program for the DDCMHT 4.0, available on the References and Downloads page, accepts item scores and generates dimension and overall DDCMHT ratings. This program also generates graphic profiles for display of the program’s ratings across the seven DDCMHT dimensions.

Feedback to Programs

Feedback to programs based on their assessment is typically provided in two formats: verbal feedback and a written report.

First, at the end of the DDCMHT site visit, agency directors and leadership may receive some preliminary verbal feedback. A suggestion is to focus on the strengths of the program and, where possible, join with those issues that have already been identified as quality improvement issues by the agency/program staff members themselves. This could be seen as a parallel to motivational interviewing techniques.

The second format is via written report, which can be structured in several different ways. The report may be in the form of a summary letter to the agency director or a more formal structured report. Regardless of the format, the feedback letter or report should include:

  • a communication of appreciation;
  • a review of what programs and sources of data were assessed;
  • a summary of their scores, including their categorical rating of MHOS, DDC, or DDE, and a graph from the Excel workbook that shows the seven dimension scores;
  • an acknowledgment of relative strengths in existing services; and
  • empathic and realistic suggestions of potential areas that can be targeted for enhancement.

Additional components that the report could include:

  • a graphical display of the program’s overall and dimension scores compared to their region/county/state’s overall averages;
  • a discussion and graph showing the changes since baseline if the assessment is a follow-up.

Conversation and written summaries about dimensions, as well as themes across dimensions, are often the most useful ways for providers to consider where they are and where they want to go. The report may include specific recommendations (e.g., listing and describing specific screening measures to systematize screening for co-occurring disorders) or make mention only of thematic areas of potential improvements.