
Computer Assisted Interviewing for SAMHSA's National Household Survey on Drug Abuse

Executive Summary

The National Household Survey on Drug Abuse (NHSDA) is the primary source of statistical information on the use of illegal drugs by persons in the United States. Sponsored by the Substance Abuse and Mental Health Services Administration (SAMHSA), the survey collects data by administering questionnaires to a representative sample of the population through face-to-face interviews at their place of residence.

The survey is designed to produce yearly cross-sectional estimates of substance use and abuse for the U.S. civilian, noninstitutionalized population aged 12 years or older. Prevalence estimates are produced for the use of alcohol, tobacco, marijuana and hashish, cocaine, heroin, inhalants, and hallucinogens, as well as for the nonmedical use of psychotherapeutic drugs. From the NHSDA's inception in 1971 until 1998, data were collected using paper-and-pencil interviewing (PAPI) methods with a combination of interviewer-administered questions and self-administered questionnaires (SAQs; i.e., respondent-completed answer sheets). In 1999, the NHSDA underwent a redesign that involved the implementation of computer-assisted interviewing (CAI) data collection methods. This report describes the development of this new methodology.

Background

Throughout the history of the NHSDA, there has been a continual focus on evaluating and improving the methodology of the survey. Methodological research and improvement have focused on content, sample design, questioning strategies, editing methods, and estimation procedures. The PAPI methodology had a number of limitations and problems that were identified and studied in this research. The advent of CAI procedures during the early 1990s, particularly audio computer-assisted self-interviewing (ACASI), clearly offered an opportunity for improving the measurement methods. Numerous studies have demonstrated that computer-assisted personal interviewing (CAPI) improves the quality of survey data and that ACASI results in increased reporting of sensitive issues.

The ACASI methodology allows the respondent to listen to questions through a headset and/or to read the questions on the computer screen. Respondents also key their own answers into the computer. Thus, greater privacy can be assured for the respondent even in interview settings that might not otherwise be considered sufficiently private. Programming the questionnaire allows for a more complex contingent questioning structure and strategy, in a format where the routing is less visible to the respondent, and makes it possible to incorporate consistency checks during the interview.

In addition to incorporating the use of CAI instruments for collecting data from respondents, the use of electronic screening of households was implemented in 1999. Prior to 1999, NHSDA interviewers used complex paper forms to conduct the 5-minute interview to determine the housing unit composition and to select the sample persons. This procedure was difficult to manage, prone to error, expensive to process, and limiting in terms of the sample selection algorithms that could be implemented.
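To make the contrast concrete, the sketch below (in Python) shows a minimal programmed within-household selection step: it simply draws one eligible roster member aged 12 or older at random. The roster structure, function name, and single-selection rule are illustrative assumptions; the actual NHSDA selection algorithm uses more variables and assigns selection probabilities by age group.

    import random

    def select_sample_person(roster, seed):
        """Pick one eligible (age 12 or older) household member at random, reproducibly."""
        eligible = [person for person in roster if person["age"] >= 12]
        if not eligible:
            return None                    # no interview in this household
        rng = random.Random(seed)          # seeded so the selection can be reproduced and audited
        return rng.choice(eligible)

    # Hypothetical three-person roster; only the two members aged 12+ are eligible.
    roster = [{"line": 1, "age": 44}, {"line": 2, "age": 15}, {"line": 3, "age": 8}]
    print(select_sample_person(roster, seed=199901))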

In 1995, SAMHSA decided to initiate development and testing of CAI in the NHSDA. The development was accomplished primarily under a contract awarded to the Research Triangle Institute (RTI) in early 1996. SAMHSA carefully considered the shift to computer-assisted screening and interviewing, requiring extensive testing and proof of the feasibility of the new technology for the NHSDA. The testing protocol included a small initial feasibility experiment in the fourth quarter of 1996, cognitive laboratory testing, a large (n = 1,982) field experiment in the fourth quarter of 1997, and a final pretest conducted in August 1998.

1996 CAI Feasibility Experiment

The 1996 CAI feasibility experiment was designed to assess the operational feasibility of using an electronic version of the NHSDA, the impact on perceptions of privacy, the length of the interview, the effect of CAI on the interviewing environment, and the quality of data provided. This study compared two versions of the CAI to the 1996 PAPI with a sample of 435 respondents in 20 purposively selected NHSDA primary sampling units (PSUs). The main results of this study were as follows:

  1. CAPI reduced the time it took for interviewers to complete the personal interview component.

  2. Respondents were, in general, able and willing to complete the extended ACASI interview with little help from an interviewer.

  3. Interviewers were much less likely to know what the respondents' answers were when ACASI was used than when PAPI/SAQ was used.

  4. ACASI appeared to increase reporting of past year and past month marijuana and cocaine use.

Cognitive Laboratory Testing

Following the CAI feasibility experiment, in the spring of 1997 several rounds of cognitive testing were conducted in RTI's Laboratory for Survey Methods and Measurement in order to further develop the ACASI methodology to be used in the 1997 field experiment. Fifty respondents were recruited. Three topics were explored in this lab testing: the voice used in the ACASI portion of the interview, a new method for asking the frequency of use question, and a method for resolving inconsistent responses during the interview.

A voice for the ACASI instrument was selected on the basis of lab respondents' ratings of eight voices that varied on several voice characteristics.

In the NHSDA prior to 1999, the question asking the number of days a respondent used a substance in the past 12 months had nine response categories that combined the number of days with a periodicity estimate (e.g., "at least 12 but not more than 24 days [1-2 days a month]"). This was particularly confusing for respondents who, for example, drink alcoholic beverages every day during a 2-week vacation each year but at no other time (i.e., 14 is the appropriate number of days, but "1-2 days a month" is not appropriate). A new method was tested in which the respondent selects the unit for reporting (i.e., days per month, days per week, or total days during the year). This procedure worked well and was incorporated into the 1997 field experiment.
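For analysis, reports made in different units must be placed on a common scale. The Python sketch below shows one plausible way to do that; the function name, unit labels, and conversion factors are assumptions for illustration, not NHSDA's actual editing rules.

    def days_used_past_year(unit, count):
        """Convert a (reporting unit, count) answer into estimated days of use in the past 12 months."""
        if unit == "days_per_week":
            estimate = count * 52       # 52 weeks in the reference year
        elif unit == "days_per_month":
            estimate = count * 12       # 12 months in the reference year
        elif unit == "total_days_per_year":
            estimate = count            # already an annual total
        else:
            raise ValueError("unknown reporting unit: " + unit)
        return min(estimate, 365)       # cap at the length of the reference period

    # The vacation-only drinker described above can now answer directly in total days.
    print(days_used_past_year("total_days_per_year", 14))   # 14
    print(days_used_past_year("days_per_month", 2))          # 24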

A two-stage resolution methodology was developed for the field experiment. When the computer detected that a response was inconsistent with an earlier response, the respondent was first asked to verify whether the second response was correct, and then to resolve the inconsistency by correcting one or both of the responses.
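The Python sketch below illustrates how such a two-stage check might flow, using a hypothetical inconsistency (an age at first use that exceeds the respondent's current age). The prompts, function names, and the check itself are illustrative assumptions rather than the NHSDA instrument's actual wording or logic.

    def ask(prompt):
        """Stand-in for an ACASI screen: display/read the prompt and accept a keyed answer."""
        return input(prompt + " ")

    def resolve_age_inconsistency(current_age, age_first_use):
        """Two-stage resolution: verify the triggering response, then correct the responses."""
        if age_first_use <= current_age:
            return current_age, age_first_use   # no inconsistency detected

        # Stage 1: ask the respondent to verify the response that triggered the check.
        confirmed = ask("You reported first using at age %d, but earlier gave your age as %d. "
                        "Is %d correct? (y/n)" % (age_first_use, current_age, age_first_use))

        # Stage 2: resolve the inconsistency (the real instrument allows correcting one or both responses).
        if confirmed.lower().startswith("y"):
            current_age = int(ask("Please re-enter your current age:"))
        else:
            age_first_use = int(ask("Please re-enter the age when you first used:"))
        return current_age, age_first_use

    # Example: consistent responses trigger no prompts.
    print(resolve_age_inconsistency(current_age=35, age_first_use=17))   # (35, 17)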

1997 Field Experiment

The 1997 field experiment, conducted during the fourth quarter of 1997, evaluated alternative versions of an ACASI NHSDA using a 2x2x2 factorial design with a sample of 1,982 respondents, including 1,117 youths 12 to 17 years of age. The overall goal of the field experiment was to identify an ACASI version of the NHSDA that produced higher quality data without adversely affecting respondent burden or increasing breakoffs (i.e., incomplete interviews). Random halves of the sample were assigned to one of two levels within each of three experimental factors, described below.

  1. Factor 1: Structure of the contingent questioning in the CAI interview. In the single gate question version, respondents were first asked if they had ever used a substance and were routed immediately to the next section if they had not. Under the multiple gate question version, every respondent answered three gate questions for each substance: use in the past 30 days, use in the past 12 months, and lifetime use. Only those respondents who answered "no" to each of the three questions were routed to the next section. (A sketch contrasting the two routing structures follows this list.)

  2. Factor 2: Data quality checks within the ACASI interview. For a random half of the respondents, the ACASI program included additional questions that followed up on inconsistent answers and questionable reports, such as a suspiciously low age of first use for a substance.

  3. Factor 3: Number of chances to report 30-day and 12-month use. This factor was included at two levels: a single opportunity to report use and multiple opportunities to report use. Under the single opportunity to report use, respondents were only asked once about use during the past 30 days or during the past 12 months. Under the multiple opportunity version, respondents who indicated at least lifetime use of a substance were routed through additional follow-up questions even though they had not indicated use in the particular time period.
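As a concrete illustration of Factor 1, the Python sketch below contrasts the two gate-question structures. The question wording, function names, and dictionary keys are invented for illustration; the actual instrument's routing is considerably more elaborate.

    def ask_yes_no(prompt):
        return input(prompt + " (y/n) ").lower().startswith("y")

    def single_gate(substance):
        """One lifetime gate question; respondents who have never used skip to the next section."""
        answers = {"lifetime": ask_yes_no("Have you ever used %s?" % substance)}
        if answers["lifetime"]:
            answers["past_12_months"] = ask_yes_no("Used %s in the past 12 months?" % substance)
            answers["past_30_days"] = ask_yes_no("Used %s in the past 30 days?" % substance)
        return answers

    def multiple_gate(substance):
        """Every respondent answers all three gate questions before any routing occurs."""
        answers = {
            "past_30_days": ask_yes_no("Used %s in the past 30 days?" % substance),
            "past_12_months": ask_yes_no("Used %s in the past 12 months?" % substance),
            "lifetime": ask_yes_no("Have you ever used %s?" % substance),
        }
        # Only respondents answering "no" to all three gates skip the rest of the section.
        answers["skip_to_next_section"] = not any(
            answers[key] for key in ("past_30_days", "past_12_months", "lifetime"))
        return answers

    # e.g., answers = single_gate("marijuana") under the single-gate version.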

In addition to the experimental treatments, interviewer and respondent debriefing questions were included. A subsample (n = 3,105) of the Quarter 4 national NHSDA, which used a combination of interviewer-administered PAPI and SAQ, comprised the control group for the study. This comparison group was restricted to those 1997 NHSDA respondents who were in the same PSUs that contained the field experiment sample.

Effect of experimental factors on prevalence. Contrary to expectations, the single gate version rather than the multiple gate version resulted in increased reporting of drug use, particularly for the illicit substances (i.e., marijuana, cocaine, any illicit, and any illicit but marijuana). This was especially true for use during the past 30 days. In addition, any illicit drug use showed higher prevalence ratios for all three reference periods in all three age categories, except for 12-month use of any illicit drug but marijuana for 12 to 17 year olds.

On the whole, when consistency checks were present, respondents gave somewhat higher reports of drug use across all drugs, for all reference periods, and all age groups. In addition, 12 to 17 year olds showed an overall tendency toward higher reporting in the version with a single opportunity to report use in a reference period. The analysis for the total sample and persons 18 years old or older did not show any pattern in favor of either treatment group.

ACASI versus PAPI. There was an overall tendency for ACASI to yield higher reports of drug use. This was especially evident among the 12 to 17 year olds where the differences were quite dramatic.

For the total sample, the difference between the ACASI and the PAPI rates of use was significant at the 0.1 level for cocaine and any illicit drug. For the 12 to 17 year olds, rates of use of every drug but cocaine showed some significant differences at the 0.05 level. The ACASI versus PAPI differences in rates of use of any illicit drug were significant at 0.05 for all three reference periods.

Respondents' ease of answering questions. Respondents were asked to rate their ability to record their answers using their particular interview mode, ACASI or PAPI, without the help of the field interviewer. Overall, a large percentage of both groups reported not needing the interviewer's help when entering answers; however, a larger percentage of the ACASI respondents indicated they needed no help (88.3% vs. 73.5%). This difference was even larger for youths: 20% fewer of the youth ACASI respondents indicated that they required help from the interviewer. Adults with less than a high school education also found the ACASI interview easier to complete on their own (83.2% vs. 68.1%).

Level of comfort in answering questions. Respondents also were asked about their level of comfort with the interviewing environment. Overall, ACASI respondents were more likely to report that they were comfortable (73.9% vs. 62.3%). Under both modes, youths were less comfortable than adults, but showed an increase of 15% in ACASI over PAPI. Additionally, about 65% of ACASI respondents who reported any illicit drug use in the past 30 days indicated that they felt comfortable using the computer. This compares to 59.6% of PAPI illicit drug users who said they felt comfortable using paper and pencil, indicating a preference for the ACASI interview mode among illicit drug users.

Recorded voice. The recorded voice was helpful to those respondents who could not read well. In one of the debriefing questions, respondents were asked to rate their own reading ability. Those who could not read well found the recorded voice more useful. Only 14.7% of the respondents with excellent reading ability felt that the voice helped them a lot, whereas 48.7% of the fair to poor readers indicated that it helped them a lot.

Privacy. ACASI was a major factor in increasing the respondents' perception that the interview was private. In all demographic groups, nearly twice as many of the ACASI respondents reported that the interviewer saw none of their answers—overall 82.6% for ACASI versus 41.3% for PAPI. Nearly 40% of PAPI respondents indicated that the interviewer saw some of their answers, whereas only 13.1% of the ACASI respondents indicated such.

When asked which interview mode provided the best privacy protection, approximately 40% to 50% of the Quarter 4 PAPI comparison group favored the computer, 10% to 13% favored the answer sheet, and nearly 25% indicated that either method would protect their privacy equally well. Furthermore, users of illicit substances were more likely to say that ACASI provided better privacy protection.

Results from the revised 12-month frequency-of-use items. For every drug except heroin (where sample sizes were quite small), CAI respondents reported greater frequency of use than did the PAPI respondents. With the exception of heroin, a greater percentage of CAI respondents provided an estimate of 12-month use that placed them in the upper half of the frequency scale (51 days or more) as compared to the PAPI respondents. This was especially true for youths, who consistently reported higher frequencies of use in CAI than in PAPI. This trend was not entirely consistent for adult respondents. The data indicate that the majority of respondents appeared to be responding as intended. Most of the respondents who selected the monthly or yearly reporting periods were infrequent users.

Effect of ACASI on mental health questions. Questions about adult mental health syndromes were included in the NHSDA instrument from 1994 to 1997. In the PAPI instrument, these questions were interviewer administered. In the 1997 field experiment, the adult mental health questions were included in the ACASI interview.

There was a definite trend for ACASI to yield higher estimates of these mental health syndromes compared with PAPI. In particular, the rate of a likely major depressive episode based on ACASI (14.6%) was nearly double the rate for PAPI (7.4%). Similarly, the ACASI estimate for generalized anxiety disorder (5.8%) was nearly four times the PAPI rate (1.6%). The effects were particularly pronounced for males; the ACASI estimate for generalized anxiety disorder among males (5.1%) was more than 4.5 times the PAPI rate for males (1.1%). Similarly, ACASI estimated that 12.7% of adult males were probable cases for major depressive episode, or more than twice the PAPI rate of 5.5% for males.

Reporting of nonmedical use of psychotherapeutic drugs. There was a single version of the ACASI program for questions on the nonmedical use of the psychotherapeutic drugs, which include analgesics, tranquilizers, stimulants, and sedatives. For these drugs, there were significant differences in question structure between the CAI and PAPI, which resulted in higher reported rates of use in CAI.

Overall, reporting of lifetime nonmedical use of analgesics roughly tripled, from an estimated 4.9% of the population to 14.8%. Both youths and adults showed dramatic increases, with the youth lifetime prevalence rate being 3.7 times higher under ACASI and the adult rate 3 times higher. Use of analgesics during the past 12 months showed a similar dramatic increase. For the other psychotherapeutic drugs, similar but less dramatic results were obtained. For example, overall, the reported lifetime prevalence of nonmedical tranquilizer use was 2.7 times higher under ACASI, stimulant use was 1.8 times higher, and sedative use was 2.5 times higher.

Ability to resolve inconsistencies. Approximately 28% of the respondents assigned to receive an interview that would require resolution of inconsistencies triggered at least one such check item.

Overall, the consistency resolution methodology was successful. The methodology improved the quality of the data collected without adversely affecting respondent cooperation or burden. Using this methodology in future implementations of the NHSDA will allow SAMHSA to capitalize on the numerous benefits of the ACASI technology while minimizing one of the potential pitfalls—that respondent errors and inconsistencies are not identified and corrected at the time of interview.

Final Pretest

The final pretest was conducted in August 1998 and consisted of a field test and concurrent cognitive laboratory interviews. Laboratory interviews were conducted with adolescents to test a new set of tobacco questions, including usual brand of each tobacco product, and a new question on "month of first use" for better incidence data on persons recently initiating substance use. Interviews were conducted with drug treatment clients to test updated "pill cards" and revised questions on nonmedical use of prescription drugs, as well as new questions to estimate withdrawal symptoms related to use of specific drugs. Primarily, the field test served as a final test of all procedures for the 1999 NHSDA, incorporating all of the best features from prior testing.

Problems with the hardware, software, and procedures were identified in the field test through interviewer debriefings and corrected as much as possible. Lab testing identified problems with question wording, and modifications to questions were made prior to fielding the 1999 survey.

Development of an Electronic Screener

The electronic screener was developed in three phases: it was first used in the 1997 NHSDA field experiment, revised and tested on a small scale in the spring of 1998, and tested a final time as part of the August 1998 pretest.

Benefits of the electronic screener include the following:

  1. elimination of interviewer errors in the selection process, including accidental errors and intentional tampering with the roster by the interviewer to achieve certain selections;

  2. the capability to include more variables in the respondent selection algorithms;

  3. reduction of data editing, data entry, and shipping costs; and

  4. more detailed and timely information on field activities.
