Computer Assisted Interviewing for SAMHSA's National Household Survey on Drug Abuse

3. Background: Literature Review and Research Issues

Survey administrators at the Substance Abuse and Mental Health Services Administration (SAMHSA), and previously at the National Institute on Drug Abuse (NIDA), have continually striven to improve and refine the NHSDA's methodology while ensuring the comparability of trend data. Proposed changes have therefore been carefully evaluated so that their effects on trend data are clearly understood, and major changes in data collection and estimation procedures have been instituted only after thorough testing, evaluation, and consultation. For the NHSDA, this caution is especially important because of its focus on highly sensitive topics.

3.1 Prior Research

Previous research has indicated that the validity of self-reported substance abuse data is highly dependent on the methods used to collect the data (Aquilino, 1994; Aquilino & LoSciuto, 1990; Duffy & Waterton, 1984; Schober, Fe Caces, Pergamit, & Branden, 1992; Turner et al., 1992a). Moreover, research showing the feasibility of audio computer-assisted self-interviewing (ACASI) and its potential for improving the reporting of sensitive behaviors was pivotal in SAMHSA's decision to develop and test computer-assisted interviewing (CAI) procedures for use in the NHSDA (Duffer, Lessler, Weeks, & Mosher, 1996; O'Reilly, Hubbard, Lessler, Biemer, & Turner, 1994; Turner, Ku, Sonenstein, & Pleck, 1996). Successful implementation of CAI procedures in other surveys did not necessarily mean that CAI would work in the NHSDA, given its unique data collection method and the scope of sensitive topics covered. Thus, extensive testing of a computer-assisted NHSDA was required to prove its feasibility.

A number of benefits may accrue from using CAI methods. Simply moving to a computer-assisted personal interviewing (CAPI) mode of data collection provides several benefits:

  1. Routing can be controlled by the computer rather than by the interviewer, so fewer routing errors are made, and questionnaires can be more complex than could be scripted on paper (see the sketch following this list).

  2. Question items cannot be inadvertently skipped. An answer must be recorded for each question that appears on the screen.

  3. Questions are presented to all respondents in the same order.

  4. Out-of-range and inconsistent responses can be identified "on the spot" and corrected.

  5. Data can be processed more quickly and analysis files created more easily.

  6. Customized question wordings are handled easily, and "fills" are generated automatically by the system.

  7. Numeric calculations can be performed by the computer software at the time of interview.

  8. On-screen question-by-question specifications can be provided to the interviewers rather than requiring them to refer to hard-copy specifications.

  9. Information about the respondent (either from sampling frame data or from previous interview data) can be preloaded and used to customize question wordings and routings.

  10. Timing data can be collected more easily, even down to the individual question level.
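
To make several of these benefits concrete, the following minimal sketch (in Python, using hypothetical question wording, fill names, and response codes that are not drawn from the NHSDA instrument or its actual software) shows how a computer-administered item can enforce a valid response range, substitute a preloaded "fill" into the question wording, and record item-level timing.

    import time

    def ask(prompt, valid, fills=None):
        """Administer one item: apply wording fills, enforce the valid range, and time the response."""
        text = prompt.format(**(fills or {}))
        start = time.time()
        while True:
            answer = input(text + " ").strip()
            if answer in valid:                      # out-of-range entries are caught on the spot
                return answer, time.time() - start   # recorded answer plus item-level timing
            print("Please enter one of:", ", ".join(sorted(valid)))

    # A value preloaded from earlier data (hypothetical) customizes the question wording;
    # only the codes 1 through 5 are accepted.
    answer, seconds = ask(
        "How would you rate the overall health of {name}? (1=Excellent ... 5=Poor)",
        valid={"1", "2", "3", "4", "5"},
        fills={"name": "this household member"},
    )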

Prior research suggests that the enhanced privacy under ACASI may increase the willingness of respondents to report sensitive behaviors. Other benefits that may accrue from using ACASI include (a) enhanced ability to interview respondents who speak languages other than English or Spanish; (b) improved standardization of the question presentation, thereby decreasing the interviewer component of measurement variance; and (c) increased ability to maintain the privacy of the interview for illiterate and semiliterate respondents.5

Prior to 1996, three large national surveys that included questions on sensitive topics had adopted CAI methods with ACASI components. In 1995, Cycle V of the National Survey of Family Growth (NSFG), sponsored by the National Center for Health Statistics (NCHS), was converted to CAPI with a short (approximately 8 minutes) ACASI module. The sample for the Cycle V NSFG was a list sample of women between the ages of 14 and 44 residing in households included in the 1993 National Health Interview Survey. The ACASI module included questions about abortions, number of sexual partners, HIV risk behaviors, forced sex, and family violence. Results from a pretest of approximately 800 women found higher reporting of abortions in ACASI than when the questions were administered by the interviewer during the CAPI portion of the interview (Lessler, Weeks, & O'Reilly, 1994). Results from the main study of approximately 10,800 women showed that 3% of unmarried women told the interviewer they had had four or more male sexual partners in the past 12 months compared with 9% reporting four or more partners in ACASI (Abma, Chandra, Mosher, Peterson, & Piccinino, 1997).

Operationally, the ACASI module worked well. A staff of 260 interviewers worked on the Cycle V NSFG. Interviewers reported positive reactions from respondents who completed the ACASI module. One negative aspect of the ACASI implementation was the use of an external audio box, which added weight for the interviewer to carry. Setting up the external audio box also proved difficult for some of the NSFG interviewers. Kelly, Mosher, Duffer, and Kinsey (1997) speculated that the use of internal voice cards would reduce this operational burden associated with ACASI.

The National Survey of Adolescent Males (NSAM), sponsored by the National Institute of Child Health and Human Development (NICHD), also incorporated an ACASI component. The study collects data on the sexual and HIV risk behaviors of young men in the United States. The NSAM-1 is a longitudinal study of young men who were aged 15 to 19 in 1988; these men were interviewed again in 1991. As part of the 1995 NSAM, the original sample members were interviewed a third time, and a second panel (NSAM-2) of young men aged 15 to 19 was also interviewed. In 1995, a methodological experiment was included that compared data on sensitive topics collected in a traditional, hard-copy self-administered questionnaire (SAQ) with data collected using ACASI. Comparing ACASI with the SAQ, Turner et al. (1998) reported that respondents were nearly four times as likely to report male-to-male sex when the ACASI methodology was used (5.5% vs. 1.5%). ACASI respondents also reported higher rates of other socially stigmatizing or illegal behaviors, including drug use, sexual contact with a prostitute, sexual activities while under the influence of drugs or alcohol, and a variety of violence measures.

Operationally, ACASI achieved the same positive review as in the NSFG. A total of 123 interviewers were trained for the study, and only a small number of computer problems were reported (Turner et al., 1996). As with the NSFG, however, the primary improvement suggested for ACASI was to replace the external audio boxes with internal sound cards.

The National Longitudinal Survey of Adolescent Health also included an ACASI component during an in-home interview (Bearman et al., 1997). Approximately 12,000 adolescents in grades 7 to 12 were selected from rosters obtained from the schools that participated in the in-school data collection. The interview was approximately 90 minutes long and included both CAPI and ACASI components. The ACASI component asked sensitive questions about contraception, romantic relationships, sexual activity outside of a relationship, motivations for birth control, and use of tobacco, alcohol, and drugs; the adolescents also completed a delinquency scale (Nicols et al., 1997). The final data collection report did not indicate any problems with using ACASI to collect the sensitive information.

Although the potential benefits of CAI methods for the NHSDA seemed great, SAMHSA carefully considered its shift to the new technologies. Not only are the start-up costs large (i.e., development work and purchase of hardware), but the impact on trend data is also a major consideration. Trend data are particularly critical because the NHSDA is a key data source in tracking the effects of the changes in the welfare and health care systems on substance abuse and also in tracking the recent surge in illicit drug use among our Nation's youths.

3.2 Design and Operational Issues Addressed

A number of issues had to be successfully resolved before an automated NHSDA instrument could be considered a viable possibility.

3.2.1 Critical Design Issues

Work at Research Triangle Institute (RTI) concentrated on three issues that were critical to deciding on the CAI approach to the study, namely, interview length, use of contingent questioning, and resolution of inconsistent responses. Three key questions were asked:

  1. Are respondents willing to take part in an extended ACASI interview?

  2. Should skip patterns (i.e., contingent questioning) be included in the core sections of the NHSDA instrument?

  3. Can we expect respondents to be able to resolve inconsistent answers in the ACASI portion of the interview?

For the NHSDA to successfully employ ACASI, respondents had to be willing to use a computer to complete the self-administered portion of the NHSDA interview. Depending on how much of the NHSDA instrument was programmed for ACASI administration and to what extent contingent questioning was employed, an ACASI interview could last 30 minutes or longer. As noted, previous studies had reported few respondent difficulties. However, these studies had significantly shorter ACASI sections. There was concern that respondents would be unwilling to take part in the interview, might break off before the interview was completed, or might provide poor quality data as a result of having to complete so much of the interview using the computer.

Survey methodologists working on the NHSDA were also concerned that the respondent would feel uncomfortable having the interviewer in his or her home with little, if anything, to do for such a large part of the interview. Similarly, there was a need to assess how interviewers would react to having the interview out of their control for so much of its length because the typical interaction between respondent and interviewer would be significantly altered in the automated NHSDA. Thus, the research staff examined whether this change was likely to have any consequences for the quality of the data collected or for the types of individuals chosen to work as interviewers for the NHSDA in the future.

Having CAI procedures available for the self-administered portions of the interview offered the possibility of using contingent questioning in the interview. The NHSDA did not utilize contingent questioning in most of the core answer sheets. Skip patterns were incorporated, however, into the supplemental answer sheets. One reason that contingent questioning was not used in the core sections related to concerns about respondents' ability to accurately follow the routing. This was no longer a concern with an automated instrument. With proper programming and testing, computer programs will route the respondent to the next appropriate question based on the answers he or she provides.
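
As an illustration of contingent questioning, the following sketch (in Python, with hypothetical question wordings that are not the actual NHSDA items) routes a respondent who has never used a drug past the follow-up questions, while a lifetime user receives the 12-month and 30-day items.

    def ask_yes_no(question):
        """Prompt until the respondent enters 'yes' or 'no'."""
        while True:
            answer = input(question + " (yes/no) ").strip().lower()
            if answer in ("yes", "no"):
                return answer

    def drug_module(drug):
        """Administer one drug module; a 'no' at the lifetime question skips the follow-ups."""
        responses = {"lifetime": ask_yes_no(f"Have you ever, even once, used {drug}?")}
        if responses["lifetime"] == "no":
            return responses  # a nonuser exits the module immediately
        responses["past_12_months"] = ask_yes_no(f"Have you used {drug} in the past 12 months?")
        if responses["past_12_months"] == "yes":
            responses["past_30_days"] = ask_yes_no(f"Have you used {drug} in the past 30 days?")
        return responses

    answers = drug_module("marijuana")

Because a "no" ends the module almost immediately, routing of this kind underlies the response-effect and privacy concerns discussed below.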

There remained, however, unresolved issues as to how best to incorporate contingent questioning in the core sections of the NHSDA. In the paper NHSDA instrument, the practice of asking respondents every question in the core sections regardless of their prior answers provided multiple chances to assess use in certain time periods. Depending on how contingent questioning was incorporated, such multiple assessments would no longer be possible. But perhaps the most important issue to resolve was whether including contingent questioning in the core sections would alter the way that respondents answered the questions. For example, respondents might learn that answering "yes" to a question about their use of a drug results in additional questions, whereas answering "no" moves them to a new section of the instrument. To the extent that a respondent wants to complete the interview quickly, he or she might choose to answer in the negative even when that is not an accurate answer. Such response effects are well documented in the survey literature (see Dijkstra & van der Zouwen, 1982). With the special importance placed on using the NHSDA to monitor trends in drug use, any response effects caused by incorporating additional contingent questioning needed to be scrutinized so that their impact on trend data could be assessed. At the same time, the researchers realized that the inclusion of contingent questioning would reduce the burden associated with the NHSDA for respondents who do not use drugs. This reduction in burden could either be retained as a shorter interview or be used to incorporate new questions targeted toward nonusers into the NHSDA.

One further reason for excluding contingent questioning from the core sections of the NHSDA has been to protect the privacy of the respondent. Because every respondent had to answer every question on each of the core answer sheets, it was difficult for an interviewer to know whether or not a respondent was a user of the drug type. The use of contingent questioning can cause nonusers to finish the NHSDA questionnaire faster than users.6 Although interviewers are probably less able to identify users and nonusers under an ACASI methodology than under the answer sheet format, whether respondents perceive contingent questioning as a threat to privacy was assessed during the conversion process.

Finally, the third critical issue was whether consistency checks should be used in the ACASI portion of the automated NHSDA instrument. With sufficient time for programming, very complex consistency checks can be built into the computer-assisted NHSDA. However, it was not known whether respondents could handle the process of resolving their own inconsistencies, particularly when the information being collected is especially sensitive. Because of the extensive inconsistent data in prior rounds (i.e., years) of the NHSDA, the methodologists felt that incorporating consistency checks would improve the quality of the data collected: inconsistent answers would be resolved by the respondent rather than by an editing clerk after the fact. At the same time, a respondent who is prompted once too often by the computer to resolve an inconsistency in his or her data might simply refuse to continue with the interview.

The decision of how and to what extent to use consistency checks was closely tied to the decision regarding contingent questioning structures. The more extensive the use of these structures, the fewer the number of consistency checks that would be needed. However, if the automated NHSDA had only used contingent questioning structures as they appeared in the paper answer sheets, the total number of consistency checks could be quite large.
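
For illustration, the following sketch (in Python, with hypothetical wording, not the actual NHSDA checks) shows the kind of consistency check at issue: when the reported age at first use exceeds the respondent's current age, both answers are displayed and the respondent is asked to re-enter them before the interview continues.

    def ask_number(question):
        """Prompt until the respondent enters a whole number."""
        while True:
            answer = input(question + " ").strip()
            if answer.isdigit():
                return int(answer)

    current_age = ask_number("How old are you?")
    age_first_use = ask_number("How old were you the first time you used marijuana?")

    # Consistency check: age at first use cannot exceed current age.
    while age_first_use > current_age:
        print(f"You reported that you are {current_age} years old now but were {age_first_use} "
              "the first time you used marijuana. Please check those two answers.")
        current_age = ask_number("How old are you?")
        age_first_use = ask_number("How old were you the first time you used marijuana?")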

3.2.2 Operational Issues

A number of operational issues also needed to be resolved if an automated NHSDA was to be considered viable. Each of these issues is outlined briefly below.

Using showcards and pillcards. The paper NHSDA questionnaire included both showcards and pillcards. In the self-administered portion of the interview, showcards were used to display lists of drugs that the respondent should think about as he or she answered a particular set of questions. They were also used during the interviewer-administered portion of the interview for questions that have a large number of response categories, such as work status or income. Four pillcards were used during the self-administered portion of the interview to show the respondent pictures of prescription analgesics, tranquilizers, stimulants, and sedatives. Provisions for incorporating these showcards and pillcards were required, particularly in the portion of the NHSDA that was to be converted to an ACASI format.

Incorporating "other, specify" questions. A small number of questions in the NHSDA instrument require the respondent to write in an answer rather than simply mark a box or fill in a number. In most cases, these are "other, specify" questions where the respondent indicated an answer other than those listed and there is space designated for him or her to write in that other answer. Handling open-ended text within the ACASI portion of the NHSDA might be problematic if respondents are not familiar with a computer keyboard or have little experience with typing. This required investigation before the NHSDA could be converted to a CAPI/ACASI format.

Using long sets of response categories. A few questions in the self-administered portion of the NHSDA interview had especially large numbers of response categories. When these questions are programmed for ACASI, the categories barely fit on one screen, and the screen appears especially daunting to respondents. Similarly, there was concern that respondents relying on the recorded voice to help them select a category might be overwhelmed by the amount of information they must process. Alternatives to these long lists were needed. For example, several of the questions ask the respondent to report the number of days he or she used a particular drug during the past 12 months. For these questions, there were 11 response options. Alternatives, such as leaving the question open-ended and allowing the respondent to type in a number, or using some type of unfolding methodology, were likely to create response effects that needed to be well documented. Because the 12-month use items are important items in analysis, they required special care during the conversion to ensure that changes to these items were adequately tested and the effects well documented.
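
The following sketch (in Python, with hypothetical day-range brackets rather than the NHSDA's actual 11 response options) illustrates the unfolding alternative: a short series of branching questions narrows the frequency range without ever displaying a long category list on one screen.

    def ask_yes_no(question):
        """Prompt until the respondent enters 'yes' or 'no'."""
        while True:
            answer = input(question + " (yes/no) ").strip().lower()
            if answer in ("yes", "no"):
                return answer

    def days_used_past_year():
        """Unfold the past-12-month frequency into one of four broad brackets."""
        if ask_yes_no("In the past 12 months, did you use it on more than 50 days?") == "no":
            if ask_yes_no("Did you use it on more than 10 days?") == "no":
                return "1 to 10 days"
            return "11 to 50 days"
        if ask_yes_no("Did you use it on more than 200 days?") == "no":
            return "51 to 200 days"
        return "201 to 365 days"

An open-ended alternative would simply accept a typed number of days; as noted above, either alternative changes the response task and therefore required careful testing and documentation.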

Automating the screener and other related forms. In addition to automating the NHSDA questionnaire, decisions were also required regarding whether to automate other components of the survey, such as the screener or verification form.

Adding additional questions to the NHSDA. A computer-assisted instrument can allow questions to be more carefully tailored to a particular respondent. SAMHSA has given special attention to whether new questions should be added to the NHSDA to develop a better understanding of specific subpopulations. In a PAPI/SAQ instrument, it is difficult to tailor questions to respondents who report specific drug usage patterns because it means re-asking a number of questions simply to screen the correct respondents into the series of questions. For example, asking questions of respondents who indicate using marijuana, cocaine, and heroin in the past 12 months and who also report several treatment episodes in the past 12 months requires four questions to be re-asked before the new series of questions can begin. Similarly, ACASI offers the opportunity to incorporate new questions targeted toward respondents who have never used drugs.
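
The tailoring described here can be sketched as follows (in Python, with hypothetical variable names and eligibility rules, not the NHSDA's actual items): the program screens a respondent into a new question series directly from answers already stored earlier in the interview, so none of the screening items need to be re-asked.

    # Answers already recorded earlier in the interview (hypothetical names and values).
    responses = {
        "marijuana_past_12_months": "yes",
        "cocaine_past_12_months": "yes",
        "heroin_past_12_months": "yes",
        "treatment_episodes_past_12_months": 3,
    }

    # Screen into the tailored series without re-asking any of the four items.
    eligible = (
        responses["marijuana_past_12_months"] == "yes"
        and responses["cocaine_past_12_months"] == "yes"
        and responses["heroin_past_12_months"] == "yes"
        and responses["treatment_episodes_past_12_months"] >= 2
    )

    if eligible:
        print("Administer the new question series tailored to this respondent.")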

Training interviewers to conduct an automated NHSDA. Survey organizations are becoming increasingly skilled in training interviewers for computer-assisted data collections. Similarly, as CAI methods become more common, more and more interviewers have previous experience working with the technology. This can be both an asset and a liability. Interviewers who have worked on other CAPI/ACASI studies generally feel more comfortable working with a computer. However, to the extent that different studies use different software for programming, interviewers sometimes have to "unlearn" the procedures they used for the last study they worked on in order to learn the current procedures. This becomes especially problematic when interviewers are working on more than one computerized study concurrently and must keep track of which keystrokes are required to activate a given function for each application. During the NHSDA conversion process, special attention was given to developing training protocols that ensured that all interviewers and supervisors working on the computer-assisted NHSDA were thoroughly trained. Wojcik, Bard, and Hunt (1991) addressed a number of these issues, including the need for sufficient electrical capacity, a sufficient number of telephone jacks (for transmission training), and secured storage for the computer equipment. Considering the large number of interviewers employed by the NHSDA (e.g., more than 300 interviewers worked on the 1998 NHSDA), these issues were not inconsequential for a full-scale computer-assisted NHSDA training session.

5 In the paper version of the NHSDA, these respondents typically must get help from the interviewers to complete the answer sheets, which compromises privacy.

6 Other factors, such as reading ability, ease of recall, and attention, can also introduce considerable variability in the time required to complete the interview.
