1997 National Household Survey on Drug Abuse:  Preliminary Results



I. Target Population

An important limitation of the NHSDA prevalence estimates is that they are designed to describe only the survey's target population: the civilian, noninstitutionalized population. Although this group includes more than 98% of the total U.S. population, it excludes some important subpopulations whose drug use patterns may be very different. The survey excludes active military personnel, who have been shown to have significantly lower rates of illicit drug use. Persons living in institutional group quarters, such as prisons and residential drug treatment centers, are not covered by the NHSDA and have been shown in other surveys to have higher rates of illicit drug use. Also excluded are homeless persons not living in a shelter on the survey date, another population shown to have higher than average rates of illicit drug use. Appendix 3 describes other surveys that provide data for these populations.

II. Sampling Error and Statistical Significance

The sampling error of an estimate is the error caused by the selection of a sample instead of conducting a census of the population. Sampling error is reduced by selecting a large sample and by using efficient sample design and estimation strategies such as stratification, optimal allocation, and ratio estimation.

With the use of probability sampling methods in the NHSDA, it is possible to develop estimates of sampling error from the survey data. These estimates have been calculated for all prevalence estimates presented in this report using a Taylor series linearization approach that takes into account the effects of the complex NHSDA design features. The sampling errors are used to identify unreliable estimates and to test for the statistical significance of differences between estimates.
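As an illustration of the linearization approach (a textbook delta-method sketch, not the exact NHSDA production formula; the function name and inputs are hypothetical), the approximate standard error of a ratio estimate, such as a weighted count of users divided by a weighted population count, can be written as:

```python
import math

def ratio_se(y_bar, x_bar, var_y, var_x, cov_xy):
    """Taylor-series (delta-method) approximation to the standard
    error of a ratio r = y_bar / x_bar.

    A textbook sketch of the kind of linearization used for survey
    prevalence estimates; the exact NHSDA formulas also reflect the
    complex sample design (stratification, clustering, weighting).
    """
    r = y_bar / x_bar
    # Linearized variance of the ratio about (y_bar, x_bar).
    var_r = (var_y - 2 * r * cov_xy + r**2 * var_x) / x_bar**2
    return math.sqrt(var_r)
```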

Estimates considered unreliable because of unacceptably large sampling error are not shown in this report; they are noted by asterisks (*) in the tables in the appendix. The criterion for suppressing estimates was based on the relative standard error (RSE), defined as the ratio of the standard error to the estimate. The RSE was calculated on the log transformation of the proportion estimate (p). Specifically, rates and the corresponding estimated numbers of users were suppressed if:

RSE[-ln(p)] > 0.175 when p < .5

or RSE[-ln(1-p)] > 0.175 when p ≥ .5.

Estimates were also suppressed if they rounded to zero or 100 percent, which occurs if p < .0005 or p ≥ .9995. Statistical tests of significance have been computed for comparisons of estimates from 1997 with prior years. Results are shown in the appendix 5 tables. As indicated in the footnotes, significant differences are noted by "a" (significant at the .05 level) and "b" (significant at the .01 level). All changes described in this report as increases or decreases were tested and found to be significant at least at the .05 level, unless otherwise indicated.
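The suppression rule above can be sketched as follows. This is an illustration, not SAMHSA's production code; in particular, the delta-method step SE[-ln(p)] ≈ SE(p)/p is my assumption about how the RSE of the log-transformed estimate would be obtained from the standard error of p.

```python
import math

def suppress(p, se_p):
    """Return True if a prevalence estimate should be suppressed.

    p    : estimated proportion (0 < p < 1)
    se_p : standard error of p

    Implements the stated rule: suppress when the RSE of -ln(p)
    (or of -ln(1-p) when p >= .5) exceeds 0.175, or when the
    estimate rounds to 0 or 100 percent.
    """
    if p < 0.0005 or p >= 0.9995:
        return True  # rounds to 0% or 100%
    if p < 0.5:
        # Delta method (assumed): SE[-ln(p)] ~ se_p / p.
        rse = (se_p / p) / (-math.log(p))
    else:
        rse = (se_p / (1 - p)) / (-math.log(1 - p))
    return rse > 0.175
```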

Nonsampling errors such as nonresponse and reporting errors may affect the outcome of significance tests. Also, keep in mind that while a significance level of .05 is used to determine statistical significance in these tables, large differences associated with slightly higher p-values (specifically, those between .05 and .10) may be worth noting along with their p-values. Furthermore, statistically significant differences are not always meaningful, because the magnitude of the difference may be small or because the significance may have occurred simply by chance. In a series of twenty independent tests, one test can be expected to indicate significance merely by chance even if there is no real difference in the populations compared. In making more than one comparison among three or more percentages (e.g., comparing percentages within a table), no attempt has been made to adjust the significance level to account for simultaneous inferences (often referred to as multiple comparisons). Therefore, the probability of falsely rejecting the null hypothesis at least once in a family of k comparisons is higher than the significance level given for individual comparisons (in this report, either .01 or .05).
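The familywise error rate described above follows directly from the binomial reasoning in the text; for twenty independent tests at the .05 level, the chance of at least one false rejection is about 64%. A minimal sketch (function name is mine):

```python
def familywise_error(alpha, k):
    """Probability of at least one false rejection in k independent
    tests, each conducted at significance level alpha, when all
    null hypotheses are true: 1 - (1 - alpha)^k."""
    return 1 - (1 - alpha) ** k

# familywise_error(0.05, 20) is roughly 0.64, which is why
# unadjusted multiple comparisons inflate the chance of at least
# one spurious "significant" difference.
```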

When making comparisons of estimates for different population subgroups from the same data year, the covariance term, which is usually small and positive, has typically been ignored. This results in somewhat conservative tests of hypotheses that will sometimes fail to establish statistical significance when in fact it exists.
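The test described above can be sketched as a standard two-sample z statistic; dropping the covariance term, as the text describes, slightly overstates the variance of the difference and so yields a conservative test. The function name and inputs are illustrative, not the report's own notation:

```python
import math

def z_diff(p1, se1, p2, se2):
    """z statistic for the difference between two subgroup
    prevalence estimates, ignoring the (usually small, positive)
    covariance between them. Omitting a positive covariance
    inflates the estimated variance of (p1 - p2), shrinking |z|
    and making the test conservative."""
    return (p1 - p2) / math.sqrt(se1**2 + se2**2)
```

For example, two subgroup rates of 10% and 7%, each with a standard error of one percentage point, give z ≈ 2.12, significant at the .05 level but not at .01.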

III. Nonsampling Error

Nonsampling errors occur from nonresponse, coding errors, computer processing errors, errors in the sampling frame, reporting errors, and other errors. Nonsampling errors are reduced through data editing, statistical adjustments for nonresponse, and close monitoring and periodic retraining of interviewers.

Although nonsampling errors can often be much larger than sampling errors, measurement of most nonsampling errors is difficult or impossible. However, some indication of the effects of some types of nonsampling errors can be obtained through proxy measures such as response rates and from other research studies.

Of the 81,068 eligible households sampled, 75,136 were successfully screened, for a screening response rate of 92.7%. In these screened households, a total of 31,290 sample persons were selected, and completed interviews were obtained from 24,505 of them, for an interview response rate of 78.3%. Of the sample persons, 3,365 (10.8%) were classified as refusals, 2,198 (7.0%) were not available or never at home, and 1,151 (3.7%) did not participate for various other reasons, such as physical or mental incapacity or a language barrier. The response rate was highest among the 12-17 year old age group (83%). Response rates were also higher among Hispanics (83%) than among blacks (82%) and whites (76%).
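The reported rates can be recomputed from the raw counts given above (a simple verification; the variable names are mine):

```python
# Screening response rate: screened households / eligible households.
screened = 75_136
eligible = 81_068
screening_rate = screened / eligible      # ~0.927, i.e., 92.7%

# Interview response rate: completed interviews / selected persons.
completed = 24_505
selected = 31_290
interview_rate = completed / selected     # ~0.783, i.e., 78.3%
```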

Among survey participants, item response rates were above 98% for most questionnaire items. However, inconsistent responses are common for some items, including the drug use items. Estimates of drug use from the NHSDA are based on responses to multiple questions, so that the maximum amount of information is used in determining whether a respondent is classified as a drug user. Inconsistencies in responses are resolved through a logical editing process that involves some judgment on the part of survey analysts and is a potential source of nonsampling error. A typical occurrence is a respondent who reports that their most recent use of a drug was more than a month ago, but in a later question reports having used it in the past month. (This could occur because the interviewer may have developed greater rapport with the respondent in the later stages of the interview, leading to more openness on the respondent's part.) Such a respondent would be classified as a past month user. For 1997, 22% of the estimate of past month marijuana use and 53% of the estimate of past month cocaine use are based on such cases.

NHSDA estimates are based on self-reports of drug use, and their value depends on respondents' truthfulness and memory. Although many studies have generally established the validity of self-report data and the NHSDA procedures were designed to encourage honesty and recall, some degree of underreporting is assumed. No adjustment to NHSDA data is made to correct for this (Appendix 4 lists a number of references addressing the validity of self-reported drug use data). The methodology used in the NHSDA has been shown to produce more valid results than other self-report methods (e.g., by telephone) (Turner, Lessler, and Gfroerer 1992; Aquilino 1994). However, comparisons of NHSDA data with data from surveys conducted in classrooms suggest that underreporting of drug use by youth in their homes may be substantial (Gfroerer, Wright, and Kopstein 1997).

The incidence estimates discussed in section 9 of this report are based on retrospective reports of age at first drug use by survey respondents interviewed during 1994-97, and may be particularly subject to several biases.

Bias due to differential mortality occurs because some persons who were alive and exposed to the risk of first drug use in the historical periods shown in the tables died before the 1994-1997 NHSDAs were conducted. This bias is probably very small for the estimates shown in this report. Incidence estimates are also affected by memory errors, including recall decay (the tendency to forget events occurring long ago) and forward telescoping (the tendency to report that an event occurred more recently than it actually did). These memory errors would tend to make estimates for earlier years (i.e., the 1960s and 1970s) downwardly biased (because of recall decay) and estimates for later years upwardly biased (because of telescoping). There is also likely to be some underreporting bias due to the social unacceptability of drug use behaviors and respondents' fear of disclosure. This is likely to have the greatest impact on recent estimates, which reflect more recent use and reporting by younger respondents. Finally, for drugs whose use is frequently initiated at age 10 or younger, estimates based on retrospective reports one year later underestimate total incidence because 11-year-old children are not sampled by the NHSDA. Prior analyses showed that alcohol and cigarette (any use) incidence estimates could be significantly affected by this. Therefore, no 1996 estimates were made for these drugs, and 1995 estimates were based only on the 1997 NHSDA.

Overall, these biases are likely to have the greatest effect on the most recent estimates, i.e., 1994-1996, primarily because they reflect recent drug use and because they are heavily based on the reports of adolescents. Thus, the estimates for recent years may be less reliable than estimates for earlier periods. Analyses of estimates based on single years of NHSDA data have been done to attempt to better understand the effects of these biases and to assess the reliability of estimates for recent years. So far, no clear evidence of significant bias has been found.

IV. Estimation of Heavy Drug Use

While the NHSDA collects data on the most severely affected drug users, the survey design is not well suited to estimating the prevalence of heavy drug use. The main limitations precluding more accurate estimates are sample size, coverage, and reliance on self-reports. Because heavy drug use is relatively rare in the general population, the NHSDA captures only a small number of these users, resulting in relatively large sampling error. In addition to this instability resulting from the small sample, underestimation is believed to occur because many heavy drug users may not maintain stable addresses and, even if located, may not be available for an interview. Finally, as with all NHSDA respondents, heavy drug users who participate in the survey may not always report their drug use accurately during the interview.

A new estimation procedure was designed at OAS to produce improved estimates of heavy drug use (Wright, Gfroerer and Epstein 1998). This procedure uses external counts of the number of people in treatment for drug problems (from the National Drug and Alcoholism Treatment Unit Survey) and the number of arrests for non-traffic offenses (from the F.B.I.'s Uniform Crime Reports) to adjust NHSDA data. This ratio estimation procedure provides a partial adjustment that accounts for undercoverage of hard-to-reach populations and also adjusts for underreporting of drug use by survey respondents. However, it does not reduce sampling error.

Applications of this adjustment have resulted in 40-80 percent higher estimates of past month and past year heroin use and 20-40 percent higher estimates of frequent cocaine use.
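A ratio adjustment of this general form can be sketched as below. This is a hedged illustration of the idea, not the procedure from Wright, Gfroerer and Epstein (1998): it assumes the adjustment factor is an external benchmark count (e.g., persons in treatment, or arrests) divided by the survey's own estimate of that same benchmark quantity, and the function name and inputs are hypothetical.

```python
def ratio_adjust(survey_users, survey_benchmark, external_benchmark):
    """Ratio-estimation adjustment (illustrative).

    survey_users       : survey estimate of heavy drug users
    survey_benchmark   : survey estimate of a benchmark count
                         (e.g., persons in drug treatment)
    external_benchmark : external count of the same quantity
                         (e.g., from NDATUS or UCR data)

    Scales the survey estimate by the ratio of the external count
    to the survey's estimate of it, partially correcting for
    undercoverage and underreporting. It does not reduce
    sampling error.
    """
    return survey_users * (external_benchmark / survey_benchmark)
```

For example, if a survey found 100,000 heavy users and 50,000 persons in treatment, while external records show 80,000 in treatment, the adjusted estimate would be 100,000 × 1.6 = 160,000.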


This page was last updated on February 05, 2009.