Although the design of the 2002 to 2004 National Surveys on Drug Use and Health (NSDUHs) is similar to the design of the 1999 through 2001 surveys, there are important methodological differences between the 2002 to 2004 NSDUHs and prior surveys:
These changes improved the quality of the data provided by the survey. There were significant improvements in response rates beginning in January 2002, which had been expected based on an experiment conducted in 2001 (Office of Applied Studies [OAS], 2002d). The initial analysis of this experiment showed that incentives increased response rates, reduced data collection costs, and had no significant effect on prevalence. This result was the basis for the decision to introduce incentives in 2002. However, the results of the 2002 survey, as well as more recent analyses, suggest that the incentive, and possibly the other survey changes, did have an impact on the 2002 estimates. Estimates of rates of substance use, dependence and abuse, and serious psychological distress (SPD) (formerly serious mental illness, or SMI) were significantly higher in 2002 than in 2001. Analyses have shown that many of these "increases" were artifacts of the changes in the survey procedures.
Early results of these analyses were presented to a panel of survey methodology experts convened on September 12, 2002.1 The panel concluded that, because of the survey improvements, the 2002 estimates should not be compared with 2001 and earlier estimates. The panel also concluded that, because of the multiple changes made to the survey simultaneously, it would not be possible to measure the effects of each change separately or to develop a method of "adjusting" pre-2002 data to make them comparable for trend assessment. The panel also recommended that the Substance Abuse and Mental Health Services Administration (SAMHSA) continue its analyses of the 2001 and 2002 data to learn as much as possible about the impacts of each of the methodological improvements. Although it was considered unlikely that these studies could lead to the development of an adjustment method, there was hope that a better understanding of the methods effects would be beneficial to analysts using NSDUH data. In addition, given that there were few examples in the literature in which a monetary incentive was introduced to encourage the reporting of sensitive behaviors in a large nationally representative survey of households, it was important to document its effect on national response rates and prevalence estimates and across important subdomains. The results of some of these analyses were presented in Appendix C of the 2002 NSDUH National Findings report (OAS, 2003). Other studies of the methods effects have been completed since that report was published.
The purpose of this appendix is to summarize all of the studies of the effects of the 2002 methods changes and to discuss the implications of this body of research for analyses of NSDUH data. The focus is mainly on analyses involving 1999 to 2004 trend assessment. Brief discussions of approaches to long-term (1971-2004) assessment and analyses of pooled 1999-2004 and later data also are included.
A retrospective cohort analysis was used to evaluate the reasonableness of changes in the estimates of lifetime use reported in the 2002 survey. Comparisons of the changes in lifetime prevalence with trends based on retrospective reporting (i.e., age at first use) demonstrate that the increases in lifetime substance use rates between 2001 and 2002 could not be due to an increase in new initiates or the slight change caused by the addition of a new cohort of youths who were 12 years old. For example, retrospective data from 2002 show that the net changes due to new users and cohort shifts between 2001 and 2002 were 2.2 million for marijuana and 1.0 million for cocaine. However, the changes in lifetime prevalence between the 2001 and 2002 surveys were 10.5 million for marijuana and 5.8 million for cocaine.2
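The arithmetic behind this comparison can be sketched directly; the figures below are the millions of lifetime users cited above.

```python
# Sketch of the retrospective cohort comparison, in millions of lifetime users.
def unexplained_change(observed, retrospective):
    """Portion of the 2001-to-2002 change in lifetime users not attributable
    to new initiates or cohort replacement."""
    return observed - retrospective

# Figures cited in the text: observed change in lifetime prevalence vs. the
# change implied by retrospective (age-at-first-use) data.
gap_marijuana = unexplained_change(10.5, 2.2)  # 8.3 million unexplained
gap_cocaine = unexplained_change(5.8, 1.0)     # 4.8 million unexplained
```

The sizable gaps are what rule out new initiation and cohort shifts as the explanation for the 2001-to-2002 jump.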
A response rate pattern analysis was used to assess the impact of the methodological changes on the response rates of different demographic subpopulations. Concurrent with the upward shift in prevalence in 2002, there were substantial increases in interview response rates across all geographic and demographic groups. One group that experienced only a small increase in its response rate was the population aged 50 or older.
A response rate impact analysis was used to assess the potential levels of substance use prevalence under different assumed scenarios about the behavior of the respondents "added" as a result of the higher response rates under the new methodological conditions. An analysis of the connection between the response rate increases and the prevalence increases showed that the "additional" respondents in 2002 did not solely account for the increases in prevalence, indicating that the changes in methods did affect the level of reporting of some behaviors among survey respondents. This finding was strongest in the 50 or older age group, where the increase in the response rate was small but the increase in prevalence was large.
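The scenario logic can be illustrated with a simple mixing model; all response rates and prevalences below are hypothetical, not NSDUH estimates.

```python
# Bounding sketch: what prevalence would look like under extreme assumptions
# about the respondents "added" by the response-rate increase.
def implied_prevalence(base_rate, old_rr, new_rr, added_rate):
    """Mix the base prevalence with an assumed use rate among the respondents
    added by raising the response rate from old_rr to new_rr."""
    added_share = (new_rr - old_rr) / new_rr
    return (1 - added_share) * base_rate + added_share * added_rate

# Hypothetical: response rate rises from 69% to 77%, base prevalence 10%.
lower = implied_prevalence(0.10, 0.69, 0.77, 0.0)  # added respondents: no use
upper = implied_prevalence(0.10, 0.69, 0.77, 1.0)  # added respondents: all use
```

When the observed prevalence increase exceeds even the upper bound implied by the added respondents, the remainder must come from changed reporting among respondents who would have participated anyway.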
An analysis of the impact of new census data was used to determine whether any part of the increases in substance use observed in 2002 was due to the transition from 1990 census data to 2000 census data for weight calculations. The effect of the switch from the 1990 to the 2000 census-based weights was very small for NSDUH estimates of rates, but the effect was somewhat larger for some estimates of the number of persons using substances. Unlike the other changes implemented in 2002, the impact of this change on the results can be estimated precisely, subject to the sampling error of the data. Thus, the use of new census data does not by itself adversely affect the ability to measure trends.
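The reweighting effect can be measured exactly by producing the same estimate under both weight sets. A minimal sketch with illustrative use flags and weights:

```python
# Compute the same prevalence estimate under 1990- and 2000-census-based
# weights; the difference is attributable entirely to the reweighting.
# All data below are illustrative.
def weighted_rate(flags, weights):
    """Weighted prevalence: total weight of users divided by total weight."""
    return sum(w for f, w in zip(flags, weights) if f) / sum(weights)

use = [1, 0, 1, 0, 0]                       # hypothetical use indicators
w_1990 = [100.0, 120.0, 90.0, 110.0, 80.0]  # hypothetical 1990-based weights
w_2000 = [105.0, 118.0, 95.0, 112.0, 82.0]  # hypothetical 2000-based weights

census_effect = weighted_rate(use, w_2000) - weighted_rate(use, w_1990)
```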
A series of model-based analyses of protocol changes, the name change, and the use of incentives was used to better understand how much each of the methodological changes might influence the comparisons of 2001 and 2002 data. One analysis attempted to identify and quantify the impact on lifetime prevalence rates of each of the separate NSDUH methodological improvements. Results of that analysis (reported in Appendix C of the 2002 National Findings) indicated that the impact of each of the interviewer monitoring and training interventions tended to be small and statistically nonsignificant for most measures and subdomains (OAS, 2003).
Another model-based analysis focused on the incentive effect and used data from the 2001 incentive experiment. The analysis controlled for a number of demographic variables and looked again at lifetime use of a number of substances by detailed age groups. Some of those results were statistically significant; however, most were not. The primary finding from that analysis is that the pattern of the incentive effect differs significantly from one age group to the next, and often in opposite directions, so that the incentive effect on prevalence estimates for the 12 or older age group as a whole tends to be quite small. A second analysis focusing on the incentive effect used the difference between quarter 4 of 2001 and quarter 1 of 2002 as a measure of the incentive effect. For the age 12 or older population, the pattern of the incentive effect appeared more comparable with that of the incentive experiment. Because the 2002 effects are larger than those from the 2001 experiment in the 12 or older age group, those results also may reflect other changes that occurred at the same time, such as the name change, further training of field staff, or seasonal or secular trends.
Since 2002, two types of analyses have been conducted that extend the analyses described above: (a) more in-depth analysis of the 2001 incentive experiment and (b) further analysis of the 2001 field interventions.
Initial analyses of the 2001 incentive experiment had indicated little impact of the incentive on screening response rates, a significant impact on individual interview response rates, and no statistically significant differences for any of the five substance use measures studied when comparing the non-incentive cases with those receiving either a $20 or $40 incentive. Two extended analyses of the incentive experiment subsequently were conducted.
In the first analysis, an investigation was carried out to determine whether gains in response rate and reduced data collection costs associated with monetary incentives varied across subgroups in the population (Eyerman, Bowman, Butler, & Wright, in press). Research has demonstrated that cash incentives paid to respondents in sample surveys can increase the level of cooperation, reduce nonresponse bias, and lower data collection costs. However, recent research has shown that gains in response rate and reduced data collection costs associated with monetary incentives may vary across subgroups in the population. Consequently, monetary incentives may result in inconsistent reductions in nonresponse error and systematic changes in sample composition. Findings of this analysis indicate that the incentive had a positive impact on cooperation. However, it did not eliminate the preexisting differences in cooperation among population subgroups.
In the second analysis, an extended investigation of whether the monetary incentive had an effect on reported drug use rates was conducted (Wright, Bowman, Butler, & Eyerman, in press). Sampling weights were adjusted to account for the differential response rates between the incentive and non-incentive cases. Then logistic regression models of substance use were fitted as a continuous function of the incentive level ($0, $20, and $40) while controlling on other variables that might mask the relationship. The incentive had a statistically significant positive effect on the reported past year use of marijuana (p = 0.027), a marginally significant positive effect on past month use of marijuana (p = 0.056), but no effect on lifetime use of marijuana. The incentive also had a statistically significant negative effect on the reported past month use of cocaine (p = 0.033). Offering a monetary incentive to respond can result in different estimated prevalence rates for the incentive and non-incentive groups. The extent of the difference may be a function of the perceived level of social disapproval of the substance and the reference period (past month, past year, or any past use). Some of the difference appears to be due to differences in substance use rates between the group who had traditionally reported without an incentive and the new group attracted by the incentive. Other differences appear to result from more honest reporting among the traditional respondents.
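A bare-bones sketch of the modeling idea, not the authors' code: reported use regressed on incentive level treated as continuous, here rescaled to units of $40 so that 0, 0.5, and 1.0 correspond to $0, $20, and $40. The cell counts are illustrative and chosen to show a positive effect; the actual analyses used weighted models with many covariates.

```python
import math

def fit_logistic(xs, ys, lr=0.3, steps=5000):
    """Fit P(y=1) = 1/(1+exp(-(b0 + b1*x))) by gradient ascent on the
    mean log-likelihood (tiny two-parameter problem, so this suffices)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Illustrative incentive arms: 100 respondents each, with 20%, 30%, and 40%
# reporting use at the $0, $20, and $40 levels.
xs, ys = [], []
for x, users in [(0.0, 20), (0.5, 30), (1.0, 40)]:
    xs += [x] * 100
    ys += [1] * users + [0] * (100 - users)

b0, b1 = fit_logistic(xs, ys)  # b1 > 0: higher incentive, higher reported use
```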
During the 2001 and 2002 NSDUHs, six new field interventions were introduced as follows:
Field interventions 4 to 6 were introduced at the same time as the $30 incentive payment and the survey name change; hence, it is not possible to measure these effects separately. As a consequence, field interventions 1 to 3, which occurred at different times in the 2001 survey, were the only interventions whose effects could be measured separately.
The results of the initial analysis of the field interventions showed that for most models, each of the three interventions had effect sizes that were not statistically significant. However, other analytic approaches had not yet been examined; therefore, the goal was to explore these other possibilities. In the extended analyses, more predictor variables were considered, some existing predictor variables were redefined (e.g., a polynomial spline model was fitted to the continuous age variable), and, where possible, more than one intervention was analyzed simultaneously in the same model. In addition, because an incentive experiment also was conducted in selected field interviewer regions during the first two quarters of 2001, any data associated with this experiment first were excluded to remove any potentially confounding effects, and appropriate new weights then were created for the remaining data so that national estimates could be derived.
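The spline redefinition of age, for example, amounts to replacing one continuous predictor with a set of basis columns. A sketch using a linear truncated-power basis with assumed knots (the report does not specify the actual knot locations or degree):

```python
# Illustrative redefinition of a continuous age predictor as a linear spline
# (truncated power basis). Knot locations here are assumptions, not the
# report's specification.
def spline_basis(age, knots=(18, 26, 35, 50)):
    """Return [age, (age - k)_+ for each knot]: columns for a linear spline."""
    return [float(age)] + [float(max(0.0, age - k)) for k in knots]

row = spline_basis(30)  # -> [30.0, 12.0, 4.0, 0.0, 0.0]
```

Each hinge term lets the fitted slope change at its knot, so the model can capture age patterns a single linear term would miss.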
Results from the extended analyses were similar to those in the initial analysis; that is, the individual interventions generally were not statistically significant and showed little evidence that the field interventions affected the reporting of lifetime substance use (Odom, Aldworth, & Wright, 2005). In addition, a combined model was developed to assess whether combined effects existed even if the individual effects were small and statistically nonsignificant. In general, the combined model did not show a strong pattern of evidence of an increase apart from the field interventions, and the direction of the change was not consistent among the field interventions. There were serious limitations to this combined analysis, primarily due to the timing of the three interventions. Not all interaction effects could be included, and for some intervention variables data from a greater time interval were used, thus possibly introducing seasonal and other confounding effects. The effect of the field interventions on response bias, whether analyzed individually or collectively, appears to be small, thus corroborating the initial analyses. A summary of the results from the logistic regression analyses for individual and combined models of lifetime use of different substances is given in Table C.1, and predicted prevalences and standard errors based on these models are given in Table C.2.
Although a great deal has been learned from the analyses described above, the results do not point to any reasonable method of quantifying the methods effects well enough to specify an adjustment procedure for the trends between 2001 and 2002. The observed differences between NSDUH estimates for 2001 and 2002 are believed to reflect both the underlying trend between those years and various methods effects, primarily those due to the survey name change and the use of the $30 incentive, both introduced on January 1, 2002. Because no random subsamples were fielded with and without the two main interventions introduced in 2002 to estimate the methods effects separately, there is no direct way to separate a methods effect from a trend effect.
Several other approaches to "connecting" 2002 and later data with the 2001 and earlier data for trend assessment have been suggested and considered by SAMHSA. One alternative approach that has been suggested is to use indirect methods, based on specified assumptions. For example, it could be assumed that there is no trend over some short period, such as between the fourth quarter of 2001 and the first quarter of 2002. If this assumption is true, the methods effect can be estimated by comparing estimates from quarter 1 of 2002 with estimates from quarter 4 of 2001. The trend then can be estimated as the sum of two components: (a) the change occurring between the estimate based on the combined quarters 1, 2, and 3 of 2001 and the estimate for quarter 4 of 2001, plus (b) the change occurring between the estimate for quarter 1 of 2002 and the estimate based on the combined quarters 2, 3, and 4 of 2002. Because the annual NSDUHs are fielded as four quarterly surveys, each based on a probability subsample of the annual sample, the method is generally feasible.
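Under the stated no-trend assumption, the decomposition can be written out directly; the quarterly prevalence figures below are illustrative placeholders, not NSDUH estimates.

```python
# Sketch of the two-component decomposition, assuming no trend between
# 2001 Q4 and 2002 Q1.
def decompose(q123_2001, q4_2001, q1_2002, q234_2002):
    """Split the 2001-to-2002 change into a trend component and a methods
    effect, given the no-trend assumption across the year boundary."""
    methods_effect = q1_2002 - q4_2001
    trend = (q4_2001 - q123_2001) + (q234_2002 - q1_2002)
    return trend, methods_effect

trend, effect = decompose(0.110, 0.112, 0.125, 0.128)
# The two components sum to the overall Q1-3 2001 vs. Q2-4 2002 change.
```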
Another type of approach is one that assumes a linear trend over some period of two or more surveys before and after January 1, 2002. If annual surveys are employed for this purpose, a linear trend over the years 2000, 2001, 2002, and 2003 might be assumed. Annual substance use measures then could be modeled as a function of year and an intervention effect occurring between 2001 and 2002. Because quarterly surveys are employed by NSDUH, a version of this method also could be developed using quarterly survey data and perhaps a shorter period of assumed linear trend.
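A sketch of this model fit by ordinary least squares on four annual estimates; the rates are illustrative, constructed to contain a 0.003-per-year slope and a 0.015 jump at the 2002 methods change.

```python
# Linear-trend-plus-intervention model: rate = a + b*(year-2000) + c*post2001.
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

years = [2000, 2001, 2002, 2003]
rates = [0.100, 0.103, 0.121, 0.124]   # illustrative annual prevalence rates

# Design matrix: intercept, years since 2000, post-2001 intervention dummy.
X = [[1.0, y - 2000, 1.0 if y >= 2002 else 0.0] for y in years]
XtX = [[sum(row[r] * row[c] for row in X) for c in range(3)] for r in range(3)]
Xty = [sum(row[r] * rate for row, rate in zip(X, rates)) for r in range(3)]
intercept, slope, jump = solve3(XtX, Xty)  # separates trend from intervention
```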
Quarterly estimates used to apply either of these methods may reflect seasonality as well as annual trends. Seasonal patterns in substance use have been shown to be present for initiation of some drugs (Gfroerer, Wu, & Penne, 2002; OAS, 2004c).
Another indirect approach to trend measurement with NSDUH is to make use of external data from other surveys to determine the "true" trend between 2001 and 2002, providing a crude method of quantifying the overall NSDUH methods effect (i.e., by subtraction). This could be feasible for measures that are covered by other surveys using similar definitions and survey methods and having sufficient sample sizes.
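The subtraction itself is straightforward; the difficulty lies in finding external data that meet these conditions. With illustrative figures:

```python
# Crude benchmark sketch: the NSDUH change minus an externally measured
# "true" trend leaves an estimate of the overall methods effect.
# All figures below are illustrative.
def methods_effect(nsduh_2001, nsduh_2002, external_trend):
    return (nsduh_2002 - nsduh_2001) - external_trend

effect = methods_effect(0.110, 0.128, 0.004)  # 0.014 attributed to methods
```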
A summary of all of the results of the 2002 NSDUH methods effects analyses was presented to a second panel of consultants that included NSDUH data users, researchers, and survey methods experts on April 28, 2005.3 The panel concluded that there was no possibility of developing a valid direct adjustment method for the NSDUH data, and that SAMHSA should not compare 2002 and later estimates with 2001 and earlier estimates for trend assessment. The panel suggested that SAMHSA make this clear recommendation to other users of NSDUH data. The panel also discussed the use of indirect methods described above and recommended that SAMHSA should not use these methods to measure trends because the assumptions required to apply these methods are arbitrary and cannot be verified for the same reasons that direct evaluation of the separate effects of method and trend is not possible. The panel did support the use of external data to better understand how the methods changes affected NSDUH trends from 2001 to 2002, but only if the external data were valid and collected using methods similar to NSDUH. Finally, the panel recognized the value and uniqueness of the historical NSDUH data and suggested that long-term trends could be presented with sufficient caveats, such as showing breaks in the trends. This approach is used in the trend discussion in Chapter 9 of this report and is discussed further in the next section.
The first national household surveys collecting data on drug use were conducted in 1971 and 1972 under the auspices of the National Commission on Marihuana and Drug Abuse. Similar surveys, which eventually became known as the National Household Survey on Drug Abuse, subsequently were conducted every 2 or 3 years during the 1970s and 1980s by the National Institute on Drug Abuse (NIDA). Annual data collection began in 1990, and sponsorship of the survey was transferred to SAMHSA in 1992. Throughout its history, the survey has undergone a number of changes to its methodology and its questionnaire. Some of these changes affected comparability of estimates over time. For analysts interested in studying long-term trends or comparing estimates from recent years with estimates 20 or 30 years ago, it is important to be aware of the survey changes that have an impact on comparability. A complete assessment of consequential NSDUH methods and questionnaire changes is beyond the scope of this report, but this appendix gives a brief summary of the most important changes that have occurred since 1971. More detailed documentation of changes is provided in various NSDUH reports published by NIDA and SAMHSA (e.g., Gfroerer, Eyerman, & Chromy, 2002; Kennet & Gfroerer, 2005; OAS, 1996a, 2001a; Turner, Lessler, & Gfroerer, 1992). In addition to the changes in 2002 discussed in Section C.1, there were important survey design changes in 1994 and 1999. Thus, the major methods changes affecting data comparability are as follows:
Based on a statistical model that used data from a supplemental sample fielded in 1994, an adjustment procedure was developed and applied to 1979-1993 NSDUH data to produce estimates that are comparable with the 1994-1998 data. Data from the 1971-1977 surveys (no survey was done in 1978) were collected using the same basic methodology as in 1979-1998, although the editing and imputation methods used were different from those used on 1979 and later data. However, it is possible to employ a similar adjustment to estimates published in the 1971-1977 reports to provide estimates that are comparable with the 1979-1998 data, accounting for the 1994 methodology changes. This adjustment is simply the ratio of the adjusted 1979 estimate (which is adjusted for the 1994 methodological changes) and the unadjusted 1979 estimate from the 1979 NHSDA Main Findings report (NIDA, 1980). The estimates published in the 1979 report employed editing and imputation procedures that were similar to the methods used for the 1971-1977 estimates. Estimates of past month marijuana use and past year cocaine use for 1971-1977 were computed in this manner for presentation in Chapter 9 of this 2004 NSDUH report. Data for the entire 1971-2004 period are presented, with breaks in trend lines at 1999 and 2002, but continuous lines are used for 1971-1998, 1999-2001, and 2002-2004 to indicate comparable data during each of these three intervals. It should be pointed out that this kind of analysis may be limited for some measures due to questionnaire changes that were made on specific items at specific points in time. For example, the 1985 and later surveys used a different definition and questions on the nonmedical use of prescription-type drugs than had been used for the 1982 and earlier surveys. The data collection mode changed at different times for several substances. 
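The ratio adjustment described above amounts to a single multiplication; the figures below are illustrative, not the published NHSDA values.

```python
# Sketch of the ratio adjustment applied to 1971-1977 published estimates.
def adjust_early_estimate(published, adjusted_1979, unadjusted_1979):
    """Carry the 1994 methods adjustment back by scaling a published
    1971-1977 estimate by the adjusted/unadjusted ratio for 1979."""
    return published * (adjusted_1979 / unadjusted_1979)

# e.g., a published 1977 rate of 12.0% with hypothetical 1979 adjusted and
# unadjusted rates of 14.0% and 13.0%:
adjusted_1977 = adjust_early_estimate(12.0, 14.0, 13.0)
```

This works because the 1979 estimates were produced under both sets of editing and imputation procedures, so their ratio captures the effect of the change.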
Questions were shifted from interviewer-administered to self-administered in 1979 for alcohol, in 1982 for prescription-type drugs, and in 1994 for tobacco, resulting in substantial increases in estimated prevalence of these substances in each of those years.
Most of the above discussion has dealt with the problems involved in estimating trends in substance use based on the NSDUH data, given all of the methodological changes that have taken place. However, many data users are interested in other statistics for which the methodological effects may be assumed to be relatively smaller. One instance of this is the comparison of domains. It may happen that the sample size for a specific analysis is quite small for a given year, but adequate when multiple years are combined. If an analyst requires fairly current data, he or she may want to combine data from 2001 and earlier with data from 2002 and later in order to improve the precision of estimates for some small domain. The suitability of this approach depends on a number of factors. Users may find it helpful to verify that the estimate based on NSDUH data prior to 2002 and the one based on 2002 and later data are "reasonably" similar for the subdomain of interest prior to combining data across those years. Typically, any differences can be tested for statistical significance given the appropriate software. Whether or not the differences are tested for statistical significance, the size of the difference can be the deciding factor. Sometimes, differences are statistically significant because of a large sample size, but the differences are relatively small. Sometimes, samples are too small to determine statistical significance, but the estimates themselves are relatively similar. In this case, combining data across years will not affect the results unduly.
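The suggested pre-pooling check could be sketched as a simple two-proportion test. Note that this ignores NSDUH's complex sample design, under which a design-based variance estimate would be required; the counts are illustrative.

```python
import math

# Compare the pre-2002 and post-2002 subdomain estimates before pooling.
def two_prop_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical subdomain: 180/1,500 users pre-2002 vs. 200/1,500 post-2002.
z = two_prop_z(180, 1500, 200, 1500)
similar_enough = abs(z) < 1.96  # not significantly different at the 5% level
```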
This suggestion also can apply to estimating relationships between one or more variables based on a model. For example, if one is estimating the relationship between a risk factor and past month use of marijuana and desires to combine pre- and post-2002 data, one may want to first make estimates separately for those two periods and compare them as described above.
Analysts interested in pooling 1999-2001 and 2002-2004 NSDUH data are encouraged to consider the potential impact of the methodological change and incorporate analytic approaches, such as those described above, into their studies to account for methods effects. Reports of results from these kinds of studies always should acknowledge the methodological changes and include a discussion of the steps taken to account for them.
Table C.1 Logistic regression results for individual and combined models of lifetime use of different substances

                    Marijuana       Cocaine         Any Illicit Drug  Alcohol         Cigarettes
Model / Variable    Beta  P Value   Beta  P Value   Beta  P Value     Beta  P Value   Beta  P Value
Bef, Before (RC)    0.00            0.00            0.00              0.00            0.00
Aft, Never (RC)     0.00            0.00            0.00              0.00            0.00
Aft, Before (RC)    0.00            0.00            0.00              0.00            0.00
Aft, After (RC)     0.00            0.00            0.00              0.00            0.00

RC: Reference class; NA: Not available.
Table C.2 Predicted prevalences and standard errors based on the individual and combined models

                    Marijuana   Cocaine     Any Illicit Drug  Alcohol     Cigarettes
Model / Variable    PM    SE    PM    SE    PM    SE          PM    SE    PM    SE

PM: Predicted marginal; SE: standard error of predicted marginal; NA: Not available.