Abstract

Does participating in a longitudinal survey affect respondents’ answers to subsequent questions about their labor force characteristics? In this article, we investigate the magnitude of panel conditioning or time-in-survey biases for key labor force questions in the monthly Current Population Survey (CPS). Using linked CPS records for household heads first interviewed between January 2007 and June 2010, our analyses are based on strategic within-person comparisons across survey months and between-person comparisons across CPS rotation groups. We find considerable evidence for panel conditioning effects in the CPS. Panel conditioning downwardly biases the CPS-based unemployment rate, mainly by leading people to remove themselves from its denominator. Across surveys, CPS respondents (claim to) leave the labor force in greater numbers than otherwise equivalent respondents who are participating in the CPS for the first time. The results cannot be attributed to panel attrition or mode effects. We discuss implications for CPS-based research and policy as well as for survey methodology more broadly.

Introduction

Most people assume that survey respondents’ attitudes and behaviors are not altered by the act of measuring them. However, researchers in political science (e.g., Greenwald et al. 1987), consumer marketing (e.g., Spangenberg et al. 2008), public health (e.g., Battaglia et al. 1996), sociology (Das et al. 2011; Torche et al. 2012), and other fields have regularly demonstrated that this assumption is not always warranted. At least in some instances, the complex cognitive and social processes involved in answering survey questions can change respondents’ actual attitudes and behaviors. In other circumstances, answering questions on an initial survey can affect the accuracy of respondents’ answers to questions on subsequent surveys. In the former case, a data user would see changes in attitudes or behaviors that occurred only because respondents were being observed. In the latter case, a data user would observe changes when no change has actually occurred.

Methodologists have long been concerned with “testing” or “reactivity” biases in research on humans (e.g., Campbell and Stanley 1966; Landsberger 1958). Researchers in several disciplines have also demonstrated “panel conditioning” or “time-in-survey” effects in longitudinal surveys, whereby the act of responding to questions in a baseline survey “conditions” people’s answers to questions on subsequent surveys. However, evidence for panel conditioning has mostly come from smaller-scale studies of highly selective populations in the context of marketing, political opinion, or cognitive psychological research. We know very little about the nature or magnitude of panel conditioning biases in the sorts of large-scale longitudinal surveys that are integral to modern demographic, social, behavioral, and policy research.

In this article, we investigate the magnitude of panel conditioning effects on labor force items in the U.S. Bureau of Labor Statistics’ Current Population Survey (CPS), a foundational data resource for research and policy-making in demography, economics, public health, and beyond.1 Although our immediate objective is to understand the extent to which panel conditioning affects the quality of CPS labor force data, we also intend our work to provide more broadly useful information about the nature, pervasiveness, and severity of this methodological issue for all large-scale longitudinal surveys.

Panel Conditioning in the Current Population Survey

The Bureau of Labor Statistics has long warned that “time-in-sample” effects may influence CPS-based estimates of unemployment and labor force participation rates. As described in more detail below, the CPS uses a rotating panel group design, in which new participants are introduced into the sample each month. In any calendar month, then, respondents may be participating in the CPS for the first time or they may have previously participated as many as seven times. A number of observers have noted that unemployment rates, in particular, are considerably higher among respondents who are participating for the first time than among experienced CPS respondents (e.g., Bailar 1975, 1989; Hansen et al. 1955; Shack-Marquez 1986; Shockey 1988; Solon 1986; Williams and Mallows 1970).

Indeed, the issue has made its way into documentation about the design of the CPS (e.g., U.S. Bureau of Labor Statistics 2006:16–17). Table 16-8 from that documentation describes a “month-in-sample bias index” that equals “the ratio of the [unemployment rate] estimate based on a particular month-in-sample group to the average estimate from all eight month-in-sample groups combined, multiplied by 100” (U.S. Bureau of Labor Statistics 2006:16-7). As shown in Table 1, across a 12-month span in 2003 and 2004, the unemployment rate for CPS respondents in their first month in sample was 6.3 % higher than for CPS respondents as a whole. This same basic empirical finding has been noted since the CPS first implemented the rotating panel group design in 1954 (Hansen et al. 1955).
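To illustrate how the index is computed, the following Python sketch applies the quoted formula to a set of hypothetical month-in-sample unemployment rates; the numbers are made-up values for illustration, not the published 2003-2004 figures.

```python
# Illustrative application of the month-in-sample bias index quoted above:
# 100 * (unemployment rate for one MIS group) / (average rate across all
# eight MIS groups). The rates below are hypothetical.
mis_rates = [6.4, 6.1, 6.0, 5.9, 6.0, 5.9, 5.8, 5.9]  # MIS 1 through MIS 8, in percent

average_rate = sum(mis_rates) / len(mis_rates)
for mis, rate in enumerate(mis_rates, start=1):
    index = 100 * rate / average_rate
    print(f"MIS {mis}: bias index = {index:.1f}")
# An index above 100 for MIS 1 mirrors the pattern summarized in Table 1.
```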

Few people would argue that CPS respondents are actually more likely to be unemployed the first time that they participate in the CPS and then less likely to be unemployed thereafter. Most observers interpret evidence like that presented in Table 1 to mean that the quality of CPS respondents’ reports of their labor force participation and employment statuses differs across survey months. Over time, respondents’ willingness or ability to report their labor force status accurately may change; the result would be the appearance of changes in labor force status when none has actually occurred. A second plausible explanation is that these empirical results are simply due to differential panel attrition: if unemployed people are more likely to attrite from the CPS, and if available sampling weights do not fully account for this pattern, then it would not be surprising to observe the patterns noted in Table 1. Yet another plausible explanation is that the empirical pattern noted in Table 1 is due to mode effects. CPS respondents are usually interviewed in person in their first month of participation; they are then typically interviewed by telephone in subsequent months.

We examine whether the empirical patterns noted in Table 1 can be attributed to time-in-sample or panel conditioning biases, or whether these patterns are an artifact of differential attrition from the CPS panel or are primarily driven by mode effects across survey months. These questions are interesting and important on two fronts. First, the CPS is the main source of data on America’s labor force and unemployment circumstances. Understanding the nature of this (or any other) source of systematic bias in CPS estimates of core social and economic conditions, such as the unemployment rate, is inherently important. Second, as we explain later, this inherently important empirical example can yield general insight into the consequences of panel conditioning for data from any number of widely used longitudinal social science surveys. Broader theoretical and methodological lessons can be learned from research on possible panel conditioning biases in CPS unemployment rates.

Background and Motivation

Prior Research on Panel Conditioning

Researchers in several disciplines have known for some time that panel conditioning biases exist.2 Political scientists and others, for example, have frequently concluded that participation in political opinion polls increases voter turnout in the United States and elsewhere (Anderson et al. 1988; Bartels 1999; Clausen 1968; Granberg and Holmberg 1992; Greenwald et al. 1987; Kraut and McConahay 1973; Simmons et al. 1993; Traugott and Katosh 1979; Voogt and Van Kempen 2002; Yalch 1976). Likewise, cognitive psychologists and consumer marketing researchers have repeatedly shown that measuring people’s intended or forecasted behaviors has a direct bearing on their actual behaviors (e.g., Borle et al. 2007; Chandon et al. 2005; Dholakia and Morwitz 2002; Feldman and Lynch 1988; Fitzsimons and Williams 2000; Janiszewski and Chandon 2007; Morwitz 2005; Sherman 1980; Spangenberg et al. 2003; Williams et al. 2004).3

Although these studies suggest that panel conditioning effects are certainly possible within the context of longitudinal surveys like the CPS, they also have important limitations. Researchers in consumer marketing and cognitive psychology, for instance, often employ strong experimental designs to estimate the impact of panel conditioning (e.g., Borle et al. 2007; Bridge et al. 1977; De Amici et al. 2000; Godin et al. 2008; O'Sullivan et al. 2004; Williams et al. 2006; Yalch 1976), but the broader generalizability of their findings is often unclear. The majority of these analyses are carried out among students in college classes, customers of particular businesses, or voters in specific precincts, and they generally focus on a narrow range of substantive topics (e.g., grocery purchases or blood donation).

Other scholars—including most of those focusing on unemployment rates in the CPS and in the Survey of Income and Program Participation (SIPP; McCormick et al. 1992; Pennell and Lepkowski 1992)—have proceeded by comparing survey responses from members of a longitudinal panel with those from members of an independent cross-sectional sample drawn from the same population (e.g., Das et al. 2007). Details of the problems with this design are laid out elsewhere (e.g., Holt 1989; Sturgis et al. 2009; Williams and Mallows 1970). Most importantly, such a design potentially confounds biases from panel conditioning with those from panel attrition. Whereas the cross-sectional sample may be representative of some population, the panel sample may have suffered from nonrandom attrition over time.

Why Panel Conditioning Effects Might Arise in the CPS

When does survey participation change respondents’ actual attitudes and behaviors? And when does it change merely the quality of their reports of those attitudes and behaviors? Elsewhere, we have developed seven theoretically motivated propositions about the circumstances in which panel conditioning effects are most likely to arise (Warren and Halpern-Manners forthcoming).4 These propositions are grounded in theoretical perspectives on the cognitive processes that underlie attitude formation and change, decision making, and the relationship between attitudes and behaviors (Fazio 1989; Fazio et al. 1986; Feldman and Lynch 1988; Millar and Tesser 1986; Tesser 1978). In short, responding to a survey question is a cognitively and socially complex interactive process that may or may not leave the respondent unchanged and equally willing and able to provide accurate survey responses on subsequent surveys. Three of these theoretically informed propositions, in particular, suggest that panel conditioning may arise in the context of CPS labor force items.

First, respondents’ attributes may (at least appear to) change over time when survey questions require them to provide socially nonnormative or stigmatized responses. Survey questions can force respondents to confront the reality that their attitudes, behaviors, or statuses conflict with what their social peers regard as normative or appropriate (Fitzsimons and Moore 2008; Levav and Fitzsimons 2006; Spangenberg et al. 2008; Toh et al. 2006; Williams et al. 2006). Some respondents may react by bringing their actual attitudes or behaviors into closer conformity with social norms. Others may simply avoid cognitive dissonance and the embarrassment associated with offering socially nonnormative or stigmatized responses by bringing their reports of their characteristics—and not the attitudes, behaviors, or statuses themselves—into closer conformity with what they perceive as socially desirable.

In the context of the CPS, for example, respondents may find it embarrassing, awkward, or unpleasant to report in an interview that they are unemployed. These reactions may encourage unemployed respondents to find work or to simply misreport their employment status on subsequent surveys.5 Either way, apparent declines in levels of unemployment across surveys represent a form of panel conditioning (Williams et al. 2006). In general, these biases may arise in the context of any longitudinal survey questions for which there are clear socially nonnormative or stigmatized responses.

Second, respondents’ attributes may appear to change across survey waves as they attempt to manipulate a survey instrument in order to minimize their burden. Respondents sometimes find surveys to be tedious, cognitively demanding, and/or undesirably lengthy. As a result, longitudinal survey respondents may learn how to direct or manipulate the survey experience in such a way that minimizes its length and thus their burden (Bailar 1989; Duan et al. 2007; Hernandez et al. 1999; Jensen et al. 1999; Kalton and Citro 2000; Kessler et al. 1988; Lucas et al. 1999; Mathiowetz and Lair 1994; Meurs et al. 1989; Nancarrow and Cartwright 2007; Wang et al. 2000). If true, this dynamic may lead to the false impression that respondents’ attributes have changed over time. For example, CPS respondents may learn in one survey wave that they are asked many additional questions about each job that they hold. As a result, and in order to reduce the duration of follow-up surveys, some respondents may subsequently report holding fewer jobs. In general, this form of panel conditioning may arise any time that a longitudinal survey allows response options that subjects believe may affect the length of subsequent surveys.

Third, panel conditioning may be more likely when survey waves occur more closely together in time. The longer the interval between surveys, the more that intervening life events, subject maturation and change, historical events, forgetfulness, and other factors may overwhelm, counteract, or mute the effects of answering baseline questions on respondents’ answers to follow-up questions. As we have reviewed elsewhere (Warren and Halpern-Manners forthcoming), panel conditioning effects are frequently observed when baseline and follow-up surveys are separated by one month or less (e.g., Bailar 1989; De Amici et al. 2000; Fitzsimons et al. 2007; Levav and Fitzsimons 2006), and they are less frequently observed when surveys are separated by longer periods of time. Although many CPS questions are asked annually, other core items (including labor force items) are asked every month. We suspect that panel conditioning may be especially likely to occur in the context of CPS items that appear as a part of the basic monthly survey.6

Motivation for Our Empirical Analyses

The CPS is a longitudinal survey in which respondents are interviewed eight times across 16 months. The CPS labor force questions solicit responses that are potentially stigmatizing or nonnormative. These same labor force questions include response options that respondents may perceive to affect the length of follow-up surveys.7 For these reasons, and given the empirical literature and the preceding discussion about the circumstances likely to give rise to conditioning biases, it seems plausible that panel conditioning biases important labor force items in the monthly CPS.

With just a few exceptions, prior investigations into the nature and magnitude of panel conditioning in CPS labor force items have begun from the premise that the kinds of results shown in Table 1 do indeed represent panel conditioning effects. Hansen et al. (1955) and Bailar (1975) speculated that panel attrition may partially account for the patterns observed in Table 1, but they nonetheless treated these patterns as indicative of time-in-sample biases. Solon (1986) and Bailar (1989) discussed whether panel conditioning leads to additive or multiplicative biases across CPS months and, consequently, whether estimates of trends in CPS unemployment rates are as biased as monthly estimates of levels of unemployment. Shockey (1988) applied multigroup latent class models in an effort to define panel conditioning as a special case of misclassification or response error. None of these authors seriously entertained the hypothesis that what they identify as time-in-sample biases may simply be artifacts of panel attrition and/or mode effects. In contrast, Williams and Mallows (1970) demonstrated mathematically—although without using any actual data—that panel attrition can lead to the appearance of panel conditioning in the CPS or any other survey that uses rotating panel designs. Shack-Marquez (1986) used longitudinally matched CPS records and a covariate adjustment design in an effort to describe the magnitude of panel conditioning biases after adjusting for panel attrition. While the analyses presented by Shack-Marquez (1986) are similar in purpose to our own, we use a much stronger empirical design to identify panel conditioning effects. In addition, Shack-Marquez (1986) apparently did not restrict her analyses to an appropriate subset of CPS respondents (i.e., those who answered questions about themselves and whose labor force data were not imputed). In our own analyses, we focus only on responses to labor force items among people who were in the CPS in consecutive months, who answered questions about themselves, and for whom no data were imputed. Taken together, previous studies of panel conditioning in CPS labor force items have rarely sought to empirically distinguish panel conditioning from panel attrition biases; they have never considered the possibility that mode effects may account for what are usually described as panel conditioning biases; and they have used weaker research designs.

Our analyses are designed to make two types of contributions. First, and most practically, we hope to obtain improved estimates of the magnitude of panel conditioning bias to core labor force items in the CPS, which is an important national data resource. Second, and more broadly, we hope to shed additional light on the circumstances in which panel conditioning biases may arise in large-scale longitudinal surveys. Building on our prior work (Warren and Halpern-Manners forthcoming), we seek to contribute to a general understanding of when panel conditioning effects are worth worrying about, how best to detect them, and what to do to overcome resulting biases.

Research Design

The CPS sample is representative of the civilian, household-based population of the United States. In recent years, each monthly CPS has included about 152,000 individuals living in about 54,000 households. Upon selection into the CPS sample, household members are surveyed in four consecutive months, left unenumerated during the subsequent eight months, and then resurveyed in each of another four consecutive months; new rotation groups are brought into the CPS sample each calendar month. The CPS 4-8-4 rotating panel design guarantees that in any calendar month, one-eighth of the sample is in its first month of enumeration (month-in-sample 1, or MIS 1), one-eighth is in its second month (month-in-sample 2, or MIS 2), and so forth.
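To make the rotation schedule concrete, the short Python sketch below (a helper of our own devising, not Census Bureau code) lists the eight calendar months in which a rotation group first interviewed in January 2007 would be enumerated under the 4-8-4 design.

```python
# A minimal illustration of the CPS 4-8-4 rotation: interviewed for 4
# consecutive months, out of the sample for 8, then interviewed for another 4.
from datetime import date

def months_in_sample(first_month: date) -> list[date]:
    """Return the eight calendar months in which a rotation group is interviewed."""
    offsets = list(range(4)) + list(range(12, 16))  # MIS 1-4, 8-month gap, MIS 5-8
    months = []
    for k in offsets:
        years_ahead, month_index = divmod(first_month.month - 1 + k, 12)
        months.append(date(first_month.year + years_ahead, month_index + 1, 1))
    return months

# A group entering in January 2007 is interviewed January-April 2007 and
# again January-April 2008 (MIS 1 through MIS 8).
print([d.strftime("%Y-%m") for d in months_in_sample(date(2007, 1, 1))])
```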

The basic CPS monthly survey gathers information about demographic characteristics, income, labor force participation, education, occupation, veteran status, and union membership for all household members. In most months, additional supplemental surveys collect information about topics such as military service, education, fertility, food security, and voting. Given the focus of our analyses, we use data only from labor force items that appear on the basic monthly survey; for reasons outlined earlier, we would expect fewer panel conditioning effects on responses to survey questions that are administered annually.

Given our substantive focus, and because of the cognitive processes that we hypothesize give rise to panel conditioning biases, we restrict our analyses to cases in which CPS respondents themselves provided valid responses to questions about their own labor force characteristics. This requires four initial sample restrictions. First, because CPS household heads are asked to provide most information about other household members, we limit our sample to household heads; spouses or children of heads, for example, do not generally provide their own information. Second, in a small minority of cases, proxy informants supply information on behalf of household heads; we also omit these cases from our sample. Third, if a household moves or otherwise becomes unavailable for follow-up surveys, replacement households are brought into the CPS, and these replacement households are omitted from our sample. Fourth, when respondents do not provide valid responses to labor force questions, the Bureau of Labor Statistics imputes that information; we select only cases in which respondents provided valid responses. Finally, we restrict our sample to individuals who participated in the CPS in their first and second months in sample (e.g., respondents who began the CPS in June 2010 must also have responded to the CPS in July 2010). As explained below, this restriction allows us to make comparisons that clearly distinguish panel conditioning from panel attrition biases.
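As a simplified illustration, the sketch below applies these restrictions to a hypothetical person-month extract. The column names are placeholders rather than actual CPS variable names, and the MIS 1-to-MIS 2 linkage restriction is implemented in the next sketch.

```python
# A sketch of the sample restrictions described above, applied to a
# hypothetical person-month extract; column names are placeholders.
import pandas as pd

def restrict_sample(df: pd.DataFrame) -> pd.DataFrame:
    keep = (
        (df["relationship"] == "head")       # household heads only
        & ~df["proxy_response"]              # answered for themselves, no proxies
        & ~df["replacement_household"]       # original sample households only
        & ~df["labor_force_imputed"]         # no imputed labor force responses
    )
    return df.loc[keep]
```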

Our analytic strategy is based on changes across months in respondents’ answers to identical survey questions. Consequently, we linked records for CPS respondents across two adjacent monthly surveys. Specifically, we linked records for individuals who were in MIS 1 in one calendar month with records for individuals who were in MIS 2 in the subsequent calendar month. Matches are based on unique household identifiers and indicators of individuals’ identities within household rosters. After linking records based on household and personal identifiers, we compared people’s ages and sexes across months to ensure the quality of matches; we then discarded the handful of observations in which sex did not match or in which age differences were implausible. For a respondent to be included in our analysis sample, his or her sex had to remain the same across survey waves. Age could increase by one year between monthly surveys, and did so in about one-twelfth of all cases. Our analyses are based on data for the 42 incoming cohorts whose first MIS was between January 2007 and June 2010.
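In code, the linkage amounts to a merge on household and person identifiers followed by consistency checks on sex and age. The Python sketch below illustrates the idea; the column names are hypothetical stand-ins for the CPS identifiers rather than the actual variable names.

```python
# A sketch of the month-to-month linkage: merge MIS 1 records from one month
# with MIS 2 records from the next month, then drop links with mismatched sex
# or implausible age changes. Column names are hypothetical placeholders.
import pandas as pd

def link_adjacent_months(mis1: pd.DataFrame, mis2: pd.DataFrame) -> pd.DataFrame:
    linked = mis1.merge(
        mis2,
        on=["household_id", "person_line_number"],
        suffixes=("_mis1", "_mis2"),
    )
    ok = (
        (linked["sex_mis1"] == linked["sex_mis2"])                      # same sex
        & linked["age_mis2"].sub(linked["age_mis1"]).isin([0, 1])       # may age one year
    )
    return linked.loc[ok]
```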

The monthly CPS survey collects information about labor force participation and employment status for all adult household members. Based on responses to a complex series of questions, respondents are classified each month as employed–at work, employed–absent, unemployed–laid off, unemployed–looking, not in labor force–retired, not in labor force–disabled, or not in labor force–other. Two issues are noteworthy for our purposes. First, among many segments of the American population, there is some perceived stigma or shame associated with being unemployed. Second, respondents are asked multiple follow-up questions that differ depending on whether they are employed, unemployed, or out of the labor force. Respondents may find these questions tedious or excessively lengthy, and they may believe (albeit incorrectly) that they would have been asked fewer or less onerous questions had they indicated a different labor force status. For reasons outlined earlier in our discussion of the circumstances in which panel conditioning effects are most likely to arise, both issues lead us to hypothesize that panel conditioning may alter respondents’ reported labor force status across survey waves. To avoid providing a socially nonnormative or stigmatized response—“I am unemployed; I want a job but cannot get one”—and/or to attempt to minimize the length and burden associated with completing the CPS interview, respondents may appear less likely over time to be unemployed.

Analytic Strategy

In this section, we describe how we use information about within-person, month-to-month changes in responses to these CPS survey questions to inform our understanding of panel conditioning. Our analyses are fundamentally based on two types of strategic comparisons. First, we compare successive incoming cohorts of CPS respondents with respect to within-person changes over time. We may observe that unemployment rates, for example, decline between MIS 1 and MIS 2 for individuals who began the CPS in January 2009; the national economy and job market may have simply improved during that month. However, given the economic recession that began in 2007, we should not observe that unemployment rates nearly always decline between MIS 1 and MIS 2 for respondents entering the CPS between January 2007 and June 2010. This sort of comparison is not enough, however, because it may conflate conditioning biases with attrition biases. If unemployment rates nearly always decline between MIS 1 and MIS 2, it may simply be because unemployed people disproportionately leave the CPS panel. Available sampling weights may not fully account for this pattern.

Thus, we rely more heavily on a second, and more compelling, comparison: within calendar months, we compare people who differ only with respect to whether they are in the CPS for the first or second time. Consider, for example, two incoming cohorts of CPS respondents: Cohort A began the CPS in January 2007, and Cohort B began in February 2007. If we select individuals in both cohorts who participated in and can be matched across MIS 1 (January for Cohort A, February for Cohort B) and MIS 2 (February for Cohort A, March for Cohort B), we set up an important comparison in February 2007. For that calendar month, we can compare labor force characteristics for individuals in Cohort A with those in Cohort B. In the absence of panel conditioning, outcomes for these groups should (on average) be statistically indistinguishable. The two groups are sampled from the U.S. population using the same procedures, the samples have subsequently been pared down using identical restriction criteria (including their propensity to attrite from the CPS), and they are experiencing the same social and economic conditions that February. Panel attrition cannot explain any differences between Cohorts A and B in February 2007 because both groups are restricted to individuals who responded to the CPS in both MIS 1 and MIS 2. The only systematic difference between the two groups is that members of Cohort A have previously responded to the CPS.
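The following Python sketch illustrates the core of this comparison: within each calendar month, compute the unemployment rate separately for linked respondents currently in MIS 1 and currently in MIS 2, and examine the gap. The sketch is unweighted for brevity (the analyses reported here weight by the origin-month survey weight; see note 8), and the column names are hypothetical.

```python
# Within-month comparison of MIS 1 and MIS 2 unemployment rates among linked
# respondents. Persistently positive MIS 1 minus MIS 2 gaps are the pattern
# consistent with panel conditioning. Column names are hypothetical.
import pandas as pd

def unemployment_rate(group: pd.DataFrame) -> float:
    in_labor_force = group["status"].isin(["employed", "unemployed"])
    return group.loc[in_labor_force, "status"].eq("unemployed").mean()

def within_month_gaps(person_months: pd.DataFrame) -> pd.Series:
    """Unemployment rate in MIS 1 minus MIS 2, by calendar month."""
    rates = (
        person_months
        .groupby(["calendar_month", "mis"])
        .apply(unemployment_rate)
        .unstack("mis")
    )
    return rates[1] - rates[2]
```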

Results

In Table 2, we cross-classify labor force status for the 3,847 people who were in MIS 1 in June 2010 by the labor force status for those same people in MIS 2 in July 2010.8 Between these two monthly surveys, the number of unemployed respondents declined from 264 to 248. The unemployment rate, which equals the number of unemployed people divided by the number of people in the labor force, declined from 10.7 % to 10.3 %. However, the number of employed respondents also declined, from 2,198 to 2,165. Thus, the reduction in the unemployment rate between the two months is attributable to respondents removing themselves from the denominator of the unemployment rate altogether by (claiming to) leave the labor force.
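The arithmetic behind these rates can be verified directly from the counts just quoted, as in the short snippet below.

```python
# Unemployment-rate arithmetic behind Table 2: unemployed / (unemployed + employed).
unemployed_mis1, employed_mis1 = 264, 2198   # June 2010, MIS 1
unemployed_mis2, employed_mis2 = 248, 2165   # July 2010, MIS 2

rate_mis1 = unemployed_mis1 / (unemployed_mis1 + employed_mis1)
rate_mis2 = unemployed_mis2 / (unemployed_mis2 + employed_mis2)

print(f"MIS 1: {rate_mis1:.1%}")  # 10.7%
print(f"MIS 2: {rate_mis2:.1%}")  # 10.3%
# Both the numerator and the labor force denominator shrink, so the drop in the
# rate reflects exits from the labor force rather than movement into employment.
```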

The patterns described in Table 2 are not unique to people entering the CPS in June 2010. Figure 1 shows changes in the unemployment rate between MIS 1 and MIS 2 for 42 cohorts of people entering the CPS between January 2007 and June 2010. In 32 of 42 instances, the unemployment rate declined across months; six of the 10 exceptions to this pattern occurred during the economic crisis of late 2008, when real monthly increases in joblessness may have swamped any effects of panel conditioning or panel attrition. However, as explained earlier, the more important comparisons for our purposes in Fig. 1 are across incoming cohorts and within calendar months. For example, as shown in the lower panel, the unemployment rate in February 2007 for people in MIS 2 was two full percentage points lower than for people in MIS 1, even though the two groups were identical in all respects (including propensity to attrite from the CPS panel) other than when they entered the survey. This within-month difference in unemployment rates can only be attributed to the fact that people in MIS 2 in February 2007 had previously participated in the CPS. In 29 of 41 such comparisons, and even during massive changes in the unemployment rate in late 2008, people responding to the CPS for the first time had higher rates of unemployment than otherwise identically selected individuals who responded to the CPS one month earlier.9

To give a sense of the magnitude of these apparent conditioning effects for estimated unemployment rates, we present supplementary evidence in Fig. 2. The heavy black line reports the official unemployment rate as reported by the U.S. Bureau of Labor Statistics for each month between January 2007 and June 2010. The lighter black line shows the unemployment rate among household heads who were not members of replacement households, for whom data were not collected via proxy, and whose labor force data were not imputed; this line very closely mirrors the official unemployment rate. Finally, the dashed black line shows the unemployment rate among the subset of those respondents who are in MIS 1.10 On average, individuals who have not previously participated in the CPS have an unemployment rate that is 0.75 percentage points higher than among otherwise similar individuals. Given the media and policy attention directed at much smaller changes in the unemployment rate, this seems like a sizable difference.

As we noted in our discussion of Table 2, the unemployment rate among people in MIS 1 in June 2010 declined by the time they were in MIS 2 in July 2010 because they indicated that they had left the labor force for reasons of retirement or disability. Figures 3 and 4 depict changes in rates of retirement and disability, respectively, between MIS 1 and MIS 2 across 42 incoming cohorts of CPS respondents between January 2007 and June 2010. The patterns that we note in Table 2 are observed in the majority of calendar months. In 33 of 41 instances, people in MIS 2 are more likely than otherwise identical people in MIS 1 to be retired in that same calendar month. Likewise, in 32 of 41 instances, people in MIS 2 are more likely to be disabled than people in MIS 1 in that same month. These differences in the rates at which people are out of the labor force because of retirement or disability—which are sizable in magnitude—can be attributed only to the single systematic difference between two groups observed in the same month: one group previously participated in the CPS, while the other did not.

As an aside, Fig. 5 shows changes between MIS 1 and MIS 2 in the frequency with which employed respondents say that they hold two or more jobs. Employed respondents may perceive that indicating that they hold multiple jobs will lead to additional survey items about each job (e.g., regarding industry, occupation, or hours worked per week). In all 41 instances, employed people in MIS 1 are substantially more likely—fully one-third more likely—to hold two or more jobs than otherwise identical people in MIS 2 in that same calendar month.

Could these findings—and the findings about time-in-sample biases reported in CPS documentation—be due to mode effects? Virtually all respondents are interviewed in person in their first MIS; the majority of respondents are interviewed by phone in their second MIS. Perhaps what we are calling “panel conditioning effects” are actually mode effects. Although this seems unlikely, given the long history of time-in-sample biases in the CPS (Hansen et al. 1955) and given that early CPS surveys were all conducted in person, we repeat our analyses after limiting our sample to individuals who were interviewed in person on both occasions. As shown in Figs. 6 and 7, our conclusions about panel conditioning persist even after eliminating the possibility of mode effects. For 33 of the 42 incoming cohorts of CPS respondents, unemployment rates declined between MIS 1 and MIS 2. More interestingly, within 28 of 41 calendar months, respondents in their first MIS had higher unemployment rates than otherwise equivalent respondents in their second MIS. Likewise, after restricting our sample to individuals who were interviewed in person, we continue to observe that within calendar months, respondents in MIS 2 are less likely to hold two or more jobs than otherwise identical respondents in MIS 1.11 This suggests that our findings—as well as those reported in the CPS documentation—cannot be attributed to panel attrition biases or mode effects.

Discussion

Social and behavioral scientists have long recognized that longitudinal surveys are uniquely valuable for making causal assertions, studying change over time, and other research purposes. They have also long been aware of many of the special challenges that accompany the collection and use of longitudinal surveys: such surveys are more expensive, raise more data disclosure concerns, and suffer from additional sorts of nonresponse biases. For the most part, however, scientists (at least outside of the field of cognitive psychology) have been content to assume that longitudinal surveys do not suffer from the sorts of testing or reactivity biases that sometimes arise in the context of experimental or intervention-based research. Most operate under the assumption that participating in a social survey leaves respondents unaffected, such that the changes that we observe over time are real and would have occurred even if we had not observed them.

There is a great deal of evidence for panel conditioning effects from political science, consumer marketing, and elsewhere, but this evidence is usually based on smaller-scale, more targeted data collection efforts; the research designs used to generate that evidence are sometimes not as strong as we might like. In the end, we actually know relatively little about the nature or magnitude of panel conditioning biases in large-scale longitudinal studies such as those frequently used by demographers, sociologists, economists, and others. In this article, we use longitudinal data from linked CPS records to estimate the size of panel conditioning effects on labor force measures in that important national data resource.

In previous work, we spelled out various circumstances in which panel conditioning is theoretically most likely to arise (Warren and Halpern-Manners forthcoming). Three of them are especially salient in the context of monthly CPS labor force items. First, respondents who are forced to give socially nonnormative or stigmatizing answers to survey questions (e.g., “I do not have a job”) may subsequently change their underlying attitudes or behaviors, or they may simply not answer as forthrightly in the future. Second, respondents may attempt to manipulate survey instruments in order to reduce the time or energy costs associated with survey participation. Simply put, if respondents suspect that Response Option A leads to many more follow-up questions than Response Option B, then some respondents will choose Response Option B, regardless of the truth. Both propositions imply that we may find evidence of panel conditioning in CPS data on labor force outcomes. Third, we hypothesize that panel conditioning will be most evident when survey waves are closer together in time. Consequently, we limit our focus to items appearing regularly on the CPS basic monthly surveys; we do not use data from supplements that are asked annually or less often.

We find quite persuasive evidence of sizable panel conditioning effects on labor force items in the CPS. Individuals who are in the CPS for the first time are substantially more likely to be unemployed (in the labor force, but looking for a job) compared with otherwise identical individuals who responded to the CPS the month before. Unlike most prior research, our analyses preclude the possibility that this difference is due to panel attrition or mode effects. We infer that after participating in the CPS for the first time, some individuals switch their labor force status from “unemployed” to “out of the labor force,” thereby exiting the denominator of the unemployment rate.

One purpose of this article is practical: to understand the extent to which panel conditioning biases core labor force measures in the CPS, which is a cornerstone resource for research and policy purposes. Respondents appear less likely to be unemployed if they have previously participated in the CPS, and although we cannot say for certain, our understanding of the cognitive processes that give rise to panel conditioning implies that CPS respondents’ initial answers are more valid than their subsequent answers. That is, although we see little reason why respondents would overstate their unemployment in their first MIS, we do see at least two reasons why they would understate it thereafter. This implies that the CPS may broadly underestimate the unemployment rate (see Fig. 2).12 What is more, panel conditioning seems to increase the rate at which people claim to be out of the labor force because of retirement or disability. This would seem to have important implications for research and policy-making on the social, economic, and other predictors of disability and of the transition to retirement. Finally, panel conditioning appears to lead to severe downward bias in the rate at which employed respondents claim to hold two or more jobs. This may have important implications for research in labor economics and elsewhere.

One way to better estimate the extent of panel conditioning in the CPS would be to carry out a series of external validation studies. With respect to labor force issues, it seems feasible to link CPS records to data on earnings and on receipt of unemployment, disability, and Social Security benefits. Similar linkages have been carried out for any number of major national surveys (e.g., National Research Council 2009: 157; Olson 1999), albeit for quite different purposes. This would make it possible to know with more certainty when apparent changes in labor force characteristics are real, and it would allow us to describe which respondents are more susceptible to panel conditioning. It might also ultimately facilitate modifications to CPS weighting procedures to account for these sorts of biases.

A second purpose of our article is more general: to understand better the circumstances under which panel conditioning arises, and to generate more general guidance for those who create and/or use longitudinal data. To this end, our current work builds on our earlier efforts to spell out the conditions under which this methodological problem may surface (Warren and Halpern-Manners forthcoming). Although not a definitive or systematic test, our evidence supports the general notions that respondents may seek to avoid providing socially stigmatized or nonnormative answers, respondents may take steps to reduce the duration of surveys when they suspect that particular response options lead to fewer additional questions, and these and other forms of panel conditioning may be especially pronounced when surveys are conducted close together in time.

As we argue elsewhere, in order to understand more fully the nature and magnitude of panel conditioning biases, we will ultimately need new data collected specifically for that purpose. These data would improve on what we can do using the CPS and all other existing data in two respects. First, we envision randomly assigning respondents to one of several treatment conditions in a variant of the Solomon Four-Group Design (Campbell and Stanley 1966; Solomon 1949). The groups would be stratified on the basis of whether a baseline survey is administered and the length of time after the baseline in which follow-up surveys are administered. Properly implemented, such a design would eliminate the possibility of conflating panel conditioning and panel attrition effects and would allow a careful consideration of the extent to which the magnitude of panel conditioning depends on the length of time between survey waves. Second, we envision tailoring the questions on baseline and follow-up surveys to explicitly test propositions about the circumstances under which panel conditioning biases are likely to emerge. That is, the content of the survey would be determined by which questionnaire items are most suited for this purpose (as opposed to making the best of existing survey items that were designed for decidedly different purposes). These hypothetical new data would allow for a much sounder assessment of the causal magnitude of panel conditioning effects and would provide much more nuanced insight into the circumstances under which such effects arise.

Acknowledgments

Order of authorship is alphabetical to reflect equal contributions by the authors. This article was inspired by a conversation with Michael Hout, and was originally prepared for presentation at the April 2010 annual meetings of the Population Association of America. The National Science Foundation (SES-0647710) and the University of Minnesota’s Life Course Center, Department of Sociology, College of Liberal Arts, and Minnesota Population Center have provided support for this project. We warmly thank Eric Grodsky, Ross Macmillan, Gregory Weyland, anonymous reviewers, and workshop participants at the University of Minnesota, the University of Texas, the University of Wisconsin-Madison, and New York University for their constructive criticism and comments. Finally, we thank Anne Polivka, Dorinda Allard, and Steve Miller at the U.S. Bureau of Labor Statistics for their helpful feedback. However, all errors or omissions are the authors’ responsibility.

Notes

1. For example, by our count, of the 909 articles published in Demography between 1990 and 2010, at least 75 (or 8.2 %) of them featured original analyses of CPS data; another 61 (or 6.7 %) utilized CPS data in some other capacity.

2. For a more comprehensive review of this literature, see Warren and Halpern-Manners (forthcoming).

3. For instance, Chandon et al. (2004) showed that randomly selected customers of an online grocer who were asked about their intentions to make future purchases were substantially more likely than members of a randomly selected control group to make subsequent purchases from that grocer.

4. These resemble similar propositions developed by Cantor (2008), Waterton and Lievesley (1989), and Bailar (1989).

5. It is important to distinguish panel conditioning from social desirability biases. The latter may lead people to underreport unemployment, but it will lead to consistent levels of underreporting across CPS surveys—and thus would not bias estimates of change over time. Panel conditioning, in contrast, would lead to underreporting of unemployment on follow-up, but not baseline, waves of the CPS—and thus would bias estimates of within-person change over time.

6. This may also explain why time-in-sample biases appear to be more pronounced in the CPS than in the SIPP (McCormick et al. 1992; National Research Council 2009; Pennell and Lepkowski 1992). Whereas CPS respondents are interviewed each month, SIPP respondents are interviewed every four months.

7. In fact, survey length is not appreciably affected by whether respondents indicate that they are employed, unemployed, or out of the labor force. However, because they do not observe the counterfactual pathway through the CPS interview, respondents may believe that an alternate response may have led to fewer questions. This suspicion may be enough to lead some respondents to provide different (and inaccurate) responses in subsequent months.

8. In this and subsequent comparisons, observations are weighted by the cross-sectional survey weight in the “origin” month.

9. In the absence of panel conditioning, we would certainly not expect unemployment rates for people in MIS 1 to be exactly equal to those for people in MIS 2 in any particular calendar month; sometimes they will be higher, and sometimes they will be lower. However, in the absence of panel conditioning, we would expect the difference to be positive as often as it is negative. As shown in Fig. 1, the unemployment rate is higher for those in MIS 1 in 29 of 41 (71 %) possible comparisons. If we take this to be a binomial process (where 1 means that MIS 1 – MIS 2 is positive, and 0 means that MIS 1 – MIS 2 is negative), and if we take Pr(MIS 1 – MIS 2 > 0) to be .5, then the probability of getting 29 or more positive values out of 41 trials is about .005. This suggests that our findings are not the result of sampling error.
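This tail probability can be reproduced directly; the short calculation below is direction-agnostic, so it applies however the differences are coded.

```python
# Binomial tail probability referenced in this note: the chance of 29 or more
# of 41 within-month comparisons falling in the same direction if each
# direction were equally likely (p = .5).
from math import comb

n, k = 41, 29
p_tail = sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n
print(f"Pr(X >= {k} | n = {n}, p = .5) = {p_tail:.4f}")  # 0.0058, i.e., about .005
```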

10. In an effort to account for panel attrition effects, in this supplementary analysis, we have used a form of poststratification weighting to cause the two groups of household heads to be identical with respect to their distributions of age, race/ethnicity, sex, marital status, region of residence, urbanicity, and nativity status; this procedure is detailed in Warren and Halpern-Manners (forthcoming).
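The Python sketch below outlines one cell-based version of such an adjustment; the column names are hypothetical, and the procedure actually used (detailed in Warren and Halpern-Manners, forthcoming) may well differ.

```python
# A rough sketch of cell-based poststratification: reweight one group of
# household heads so its joint distribution over demographic cells matches a
# target group. Column names are hypothetical; sparse cells would need to be
# collapsed in practice.
import pandas as pd

STRATA = ["age_group", "race_ethnicity", "sex", "marital_status",
          "region", "urbanicity", "nativity"]

def poststratify(sample: pd.DataFrame, target: pd.DataFrame) -> pd.Series:
    """Adjust `sample` weights so its cell shares match the `target` group."""
    target_shares = target.groupby(STRATA)["weight"].sum() / target["weight"].sum()
    sample_shares = sample.groupby(STRATA)["weight"].sum() / sample["weight"].sum()
    factor = (target_shares / sample_shares).rename("ps_factor").reset_index()
    adjusted = sample.merge(factor, on=STRATA, how="left")
    return adjusted["weight"] * adjusted["ps_factor"]
```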

11. For the sake of space, we do not present analogous graphs of percentage out of the labor force because of disability or retirement for respondents interviewed in person in both months. Results for these variables, which are entirely consistent with the results shown in Figs. 3 and 4, are available upon request.

12. It is important to note that panel conditioning in the CPS leads to bias in estimated levels of unemployment, but does not lead to systematic bias in short- or long-term trends in that rate.

References

Anderson, B. A., Silver, B. D., & Abramson, P. R. (1988). The effects of race of the interviewer on measures of electoral participation by blacks in SRC national election studies. Public Opinion Quarterly, 52, 53–83. doi:10.1086/269082
Bailar, B. A. (1975). Effects of rotation group bias on estimates from panel surveys. Journal of the American Statistical Association, 70(349), 23–30. doi:10.1080/01621459.1975.10480255
Bailar, B. A. (1989). Information needs, surveys, and measurement errors. In D. Kasprzyk, G. J. Duncan, G. Kalton, & M. P. Singh (Eds.), Panel surveys (pp. 1–24). New York: Wiley.
Bartels, L. M. (1999). Panel effects in the American national election studies. Political Analysis, 8, 1–20. doi:10.1093/oxfordjournals.pan.a029802
Battaglia, M. P., Zell, E., & Ching, P. (1996). Can participating in a panel sample introduce bias into trend estimates? (National Immunization Survey Working Paper). Washington, DC: National Center for Health Statistics. Retrieved from http://www.cdc.gov/nis/pdfs/estimation_weighting/battaglia1996b.pdf
Borle, S., Dholakia, U. M., Singh, S. S., & Westbrook, R. A. (2007). The impact of survey participation on subsequent customer behavior: An empirical investigation. Marketing Science, 26(5), 711–726. doi:10.1287/mksc.1070.0268
Bridge, R. G., Reeder, L. G., Kanouse, D., Kinder, D. R., Nagy, V. T., & Judd, C. M. (1977). Interviewing changes attitudes—Sometimes. Public Opinion Quarterly, 41, 56–64. doi:10.1086/268352
Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chicago, IL: Rand McNally.
Cantor, D. (2008). A review and summary of studies on panel conditioning. In S. Menard (Ed.), Handbook of longitudinal research: Design, measurement, and analysis (pp. 123–138). Burlington, MA: Academic Press.
Chandon, P., Morwitz, V. G., & Reinartz, W. J. (2004). The short- and long-term effects of measuring intent to repurchase. Journal of Consumer Research, 31, 566–572. doi:10.1086/425091
Chandon, P., Morwitz, V. G., & Reinartz, W. J. (2005). Do intentions really predict behavior? Self-generated validity effects in survey research. Journal of Marketing, 69(2), 1–14. doi:10.1509/jmkg.69.2.1.60755
Clausen, A. R. (1968). Response validity: Vote report. Public Opinion Quarterly, 32, 588–606. doi:10.1086/267648
Das, M., Toepoel, V., & van Soest, A. (2007). Can I use a panel? Panel conditioning and attrition bias in panel surveys (CentER Discussion Paper Series No. 2007-56). Tilburg, The Netherlands: Tilburg University CentER.
Das, M., Toepoel, V., & van Soest, A. (2011). Nonparametric tests of panel conditioning and attrition bias in panel surveys. Sociological Methods & Research, 40, 32–56. doi:10.1177/0049124110390765
De Amici, D., Klersy, C., Ramajoli, F., Brustia, L., & Politi, P. (2000). Impact of the Hawthorne effect in a longitudinal clinical study: The case of anesthesia. Controlled Clinical Trials, 21, 103–114. doi:10.1016/S0197-2456(99)00054-9
Dholakia, U. M., & Morwitz, V. G. (2002). The scope and persistence of mere-measurement effects: Evidence from a field study of customer satisfaction measurement. Journal of Consumer Research, 29, 159–167. doi:10.1086/341568
Duan, N., Alegria, M., Canino, G., McGuire, T. G., & Takeuchi, D. (2007). Survey conditioning in self-reported mental health service use: Randomized comparison of alternative instrument formats. Health Services Research, 42, 890–907. doi:10.1111/j.1475-6773.2006.00618.x
Fazio, R. H. (1989). On the power and functionality of attitudes: The role of attitude accessibility. In A. R. Pratkanis, S. J. Breckler, & A. G. Greenwald (Eds.), Attitude structure and function (pp. 153–179). Hillsdale, NJ: Lawrence Erlbaum Associates.
Fazio, R. H., Sanbonmatsu, D. M., Powell, M. C., & Kardes, F. R. (1986). On the automatic activation of attitudes. Journal of Personality and Social Psychology, 50, 229–238. doi:10.1037/0022-3514.50.2.229
Feldman, J. M., & Lynch, J. G. (1988). Self-generated validity and other effects of measurement on belief, attitude, intention, and behavior. Journal of Applied Psychology, 73, 421–435. doi:10.1037/0021-9010.73.3.421
Fitzsimons, G. J., & Moore, S. G. (2008). Should we ask our children about sex, drugs and rock & roll? Potentially harmful effects of asking questions about risky behaviors. Journal of Consumer Psychology, 18(2), 82–95. doi:10.1016/j.jcps.2008.01.002
Fitzsimons, G. J., Nunes, J. C., & Williams, P. (2007). License to sin: The liberating role of reporting expectations. Journal of Consumer Research, 34(1), 22–31. doi:10.1086/513043
Fitzsimons, G. J., & Williams, P. (2000). Asking questions can change choice behavior: Does it do so automatically or effortfully? Journal of Experimental Psychology: Applied, 6, 195–206. doi:10.1037/1076-898X.6.3.195
Godin, G., Sheeran, P., Conner, M., & Germain, M. (2008). Asking questions changes behavior: Mere measurement effects on frequency of blood donation. Health Psychology, 27, 179–184. doi:10.1037/0278-6133.27.2.179
Granberg, D., & Holmberg, S. (1992). The Hawthorne effect in election studies—The impact of survey participation on voting. British Journal of Political Science, 22, 240–247. doi:10.1017/S0007123400006359
Greenwald, A. G., Carnot, C. G., Beach, R., & Young, B. (1987). Increasing voting behavior by asking people if they expect to vote. Journal of Applied Psychology, 72, 315–318. doi:10.1037/0021-9010.72.2.315
Hansen, M. H., Hurwitz, W. N., Nisselson, H., & Steinberg, J. (1955). The redesign of the census Current Population Survey. Journal of the American Statistical Association, 50, 701–719. doi:10.1080/01621459.1955.10501962
Hernandez, L. M., Durch, J. S., Blazer, D. G., & Hoverman, I. V. (1999). Gulf War veterans: Measuring health. Committee on Measuring the Health of Gulf War Veterans, Division of Health Promotion and Disease Prevention, Institute of Medicine. Washington, DC: National Academies Press.
Holt, D. (1989). Panel conditioning: Discussion. In D. Kasprzyk, G. J. Duncan, G. Kalton, & M. P. Singh (Eds.), Panel surveys (pp. 340–347). New York: Wiley.
Janiszewski, C., & Chandon, E. (2007). Transfer-appropriate processing, response fluency, and the mere measurement effect. Journal of Marketing Research, 44, 309–323. doi:10.1509/jmkr.44.2.309
Jensen, P. S., Watanabe, H. K., & Richters, J. E. (1999). Who’s up first? Testing for order effects in structured interviews using a counterbalanced experimental design. Journal of Abnormal Child Psychology, 27, 439–445. doi:10.1023/A:1021927909027
Kalton, G., & Citro, C. F. (2000). Panel surveys: Adding the fourth dimension. In D. Rose (Ed.), Researching social and economic change (pp. 36–53). London: Routledge.
Kessler, R. C., Wittchen, H.-U., Abelson, J. A., McGonagle, K., Schwarz, N., Kendler, K. S., . . . Zhao, S. (1988). Methodological studies of the Composite International Diagnostic Interview (CIDI) in the US National Comorbidity Survey (NCS). International Journal of Methods in Psychiatric Research, 7, 33–55.
Kraut, R. E., & McConahay, J. B. (1973). How being interviewed affects voting: An experiment. Public Opinion Quarterly, 37, 398–406. doi:10.1086/268101
Landsberger, H. A. (1958). Hawthorne revisited. Ithaca, NY: Cornell University.
Levav, J., & Fitzsimons, G. J. (2006). When questions change behavior—The role of ease of representation. Psychological Science, 17, 207–213. doi:10.1111/j.1467-9280.2006.01687.x
Lucas, C. P., Fisher, P., Piacentini, J., Zhang, H., Jensen, P. S., Shaffer, D., . . . Canino, G. (1999). Features of interview questions associated with attenuation of symptom reports. Journal of Abnormal Child Psychology, 27, 429–437.
Mathiowetz, N. A., & Lair, T. J. (1994). Getting better? Change or error in the measurement of functional limitations. Journal of Economic and Social Measurement, 20, 237–262.
McCormick, M. K., Butler, D. M., & Singh, R. P. (1992). Investigating time in sample effect for the Survey of Income and Program Participation. In Proceedings of the Survey Research Methods Section of the American Statistical Association.
Meurs, H., Wissen, L. V., & Visser, J. (1989). Measurement biases in panel data. Transportation, 16, 175–194. doi:10.1007/BF00163114
Millar, M. G., & Tesser, A. (1986). Thought-induced attitude change: The effects of schema structure and commitment. Journal of Personality and Social Psychology, 51, 259–269. doi:10.1037/0022-3514.51.2.259
Morwitz, V. G. (2005). The effect of survey measurement on respondent behaviour. Applied Stochastic Models in Business and Industry, 21, 451–455. doi:10.1002/asmb.590
Nancarrow, C., & Cartwright, T. (2007). Online access panels and tracking research: The conditioning issue. International Journal of Market Research, 49, 573–594.
National Research Council. (2009). Reengineering the Survey of Income and Program Participation (C. F. Citro & J. K. Scholz, Eds.). Panel on the Census Bureau’s Reengineered Survey of Income and Program Participation, Committee on National Statistics, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academies Press.
Olson, J. A. (1999). Linkages with data from the Social Security administrative records in the Health and Retirement Study. Social Security Bulletin, 62, 73–85.
O'Sullivan, I., Orbell, S., Rakow, T., & Parker, R. (2004). Prospective research in health service settings: Health psychology, science and the 'Hawthorne' effect. Journal of Health Psychology, 9, 355–359. doi:10.1177/1359105304042345
Pennell, S. G., & Lepkowski, J. N. (1992). Panel conditioning effects in the Survey of Income and Program Participation. In Proceedings of the Survey Research Methods Section of the American Statistical Association.
Shack-Marquez, J. (1986). Effects of repeated interviewing on estimation of labor-force status. Journal of Economic and Social Measurement, 14, 379–398.
Sherman, S. J. (1980). On the self-erasing nature of errors of prediction. Journal of Personality and Social Psychology, 39, 211–221. doi:10.1037/0022-3514.39.2.211
Shockey, J. W. (1988). Adjusting for response error in panel surveys: A latent class approach. Sociological Methods & Research, 17, 65–92. doi:10.1177/0049124188017001004
Simmons, C. J., Bickart, B. A., & Lynch, J. G. (1993). Capturing and creating public opinion in survey research. Journal of Consumer Research, 20, 316–329. doi:10.1086/209352
Solomon, R. L. (1949). An extension of control group design. Psychological Bulletin, 46, 137–150. doi:10.1037/h0062958
Solon, G. (1986). Effects of rotation group bias on estimation of unemployment. Journal of Business & Economic Statistics, 4(1), 105–109.
Spangenberg, E. R., Greenwald, A. G., & Sprott, D. E. (2008). Will you read this article's abstract? Theories of the question-behavior effect. Journal of Consumer Psychology, 18, 102–106. doi:10.1016/j.jcps.2008.02.002
Spangenberg, E. R., Sprott, D. E., Grohmann, B., & Smith, R. J. (2003). Mass-communicated prediction requests: Practical application and a cognitive dissonance explanation for self-prophecy. Journal of Marketing, 67(3), 47–62. doi:10.1509/jmkg.67.3.47.18659
Sturgis, P., Allum, N., & Brunton-Smith, I. (2009). Attitudes over time: The psychology of panel conditioning. In P. Lynn (Ed.), Methodology of longitudinal surveys (pp. 113–126). New York: Wiley.
Tesser, A. (1978). Self-generated attitude change. In L. Berkowitz (Ed.), Advances in experimental social psychology (pp. 289–338). New York: Academic Press.
Toh, R. S., Lee, E., & Hu, M. Y. (2006). Social desirability bias in diary panels is evident in panelists' behavioral frequency. Psychological Reports, 99, 322–334.
Torche, F., Warren, J. R., Halpern-Manners, A., & Valenzuela, E. (2012). Panel conditioning in a longitudinal study of Chilean adolescents' substance use: Evidence from an experiment. Social Forces, 90, 891–918. doi:10.1093/sf/sor006
Traugott, M. W., & Katosh, J. P. (1979). Response validity in surveys of voting behavior. Public Opinion Quarterly, 43, 359–377. doi:10.1086/268527
U.S. Bureau of Labor Statistics. (2006). Design and methodology: Current Population Survey (Technical Paper 66). Washington, DC: U.S. Department of Labor, Census Bureau.
Voogt, R. J. J., & Van Kempen, H. (2002). Nonresponse bias and stimulus effects in the Dutch National Election Study. Quality & Quantity, 36, 325–345. doi:10.1023/A:1020966227669
Wang, K., Cantor, D., & Safir, A. (2000). Panel conditioning in a random digit dial survey. In Proceedings of the Survey Research Methods Section of the American Statistical Association.
Warren, J. R., & Halpern-Manners, A. (Forthcoming). Panel conditioning in longitudinal social science surveys. Sociological Methods & Research.
Waterton, J., & Lievesley, D. (1989). Evidence of conditioning effects in the British Social Attitudes panel survey. In D. Kasprzyk, G. J. Duncan, G. Kalton, & M. P. Singh (Eds.), Panel surveys (pp. 319–339). New York: Wiley.
Williams, P., Block, L. G., & Fitzsimons, G. J. (2006). Simply asking questions about health behaviors increases both healthy and unhealthy behaviors. Social Influence, 1, 117–127. doi:10.1080/15534510600630850
Williams, P., Fitzsimons, G. J., & Block, L. G. (2004). When consumers do not recognize "benign" intention questions as persuasion attempts. Journal of Consumer Research, 31, 540–550. doi:10.1086/425088
Williams, W. H., & Mallows, C. L. (1970). Systematic biases in panel surveys due to differential nonresponse. Journal of the American Statistical Association, 65, 1338–1349. doi:10.1080/01621459.1970.10481169
Yalch, R. F. (1976). Pre-election interview effects on voter turnout. Public Opinion Quarterly, 40, 331–336. doi:10.1086/268309