Context: Recent studies have shown that changes in public health insurance policy have the potential to affect political participation. In particular, aggregate-level analyses suggest that increases in Medicaid enrollment due to the Affordable Care Act's Medicaid expansion are associated with increased voter turnout. Given the current uncertainty surrounding the future of Medicaid, these results lead to an important and related question: What happens to political participation when public health insurance coverage is reduced?
Methods: Leveraging changes instituted by the state of Tennessee in the early 2000s to its Medicaid program, TennCare, the authors employed a first-differences approach to examine the effect of health policy retrenchment on county-level voter turnout and partisan vote share in gubernatorial elections.
Findings: Counties with larger Medicaid enrollment declines experienced larger decreases (or smaller increases) in voter turnout between 2002 and 2006 relative to those with smaller declines. Disenrollment was also associated with reduced Democratic vote share, though these results are not robust to controls.
Conclusions: Rather than mobilizing voters angry about losing coverage, the TennCare disenrollment seems to have had a demobilizing effect. The negative resource and interpretive effects of losing coverage likely outweigh any mobilizing backlash effect of retrenchment.
Much of 2017 was marked by a succession of failed congressional bills aimed at repealing the Affordable Care Act (ACA). In March, the US House proposed the American Health Care Act (H.R. 1628). In June and September, the US Senate attempted its own versions of repeal: the Better Care Reconciliation Act and the Graham-Cassidy bill, respectively. Among many notable components, these legislative offerings all promised to restructure health policy in ways that would induce massive cuts to Medicaid—the largest source of public health insurance in the United States and the primary mechanism for providing coverage to low-income Americans. According to projections from the Congressional Budget Office, these policies would have led to tens of millions of people being uninsured, many as a result of dramatic reductions in Medicaid via the imposition of per capita caps or block grants (Jost 2017). Even without such significant national legislative attempts at retrenchment, Medicaid cuts frequently loom large on political agendas (Hoadley, Cunningham, and McHugh 2004). As the third most costly domestic program in the federal budget (following Social Security and Medicare) and a heavy outlay in state budgets, Medicaid accounts for $1 of every $6 spent on health care (Snyder and Rudowitz 2015; Paradise 2015). This puts a large target on its back. Moreover, given the intergovernmental structure of the program, states have multiple policy levers through which to diminish Medicaid rolls, especially with a willing administration at the helm of the federal government. Even at this writing, numerous states have received or are seeking permission from the Centers for Medicare and Medicaid Services to implement work requirements, drug testing, and other policy transformations that will very likely lead to large-scale Medicaid disenrollment (Ku and Brantley 2017; Rosenbaum et al. 2017).
This article examines the consequences of such changes for political behavior. Years of scholarly research provide consistent evidence of processes now known as “policy feedback”: public policies are not just the output of politics; they can act as crucial inputs by affecting the political actions of ordinary citizens (Bruch, Ferree, and Soss 2010; Bruch and Soss 2018; Campbell 2003, 2012; Mettler 2005; Michener 2018; Pierson 1993; Soss 2000; Lerman and Weaver 2014). These studies demonstrate that the structure and design of government programs can influence political decision making across a broad spectrum of actors by channeling resources, generating interests, and shaping interpretive schemas. But much of this work has focused on what happens as a result of receiving some policy benefit or burden. Though influential economic and psychological theories of loss aversion indicate that having something taken away has different implications than gaining it, scholars of policy feedback know relatively little about the participatory upshot of policy losses (Kahneman and Tversky 1979, 1984).
We scrutinized a key case that provides empirical leverage on this question. In 2004, Tennessee governor Phil Bredesen (a Democrat), with bipartisan support from the state legislature, implemented substantial changes to TennCare, the state's Medicaid program. Bredesen's policies led to the removal of coverage for over 170,000 people—approximately 3% of all Tennesseans—with the goal of shoring up the program's finances. The impact of the retrenchment unfolded unevenly across the state's 95 counties. Such variation allowed us to employ a first-differences approach to studying the county-level relationships between coverage losses, voter turnout, and partisan vote share.
As the first study on public health insurance disenrollment and voting, this research adds a key dimension to the growing literature on health policy and political behavior. Even more broadly, the post-1970s trajectory of the American welfare state suggests a less stable social policy landscape, marked by varied forms of belt-tightening (Hacker 2004; Starke 2006; Weir, Orloff, and Skocpol 1988). Given this context, understanding the politics of retrenchment is all the more crucial for fully grasping the relationship between public policy and democracy.
In the following sections, we illuminate the political consequences of cutting public health coverage. To begin, we elaborate why we expect any relationship at all between Medicaid and voting, detailing the insights offered by previous research and pointing to important questions that remain in the literature. Next, we turn to our primary case—TennCare—to contextualize this study and describe the distinctive empirical and theoretical traction we gain by focusing on Tennessee. We then describe our data and report the findings. Our (aggregate) analyses reveal a significant association between decreases in Medicaid enrollment and reduced voter turnout. This raises important questions and indicates critical next steps for researchers seeking to advance knowledge of the democratic repercussions of retracting public benefits.
Medicaid and Political Participation
Why should we anticipate any relationship at all between the provision of public health coverage and the political participation of ordinary citizens? Theories of policy feedback provide useful guidance. Much of the literature suggests that means-tested programs like Medicaid are likely to depress the political participation of policy beneficiaries (Soss 2000; Mettler 2005; Mettler and Stonecash 2008). However, not all means-tested programs have such effects (Bruch, Ferree, and Soss 2010), and there is ample reason to think that Medicaid might not. Most noteworthy, Medicaid has fared well in the court of public opinion (Campbell 2015), and it is generally less stigmatized than other means-tested programs (Cook and Barrett 1992; Stuber and Kronebusch 2004; Howard 2006; Grogan and Patashnik 2003).
With this indeterminate evidence in view, emerging scholarship on Medicaid and political participation has made important inroads. Both Haselswerdt (2017) and Clinton and Sances (2018) have examined the short-term participatory effects of recent Medicaid expansions and discovered an increased likelihood of district and countywide voting, respectively, in the wake of eligibility expansions. Consistent with such findings, Michener (2018) has linked individual-level data to state policy metrics and shown that, even prior to the ACA, Medicaid beneficiaries were more likely to participate in politics if they lived in states that recently expanded benefits. Importantly, Michener found the opposite for beneficiaries residing in states that recently reduced benefits: such individuals were significantly less likely to vote, register, or engage in other forms of politics.1 These studies paint a complementary picture and make significant advances in understanding Medicaid's policy feedback effects. However, critical queries remain. In particular, both Haselswerdt and Clinton and Sances focused exclusively on the consequences of gains in Medicaid coverage. Similarly, though Michener considered the individual-level consequences of state reductions in benefits or services, she did not directly examine the effects of disenrollment or coverage loss (at either the individual or aggregate level). These are not minor caveats. Though the literature on prospect theory is vast and complex, the seminal insights from that body of work are especially germane: “People tend to overweight losses with respect to comparable gains,” and they are reference dependent, meaning they “are more sensitive to changes in assets than to net asset levels” (Levy 2003: 215–16; see also Kahneman and Tversky 1979, 1984). Prospect theory has been widely applied by scholars of international relations, and it also has resonance among students of American politics (for more on this, see Jervis 1992; Levy 2003). 
Two pertinent political implications of prospect theory are that “political punishment for losses is generally greater than for failure to make gains” and that “more energy will be spent trying to avoid or recoup losses than will be devoted to consolidating, or obtaining, new gains” (McDermott 1998: 42). Perhaps most crucial, the concept of a bias toward loss aversion is an important part of Pierson's (1994: 18) arguments about the challenges of welfare state retrenchment as a political project. Overall, prospect theory implies that policy feedback scholars should be more attentive to losses.
A salient example from state politics points to the potential that retrenchment will mobilize those affected: California Proposition 187, a ballot initiative approved by voters in 1994 that would have made undocumented immigrants ineligible for public benefits, is widely credited with sparking a large and enduring shift in political behavior among California's Latino population, increasing participation and turning Latinos decisively against the Republican Party, perhaps permanently (Damore and Pantoja 2013; Pantoja, Ramirez, and Segura 2001; Dyck, Johnson, and Wasson 2012).2 More recently, in a study of “pocketbook voting” in Britain, Tilley, Neundorf, and Hobolt (2018) found that the extent to which voters punish or reward the incumbent party for changes in their personal circumstances depends on the extent to which those changes can be attributed to government. For example, they found that voters were most likely to turn on the Conservative government between 1991 and 1996 if they believed their finances had gotten worse specifically due to a decrease in welfare benefits.3
A Range of Theoretical Expectations
The literatures on political participation and policy feedback suggest different expectations about the political effects of retrenchment than prospect theory. The “resource model” of political participation (Brady, Verba, and Schlozman 1995) contends that differing levels of resources (time, money, and civic skills) explain a great deal of variation in levels of participation. Policy feedback scholars have built on this insight to identify an important way in which policy change can influence political participation: by changing the distribution of resources. In their influential studies of policy feedback, Campbell (2003) and Mettler (2005) demonstrated how major 20th-century policy changes in the United States increased participation among senior citizens and GIs returning from World War II, respectively, by increasing the resources available to them. Given the evidence for such positive “resource effects,” it is possible that retrenchment may depress political participation. By destabilizing the lives of now former beneficiaries and creating a sudden and severe drop in their resources (specifically money, free time, and potentially health), policy changes like the TennCare disenrollment can make taking political action more difficult. While it does not deal with social policy, a study by Hall, Yoder, and Karandikar (2017) offers evidence for such patterns: the authors used aggregate and individual-level data to demonstrate that housing foreclosures in the subprime mortgage crisis led to depressed voter turnout among those affected, rather than generating an electoral backlash as some popular discourse has suggested.
In addition to resource effects, policy feedback scholars also emphasize interpretive effects (Pierson 1993; Campbell 2012). Interpretive effects emerge from formative experiences with public policy that shape beneficiaries' perceptions of government and teach them political lessons. Such lessons affect subsequent participatory decisions. In the case of Medicaid, Michener (2018) found that for Medicaid enrollees benefit reductions reinforce messages about the inadequacy of the government, underlining the limits of government capacity and resources. When cuts are viewed through such lenses, beneficiaries do not respond with political action because there is little incentive to engage a government that lacks the wherewithal to provide for the essential needs of the citizenry. Interpretive mechanisms thus point to a negative relationship between participation and program cuts.
The political psychology literature also offers reasons to be skeptical that retrenchment will mobilize. Levine (2015) found that it is especially difficult to mobilize people in response to issues that threaten their economic security because such mobilization can involve “self-undermining” rhetoric that reminds people of personal financial constraints and thus dampens their inclination to participate. Moreover, Levine and Kline (forthcoming) found that “loss-framed” arguments are demobilizing when they resonate personally and remind people of resource constraints (even if such arguments are more persuasive, per the findings of Arceneaux 2012).
Offering a complementary analysis of political attitudes (not participation), Sears and colleagues (Sears et al. 1980; Sears and Lau 1983; Sears and Funk 1990) found self-interest effects in policy attitudes for recipients of government benefits but described those effects as rare as well as “weak and narrow . . . far outstripped by the effects of symbolic predispositions” like partisanship and racial attitudes (Sears and Funk 1990: 163). One reason Sears and Funk (1990: 165) suggest that self-interest effects tend to be weak is that individuals fail to attribute their fluctuations in well-being to government. This notion is also reflected in the work of Arnold (1990) on the “traceability” of policy change and Pierson (1994: 19–22) on the “obfuscation” of welfare retrenchment.4 This consideration may be especially relevant in the TennCare case, given the arguments of Howard (1997) and Mettler (2011) on the “hidden” or “submerged” welfare state. Since TennCare beneficiaries received services through private managed care companies, some of the people who were disenrolled may not have fully understood that they were the victims of a conscious policy decision by political actors (Tallevi 2018).
In sum, the political science literature provides numerous reasons to expect that TennCare cuts would either fail to mobilize or actively demobilize beneficiaries. Due to the submerged nature of Medicaid benefit administration, disenrolled TennCare beneficiaries who did not connect the cuts to government action could have experienced demobilizing resource effects without any corresponding impetus to mobilize (Tallevi 2018). At the same time, those beneficiaries who successfully linked TennCare cuts to government action could have experienced demobilizing interpretive effects (Michener 2018). Finally, political attempts to mobilize beneficiaries or former beneficiaries in response to the economic loss of insurance coverage could have been viewed or framed in terms of losses, thus reminding beneficiaries of their personal resource constraints and diminishing their likelihood of participating (Levine 2015; Levine and Kline forthcoming).
Given the conflicting expectations between prospect theory, policy feedback theory, and other findings in the discipline of political science, studying the consequences of retrenchment is of first-order importance to understanding the effects of public policy on political participation. To this end, we picked an apt case: Tennessee's Medicaid program. In late 2005, over 170,000 TennCare beneficiaries were disenrolled (which amounts to 3% of the entire state population), reversing previous expansions that had brought new populations into the program. This created a potentially ripe environment for reference-dependent political responses: Medicaid beneficiaries received coverage for long enough to become accustomed to having access to public health insurance. Then, that coverage was abruptly rescinded. How do such losses shape the citizenry? In the case of Tennessee, we know that the cuts sparked particularistic political behavior among some beneficiaries in the form of widespread administrative appeals of lost coverage (Franklin 2017). But scholars have not yet assessed the electoral consequences of massive disenrollment. Before we do so, it is worth contextualizing the case of Tennessee more extensively.
TennCare and the Politics of Retrenchment
While Medicaid enrollment has increased steadily since its inception in 1965, its growth has been markedly dependent on policy and economic conditions and quite variable across states (see fig. 1). This means that Medicaid cuts are hardly exclusive to Tennessee. Nonetheless, we focus on Tennessee because its unique history maps onto our theoretical interest in retrenchment quite well.
Though Medicaid came onto the national scene in 1965, it was not until 1969 that it took root in Tennessee, when that state became the 41st to begin a program. This initial incarnation of Medicaid lasted less than three decades. Then, in the early 1990s, when many states were searching for ways to reduce Medicaid spending, Tennessee commenced a historic expansion under the auspices of Democratic governor Ned McWherter (Bennett 2014). In 1993, the US Department of Health and Human Services approved a statewide section 1115 waiver. The waiver birthed TennCare, which introduced two major changes to the preexisting Medicaid program: (a) Tennessee became the first state to enroll its entire beneficiary population into capitated managed care, and (b) Tennessee expanded coverage to many people who had been uninsured. These changes brought early successes, increasing enrollment from 900,000 in 1994 to over 1.4 million in 2001 and driving Tennessee's uninsured population to its lowest level ever (Chang and Steinberg 2016). In addition, TennCare achieved impressively low rates of cost per enrollee and saved the state anywhere from $245 million to $2 billion during its first six years (Conover and Davies 2000; Kaiser Commission on Medicaid and the Uninsured 2006). On top of this, Tennessee was able to increase the federal match rate to 70%. It did so by “folding almost all state and local funding of health and mental health services into the new program so those redirected dollars would qualify for federal matching funds” (Bonnyman 2007: 4). Savvy as this was, it also made Tennessee exceptionally dependent on Medicaid to cover the health needs of its population. In the wake of expansion, other state health programs were weakened, leaving a scant health safety net outside of Medicaid.
Notwithstanding its early accomplishments, TennCare soon began to crumble (Bennett 2014; Bonnyman 2007). A series of political skirmishes over state income tax policy, major missteps in ensuring fiscal discipline among managed care organizations, ballooning expenditures that accompanied unrelenting growth in medical costs (a nationwide problem), and a number of other demographic and political factors swiftly turned the tide against TennCare after 2000. The 2002 election of Phil Bredesen—a fiscally conservative Democrat who had founded a for-profit health insurance company—sealed TennCare's fate. In 2004, Bredesen began the process that would culminate in major cuts by putting forward a plan that would have limited doctor visits and prescriptions in what he framed as a “last chance” bid to save the program. The legislature (narrowly controlled by Democrats in both houses until Republicans took the Senate majority in 2005) approved the changes by overwhelming margins (Cheek 2004). However, a lawsuit by the Tennessee Justice Center, a prominent opponent of TennCare retrenchment, led to a federal judge blocking the changes later that year. Bredesen then switched tactics, announcing that he would seek to eliminate expanded TennCare and return the state to a traditional Medicaid program (de la Cruz and Wadhwani 2004). By the end of 2005, the state had terminated coverage for all uninsured and uninsurable adults (this latter group consists of those with preexisting conditions that make obtaining health coverage nearly impossible). Uninsured children were the only exception—one of the few concessions that advocates like the Tennessee Justice Center were able to win in negotiations with Bredesen (Gouras 2005). In less than one year, Tennessee went from having one of the most expansive Medicaid programs to having one of the most limited. Given state demographics, disenrollment had a disproportionate effect on whites.
In 2004, before disenrollment, whites composed 80% of the state population and 62% of Medicaid beneficiaries. In 2006, after the cuts, whites' state population numbers held steady (79%) but their representation among Medicaid beneficiaries was reduced to 56%. In contrast, the proportion of black beneficiaries barely changed between 2004 (26%) and 2006 (27%). Shelby County—both the largest county in the state and the county with the most African Americans (51% in 2004)—experienced the fewest losses in terms of Medicaid enrollment after the cuts were implemented. Despite racial divergences in the effects of the cuts, only 45% of a random sample of white Tennesseans polled in 2005 expressed opposition compared to 80% of African Americans.5 However, in the critical period right after the cuts (2005–6), 85% of TennCare beneficiaries who attempted to get their disenrollment overturned were white (Franklin 2017). The racial and partisan contours that set the stage for the 2006 gubernatorial election were by no means straightforward. The political context held no obvious answers about whether and how Medicaid disenrollment would shape the election. We take this striking instance of retrenchment as a strong case for assessing the consequences of disenrollment. If Medicaid retrenchment affects political behavior, we should expect those effects to be apparent in the TennCare disenrollment. While the federated structure of Medicaid creates state-level heterogeneity that belies any uniform story about program effects (Michener 2018), Tennessee is an apt point of departure insofar as it represents one of the most arresting examples of Medicaid cutbacks in our nation's history. It also happened relatively suddenly, and political leaders did little to obfuscate (Pierson 1994: 19–22) the cutbacks (though it is still possible that beneficiaries failed to properly attribute the cuts, given the managed care structure of benefit administration).
If we adopt the perspective of prospect theory, there were ample grievances at play that could have sparked an electoral backlash in the wake of the TennCare cuts. First, a substantial portion (perhaps the majority) of the disenrolled remained uninsured, and there is evidence that the consequences for many of them were severe. Importantly, one evaluation of the disenrollment (Farrar et al. 2007) found that, while some of the disenrolled were able to adequately navigate the insurance and health care systems following loss of coverage, the most vulnerable and high-need Tennesseans—those with multiple chronic conditions—struggled to find affordable coverage (also see Ritchart 2005). These findings are bolstered by anecdotal evidence from media reports about people with chronic conditions facing difficult choices and massive out-of-pocket costs following disenrollment (e.g., Pinto 2006). The impact may have been especially devastating for those with severe and persistent mental illness and their families—16,478 such individuals were disenrolled across the state (Farrar et al. 2007: 8). In the obituary of a Knoxville man who suffered from schizophrenia and committed suicide in 2011, the man's parents specifically blamed the TennCare disenrollment for triggering a “downward spiral” culminating in his death (Hall 2011).
Second, even for those who subsequently received coverage after the cuts, the fact that it was disenrollment that sparked their search for jobs suggests that the jobs themselves were unlikely to greatly improve their living standards; such jobs became attractive only once they were the only way to obtain health insurance.6 Indeed, the labor market effects of the TennCare cuts seemed concentrated among those with reasons to be desperate: Garthwaite, Gross, and Notowidigdo (2014) found that disenrolled individuals in “good” or “poor” (as opposed to “excellent”) health were especially likely to seek employment for the purposes of replacing coverage, as were older individuals (there was no evidence of a job search effect for those under 40).
Notwithstanding the significant grievances that may have sparked backlash, there was also marked potential for negative resource effects. Even among the disenrolled who were eventually able to find coverage, any lapse in coverage could be very disruptive, and an unplanned job search could easily distract the seeker from political matters. Moreover, in the pre-ACA era, many of those who obtained coverage likely settled for bare-bones plans with higher out-of-pocket costs than Medicaid (for an example of a Tennessee woman who experienced precisely this scenario, see Pinto 2006). Recent research suggests that the TennCare disenrollment had a negative and meaningful financial impact, with declining credit scores and increases in the amount and share of delinquent debt, as well as bankruptcy risk (Argys et al. 2017).
A similar logic holds for negative interpretive effects. Even former beneficiaries who subsequently obtained private coverage via the labor market were susceptible to the interpretive effects of having lost Medicaid. To the extent that the experience of losing public coverage manifested in peoples' lives, it created an opportunity for them to learn important political lessons, and such instruction could have consequences for the likelihood of political participation whether or not former enrollees eventually regained insurance coverage via private means.
Our research question is straightforward: Did the TennCare disenrollment affect the voting behavior of Tennesseans, in terms of both voter turnout and partisan vote choice? Though we have outlined a range of potential expectations, note that we do not present formal hypotheses. As we have made clear, the research record is scant when it comes to understanding the feedback effects of policy contractions. We have drawn on extant research to map the possibilities but simply do not have firm enough footing to credibly present this work in terms of strong hypotheses.
Data and Methods
We followed recent work (Haselswerdt 2017; Clinton and Sances 2018) in examining changes in aggregate voting patterns from one election to the next, before and after a major Medicaid policy change took effect, using a “first-differences” approach. Specifically, our dependent variables were the change in overall voter turnout (as a percentage of county population) and in Democratic two-party vote share percentage between the 2002 and 2006 Tennessee gubernatorial elections, at the county level.7 The mean change in county-level turnout over this period was a 1.4-percentage-point increase, with a standard deviation of 2 percentage points.8 The Democratic share of the two-party vote increased considerably across all counties, as Bredesen comfortably won reelection in 2006 over Republican state senator Jim Bryson. The mean increase in Democratic share was 18.4 percentage points, with a standard deviation of 3.7 percentage points.
The advantage of the first-differences approach (i.e., regressing change in the dependent variable on changes in the independent variables) is that it prevents baseline differences between counties from confounding possible causal relationships. In effect, we study only change within counties, not differences between counties.
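The logic of this design can be sketched with a small simulation (our illustration, not the authors' data; every variable name and value below is hypothetical). Even when counties differ sharply in their fixed baseline turnout, differencing cancels those baselines, so a regression on the changes recovers the assumed effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n_counties = 95

# Each county has its own fixed baseline turnout level: a potential
# confounder in a cross-sectional comparison of turnout *levels*.
baseline_turnout = rng.uniform(30, 60, n_counties)

# Hypothetical "true" effect chosen for illustration: each percentage-point
# drop in enrollment lowers the turnout change by 0.36 points.
true_effect = 0.36
enroll_change = -rng.uniform(0.5, 10, n_counties)  # every county loses enrollment

turnout_2002 = baseline_turnout
turnout_2006 = baseline_turnout + true_effect * enroll_change + 1.4

# First differences: the county baselines drop out of the subtraction.
d_turnout = turnout_2006 - turnout_2002
slope, intercept = np.polyfit(enroll_change, d_turnout, 1)
print(round(slope, 2))  # → 0.36, despite heterogeneous baselines
```

The same regression run on turnout levels rather than changes would conflate the effect with each county's baseline; differencing is what removes that source of confounding.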
The gubernatorial data offer several advantages. First, both of these elections took place in federal midterm election years, and both coincided with US Senate elections, making them very similar in terms of potential “coattails” that could distort county-by-county patterns. This is in contrast to another potential data source, the US House elections that took place in 2004 and 2006, since the former coincided with a presidential election and the latter did not. Second, since (again in contrast to US House elections) the gubernatorial race is a statewide election in which every county votes on the same candidates, fewer potential confounding factors arise from differences in candidates and competitiveness across districts (which are not coterminous with counties). Lastly, since the TennCare disenrollment was a state policy change initiated by the governor, it makes sense to focus on the subsequent gubernatorial election for evidence of voter behavior effects. The chief disadvantage of the gubernatorial data is that the 2002 election predates the beginning of the disenrollment by two years. This extra time creates more opportunity for confounding factors or differential trends across counties to bias the results. We attempted to guard against this with a battery of control variables measuring economic and demographic change, discussed below.
The key explanatory variable in our analysis was the change in TennCare enrollment at the county level between 2004 and 2006, expressed in percentage points of county population.9 The impact of the retrenchment was unevenly felt across the state's 95 counties: while the mean coverage loss from 2004 to 2006 was 3.1% of county population (with a standard deviation of 2.1 percentage points), this figure ranged from a 0.5% drop in Shelby County (the state's most populous) to double-digit losses in other counties.
Given that our dependent variable measures change from 2002 to 2006, it would be ideal to use a 2002 TennCare baseline as well. Unfortunately, comparable data are not available for the period prior to 2004.10 While this means US House elections are better aligned with the policy change in question, we still believe the gubernatorial data to be more appropriate for the reasons discussed above. Given the relatively steady economic trends and the lack of major state or federal Medicaid policy changes between 2002 and 2004, we did not believe the missing data would show wide variation in TennCare enrollment trends by county.11 In other words, we argue that the 2004 TennCare baseline is a strong proxy for the unmeasured 2002 baseline and that any major changes in TennCare enrollment between 2002 and 2006 likely occurred between 2004 and 2006, the period of disenrollment.12 In any event, our focus in this case is not simply enrollment change in general but disenrollment and retrenchment per se—this occurred only between 2004 and 2006. Declines in enrollment prior to 2004 would likely have been driven by improved economic conditions, which would not generate the same sort of life disruption as disenrollment due to policy change.
While the first-differences design alleviates any concerns about baseline between-county differences, we were still vigilant about differing trends across counties that could confound the results. To address this, we included control variables measuring the county-level change in population (percentage change), unemployment rate, median household income (percentage change), poverty rate, black or African American population percentage, and Hispanic population percentage.13 It is somewhat unclear whether it would be more appropriate to use the change from 2002 to 2006 (to align with the dependent variables) or from 2004 to 2006 (to align with the key independent variable), so we report the results of models using each. We also consider the trends in Georgia, a neighboring state that also held gubernatorial elections in 2002 and 2006 but did not disenroll large numbers of residents from its Medicaid program.
Since the dependent variables are continuous, we used ordinary least-squares regression. Since population varies widely by county (from 4,890 in Pickett County to 915,550 in Shelby County in 2006), we used analytic weights for 2006 population in all models and robust standard errors to guard against heteroskedasticity and to ensure that counties with small populations did not exert an outsized influence in our models.14 Since it is considered best practice to report both weighted and unweighted estimates, we include weighted estimates in our main tables and report corresponding unweighted estimates in the appendix (Solon, Haider, and Wooldridge 2015).
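The estimation strategy described here can be sketched in a few lines of code. The following is an illustrative Python sketch using synthetic data (all variable names, values, and the seed are hypothetical, not the authors' actual county records): weighted least squares with analytic population weights, followed by heteroskedasticity-robust (sandwich) standard errors.

```python
import numpy as np

# Synthetic county-level data; the real analysis uses TennCare and
# election records for Tennessee's 95 counties.
rng = np.random.default_rng(0)
n = 95
pop = rng.uniform(5_000, 900_000, size=n)        # hypothetical 2006 population
enroll_change = -rng.uniform(0.5, 10.0, size=n)  # pct-pt enrollment change
turnout_change = 0.3 * enroll_change + rng.normal(0, 2, size=n)

# Analytic (population) weights: scale each row by sqrt(w) and run OLS
# on the transformed data, which yields b = (X'WX)^{-1} X'Wy.
w = pop / pop.sum()
X = np.column_stack([np.ones(n), enroll_change])
sqrtw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(X * sqrtw[:, None], turnout_change * sqrtw,
                           rcond=None)

# Heteroskedasticity-robust (HC0) sandwich standard errors
resid = turnout_change - X @ beta
bread = np.linalg.inv(X.T @ (w[:, None] * X))
meat = X.T @ (((w * resid) ** 2)[:, None] * X)
se = np.sqrt(np.diag(bread @ meat @ bread))
```

In practice this is equivalent to a standard statistical package's weighted OLS with robust errors; the sketch just makes the weighting explicit.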
We begin with the results for voter turnout. Figure 2 shows the correlation between turnout change and enrollment change, weighted by county population.15 As expected, there appears to be a positive correlation, or, more accurately (since there were no enrollment increases in any county), larger declines in enrollment were associated with lower or more negative turnout changes.
Table 1 tests this apparent correlation with ordinary least-squares regression, with analytic weights for population size and robust standard errors. The first model displays the results of a simple bivariate regression. This model predicts a 0.36-percentage-point increase in turnout change for every percentage-point increase in TennCare enrollment, or, more appropriately, a decrease of 0.36 percentage points for a one-percentage-point drop in enrollment. This correlation is marginally statistically significant (p = 0.06). The addition of control variables, whether they measure change from a 2002 or 2004 baseline, slightly reduces the magnitude of the estimate (to 0.30 and 0.24 percentage points, respectively) but increases its precision (p = 0.005 and 0.021, respectively).16
To put the magnitude of these effects in perspective, the 25th percentile drop in enrollment was 3.3 percentage points, while the 75th percentile drop was 6.5 percentage points. The difference between these values was associated with a difference of between 0.8 and 1.2 percentage points in voter turnout change (depending on the covariates used), or between 0.4 and 0.6 standard deviations. This is best characterized as a small- to medium-sized correlation.
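The arithmetic behind these magnitudes is straightforward to verify: multiply the interquartile difference in enrollment drop by the smallest and largest coefficient estimates from table 1.

```python
# Interquartile difference in enrollment drop (percentage points)
iqr_diff = 6.5 - 3.3                       # = 3.2 pct pts

# Smallest (0.24) and largest (0.36) coefficient estimates imply a
# turnout-change difference of roughly 0.8 to 1.2 percentage points.
low_effect = round(0.24 * iqr_diff, 2)     # ~0.8
high_effect = round(0.36 * iqr_diff, 2)    # ~1.2
```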
Since we used population weights, we must consider the extent to which more populous counties influence the results. Shelby County, the most populous, is an influential data point, since it experienced the smallest percentage enrollment drop (0.5 percentage points) and the fourth-highest increase in gubernatorial turnout (4.3 percentage points). Dropping Shelby from the analysis does change the conclusions—the coefficients for enrollment change drop to 0.08 (p = 0.36) in the bivariate model, 0.16 (p = 0.102) in the 2002 baseline control model, and 0.10 (p = 0.25) in the 2004 baseline control model. While all remained positive, none reached statistical significance. While this adds a note of caution to our findings of an apparent positive correlation between enrollment change and turnout change, we believe that the model including all counties weighted by population is the most appropriate specification—Shelby is an influential data point for good reason, being home to 15% of the total state population in 2006.17
Given the conflicting expectations of the existing literature, it is important to note that the evidence weighs very strongly against the presence of a backlash effect in turnout, wherein coverage loss induced angry voters to go to the polls. If there was such an individual-level effect, it was clearly overwhelmed by other factors at the aggregate level.
We now turn to the results for Democratic vote share—the change in Bredesen's vote share between his initial election in 2002 (in which he defeated Republican Van Hilleary, 50.6% to 47.6%) and his landslide reelection in 2006 (in which he defeated Bryson 68% to 29.7%). Just as the existing literature is inconclusive with regard to predicting voter turnout in the wake of policy retrenchment, it offers little by way of a basis for strong predictions about partisan vote share under such conditions. There are several possibilities. On the one hand, it is quite possible that depressed political participation among Medicaid beneficiaries—who are more likely to vote Democratic—could have eroded Bredesen's vote share.18 In addition, backlash from people who opposed the cuts (beneficiaries and others) might have driven people away from the Democratic ticket. Furthermore, the findings of Kogan and Wood (2018) on the ACA and the 2016 election demonstrate that voters dissatisfied with health policy can easily end up punishing or rewarding the wrong politicians. On the other hand, by cutting Medicaid Bredesen cemented his position as a moderate and may have thereby won over Republicans or independents who would not have otherwise voted for a Democrat. We explored data that can help us see which possibilities emerged, but we had no strong a priori hypotheses.
Figure 3 shows the correlation between enrollment change and the change in Bredesen's two-party share, weighted by county population. The correlation appears positive, which is to say that larger drops in enrollment may have depressed Bredesen's vote share gains at the county level.
Table 2 presents the results of regressions of Bredesen's vote share changes on the same set of covariates used for the turnout change regressions. The bivariate model suggests a strong correlation between enrollment change and vote share change (p = 0.012), but the picture is much less clear when we control for economic and demographic changes. The coefficient for enrollment change does not attain conventional levels of statistical significance whether the control variable baseline is 2002 (p = 0.11) or 2004 (p = 0.16).19 Thus, the evidence for a partisan vote share effect of the disenrollment is much weaker than that for the turnout effect.20 This is consistent with the results of Haselswerdt (2017), who found turnout effects but no partisan vote share effects of Medicaid enrollment increases in US House races.21
In this case, the ambiguous results for partisan vote share could be a product of the complex partisan politics of Medicaid in Tennessee during this period. The disenrollment was prominently championed by Bredesen, a Democrat, with broad bipartisan support in the legislature.22 Most of the opposition he faced came from advocates on the left and a minority of Democrats in the legislature (the bill passed the State House by a vote of 82–8, with 7 of the “no” votes coming from Democrats), though legislative Republicans Joey Hensley (Cheek 2004; Schrade 2005; Wadhwani 2005) and Diane Black (Fender 2006b) did make critical public statements about disenrollment. For his part, Republican gubernatorial candidate Jim Bryson also attacked Bredesen's TennCare approach on the campaign trail (Wickham 2006; Fender 2006a). If voters wished to punish Bredesen for this policy change, they would be faced with a difficult choice, since most Republicans also favored the change. They also may not have seen Bryson himself as a credible champion of TennCare given that his campaign, like Bredesen's, emphasized fiscal responsibility. It is conceivable that if such a change had been enacted by a Republican governor, a pro-Democratic response could have ensued. Of course, any such response could also be undermined by the apparent depressive effect of disenrollment on participation. Indeed, it is possible that the correlations we observed in the vote share analyses, statistically significant or not, were driven by potential Bredesen voters staying home rather than by any sort of vote choice effect.
Lastly, we sought to test an alternative explanation for the patterns we observed: perhaps different sorts of counties in the United States or the Southeast simply experienced different political trends between 2002 and 2006. If this is the case, it could violate the “parallel trends” assumption that is fundamental to first-differences research designs. In our case, it is possible that such differential underlying trends are correlated with both enrollment change and political changes, which could lead us to misattribute the political changes to the effects of disenrollment in the analyses above. Since we cannot observe the counterfactual of a Tennessee that did not implement disenrollment in the mid-2000s, we can never fully verify the parallel trends assumption. We can, however, compare trends in Tennessee with those in Georgia, a neighboring state with gubernatorial elections in the same years. Specifically, we simulated disenrollment in Georgia by generating a predicted enrollment change variable using the 2002 baselines of our control variables (population, unemployment rate, poverty rate, black population, Hispanic population, and median household income). This regression explains 77% of the variation in enrollment change in Tennessee, and its predicted values for Georgia counties serve as a placebo variable. If the association we observe between disenrollment and political changes in Tennessee is causal, then we should not observe the same associations in Georgia, where disenrollment did not actually occur.
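The placebo construction can be sketched as follows. This is an illustrative Python sketch with synthetic data (the covariate values, coefficients, and seed are hypothetical; the real model uses the 2002 baselines of population, unemployment, poverty, black and Hispanic population shares, and median household income for actual Tennessee and Georgia counties): fit the enrollment-change prediction model on Tennessee counties only, then carry its predictions to Georgia.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tn, n_ga = 95, 159                      # county counts in TN and GA

def baselines(n):
    """Hypothetical 2002 baseline covariates plus an intercept."""
    return np.column_stack([np.ones(n), rng.normal(size=(n, 3))])

X_tn, X_ga = baselines(n_tn), baselines(n_ga)

# Simulate enrollment change in TN as a function of the baselines
true_b = np.array([-3.0, 1.0, -0.5, 0.8])
enroll_change_tn = X_tn @ true_b + rng.normal(0, 1, n_tn)

# Fit the prediction model on Tennessee counties only...
b_hat, *_ = np.linalg.lstsq(X_tn, enroll_change_tn, rcond=None)

# ...then generate the placebo "predicted enrollment change" for
# Georgia counties, where no disenrollment actually occurred.
placebo_ga = X_ga @ b_hat
```

The logic is that the placebo variable captures whatever county characteristics predict disenrollment; if those characteristics, rather than disenrollment itself, drove the political changes, the placebo should "work" in Georgia too.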
As expected, our placebo analysis (results displayed in appendix table A8) showed that predicted enrollment change had very different relationships with the dependent variables in Tennessee and Georgia. The positive and statistically significant associations of predicted enrollment change with both turnout and Democratic vote share change in Tennessee are not present in Georgia—in fact, there is a negative and statistically significant coefficient for predicted enrollment change in the vote share model. Through the use of interaction terms, we can reject the null hypothesis that the coefficients are the same across the two states (p = 0.02 and p < 0.001, respectively).23 In short, the evidence suggests that the effects we report here are probably not an artifact of national or regional trends.
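The interaction-term comparison referenced above can be sketched as a pooled regression. The following Python sketch uses synthetic data (the slope values and seed are illustrative assumptions, not the paper's estimates): stack the two states, interact the placebo variable with a Georgia indicator, and read the cross-state slope difference off the interaction coefficient.

```python
import numpy as np

rng = np.random.default_rng(2)
n_tn, n_ga = 95, 159
placebo = rng.normal(size=n_tn + n_ga)
georgia = np.r_[np.zeros(n_tn), np.ones(n_ga)]

# Simulate a positive slope in TN and no relationship in GA
y = 0.3 * placebo * (1 - georgia) + rng.normal(0, 0.5, n_tn + n_ga)

# Pooled model: y = a + b*placebo + c*GA + d*(placebo x GA).
# The interaction coefficient d estimates the TN-to-GA slope difference,
# so a test of d = 0 is a test of equal coefficients across states.
X = np.column_stack([np.ones_like(y), placebo, georgia, placebo * georgia])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
slope_tn = coef[1]       # slope within Tennessee
slope_diff = coef[3]     # Georgia slope minus Tennessee slope
```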
To our knowledge, this is the first study to report on the electoral effects of a large-scale retrenchment in public health insurance policy in the United States. The results suggest that the loss of Medicaid does not mobilize people to vote. Instead, widespread loss of coverage appears to decrease turnout. While it is also possible that such policy changes have effects on partisan vote share (potentially shaping electoral results), we found only weak support for such outcomes.
There are several caveats to our analysis. First, we present only aggregate data, so our ability to make inferences about individual-level behavior is limited. Second, our findings on turnout are heavily influenced by the data point of Shelby County, the state's most populous locale, which experienced the least drastic drop in coverage rates in percentage terms while exhibiting a healthy increase in turnout. In this context, it is worth remembering that the TennCare disenrollment is a strong case for health policy retrenchment and political behavior—if losing public health coverage makes people more likely to participate, we should have seen some evidence of that in these results. Instead, we found evidence for the opposite effect. Third, our model specification guards against confounding from baseline differences between counties, we controlled for several measures of economic and demographic trends, and our placebo analysis of county-level voting behavior in neighboring Georgia offers some additional reassurance; even so, it is possible that some other trends correlated with the size of disenrollment drove the observed voting effects. It is also possible that Tennessee is a unique case and that these findings may not generalize perfectly to other states.
Limitations notwithstanding, the substantive implications of the findings are pressing. Our results suggest that theories of loss aversion (e.g., McDermott 1998) and work on the politics of welfare state retrenchment (e.g., Pierson 1994) may overstate the degree to which aggrieved voters will take to the polls in response to material losses. Instead, extensive losses in public benefits may dampen participation via resource or interpretive effects. This has troubling implications for democracy. The United States has long faced the challenge of significant nonparticipation by the most economically marginal Americans (Gaventa 1982; Hill and Leighley 1994; Leighley and Nagler 2005; Rosenstone and Hansen 1993; Piven and Cloward 2000; Verba, Schlozman, and Brady 1995). Moreover, the political inequality bred by socioeconomic imbalances in the electorate has policy consequences (Avery and Peffley 2005; Hill, Leighley, and Hinton-Andersson 1995). This makes for a damaging cycle of politics and policy: class bias in the electorate facilitates political conditions conducive to welfare state retrenchment—and that very retrenchment exacerbates class disparities in political participation. By offering evidence that major cuts in public health insurance demobilize voters, we further substantiate a crucial part of this cycle. To the extent that the full incorporation of disadvantaged Americans reinforces democratic legitimacy, it is vital that we heed the causes, consequences, and extent of such electoral patterns.
The authors thank Patrick Flavin, Sarah Gollust, Eric Patashnik, Vladimir Kogan, Sekou Franklin, participants in the JHPPL special issue conference at the University of Missouri in February 2018, participants in the “Health Politics at the Crossroads” conference at the University of Houston in November 2017, and an anonymous reviewer for comments and suggestions. They also thank Kate Perry, Grace Yousefi, and Colin Cepuran for their research assistance.
In a separate study, Michener (2017) examined county-level data and found that, as the proportion of county residents enrolled in Medicaid increased, there were corresponding declines in both aggregate rates of voting and counts of civic associations. This does not contradict the findings of either Haselswerdt (2017) or Clinton and Sances (2018) because Michener (2017) did not examine the (short-term) effects of the ACA Medicaid expansion. Instead, she studied an earlier period and looked at the average effects of county-level Medicaid density. Though related, the hypothesized mechanisms and political processes underlying these inquiries are distinct. Indeed, it is reasonable for scholars to find that politically salient mass coverage expansions attributable to prominent elite actors that quickly bring large waves of new people into a beneficiary population will have different (e.g., more positive) effects than gradual, often imperceptible, geographically differentiated increases in county-level Medicaid density over time. Moreover, while the immediate positive effects of coverage expansion may indicate an electorate that is responsive to major policy changes, they do not speak to the enduring consequences of the experiences that emerge from actually using Medicaid. Maintaining the theoretical distinction between coverage effects and experiential effects helps us make sense of heterogeneity in the political consequences of Medicaid.
Importantly, Proposition 187 was never actually implemented due to a series of court injunctions. Furthermore, it is admittedly difficult to disentangle whether the increased mobilization of Latinos was a result of the threat of benefit losses or other factors such as ethnic solidarity and resentment toward the anti-immigrant stance of the Republican Party. Dyck, Johnson, and Wasson (2012) found evidence that the Latino movement toward the Democratic Party in California is best explained by a combination of factors, including ballot initiatives like Proposition 187 and larger national trends in macropartisanship.
Note that Tilley, Neundorf, and Hobolt (2018) focused only on partisan vote choice, not participation.
The findings of Tilley, Neundorf, and Hobolt (2018) on government responsibility and pocketbook voting are again relevant here.
These figures come from a nonpartisan semiannual probability survey of Tennessee residents conducted by Middle Tennessee State University Survey Group as reported by Franklin (2017).
Garthwaite, Gross, and Notowidigdo (2014) demonstrated that many of the disenrolled in Tennessee were able to subsequently obtain private insurance by entering the labor market or changing jobs, indicating a degree of insurance “crowd-out” by TennCare.
These data are available on the Tennessee secretary of state website (sos.tn.gov/products/elections/election-results).
We weighted by 2006 county population to calculate these and all other county-level summary statistics.
For example, in Campbell County in 2004, 39.9% of the population was enrolled in TennCare. Following the disenrollment, this figure dropped to 32.1% in 2006, for a change of −7.8 percentage points. These data are available in TennCare's annual reports (www.tn.gov/tenncare/topic/annual-reports).
This was confirmed in e-mail correspondence with the TennCare records staff on July 26, 2017.
See appendix table A1 for a summary of key state-level economic trends. Also note that county-level economic trends indicate that, of the 95 counties in Tennessee, 76 experienced a change of less than 1% (positive or negative) in their unemployment rates between 2002 and 2004, while 63 experienced a change of less than 0.5%. Only two counties experienced changes of a magnitude of 2% or greater.
The unemployment rate data came from the Bureau of Labor Statistics. The poverty rate and median household income data came from the Census Bureau's Small Area Income and Poverty Estimates. The racial and ethnic percentages came from the Census Bureau's American Community Survey 5-year estimates.
Since we are using population data, there is no sampling in the literal sense. Standard errors and test statistics should be interpreted as testing whether or not we would observe correlations like this due to elements of randomness in the process that generated the data. In other words, if we were to repeat the 2002–6 period in Tennessee hundreds of times, how extreme would the observed correlations in this particular data set be compared to a null hypothesis of no correlation?
The figures presented here exclude Moore County (2006 population: 6,110), which experienced a massive drop of 19.1% in TennCare enrollment, making it an extreme outlier that complicates visual presentation. The inclusion of Moore does not appreciably alter the slopes presented in these figures. It is included in the regressions reported in the tables.
Enrollment change is highly correlated with several of the economic and demographic change indicators. Regressing enrollment change on the 2004 baseline versions of these variables (with county population weights and robust standard errors) returned an R2 of 0.50, with statistically significant results for poverty change (negative) and black and Hispanic population change (positive). This suggests that enrollment drops were largest in counties with increasing poverty, all else equal, but that counties with increasing racial and ethnic diversity saw relatively smaller drops. The results were similar when we used the 2004–5 change in TennCare enrollment as the key independent variable (see appendix table A6). In fact, all of the coefficients are larger with this operationalization than with the 2004–6 change, though the association is not statistically significant in the bivariate model (p = 0.101).
Existing survey data do suggest that Medicaid beneficiaries are more likely to identify as Democratic. See Sanandaji 2012.
We also found null results for all vote-share models, including the bivariate specification, when we exclude Shelby County (see appendix table A5), though the coefficients are similar.
Here the use of the 2004–5 enrollment change operationalization makes a difference (see appendix table A7)—the association between this measure and Bredesen's vote share is statistically significant in all three models, with larger coefficients than those displayed in the main results. Since this relationship is not robust to this measurement choice, we emphasize the more conservative 2004–6 results in our interpretation.
The null finding also echoes that of Hall, Yoder, and Karandikar (2017), who found no evidence of a county-level electoral backlash against incumbents due to foreclosures in the housing crisis in terms of vote choice.
Bredesen expressed gratitude to the legislature for providing political cover on the TennCare cuts, remarking, “We are truly linking arms to demonstrate the intent of the state to take control of the TennCare program” (Cheek 2004).
Both of these regressions control for change on the usual set of indicators, using the 2002 baseline versions. The full results are shown in appendix table A8.