Abstract

We consider two ways that public opinion influenced the diffusion of ACA policy choices from 2010 through 2014. First, we consider the policy feedback mechanism, which suggests that policy decisions have spillover effects that influence opinions in other states; residents in the home state then influence the decisions of elected officials. We find that both gubernatorial ACA announcements and grant activity increased support for the ACA in nearby states. Consistent with our expectations, however, only gubernatorial announcements respond to shifts in ACA support, presumably because announcements are more salient policy decisions than grant activity. Second, we test for the opinion learning mechanism, which suggests that shifts in public opinion in other states provide a signal to elected officials about the viability of decisions in their own state. We find evidence that states are more likely to emulate other states with similar ACA policy preferences when deciding when to announce their decisions. Our results suggest that scholars and policy makers should consider how shifts in public support influence the spread of ideas across the American states.

Does public opinion in one state influence the policy decisions of other states? While there is general agreement that policy makers consider the opinions of their own residents (Konisky 2007), few scholars offer theoretical advancements for understanding how or when public opinion matters for the spread of ideas. Instead, scholars focus on elite-driven mechanisms of diffusion, arguing that policies are adopted either because policy makers emulate or learn from states with “successful” policies or because states compete to gain an economic advantage. Of the 117 articles published on policy diffusion in the American states since Walker (1969), only 65 consider the influence of public opinion. The majority of these authors are concerned with public opinion in the home state, noting that policy makers should be responsive to their own citizens. Government officials are seeking reelection; therefore, preferences of state residents should be related to policy adoption in the home state (e.g., Berry and Berry 1992).1 Yet, missing in our theories of policy diffusion is the possibility that public opinion plays a more central role in the spread of policy ideas, not just the adoption of policy in the home state.

To better incorporate the ways that public opinion in one state both influences and is influenced by other states' policy decisions, it is important to have accurate measures of citizens' attitudes toward specific policies. Methodologically, scholars often measure public opinion indirectly via demographics (e.g., percentage of Evangelicals) or reelection factors (e.g., electoral competitiveness). Yet, there is no guarantee that demographics or reelection factors proxy for specific policy preferences. For example, the correlation between changes in the percentage of adult smokers and preferences toward smoking bans in restaurants is a mere −.06 (Pacheco 2012). Scholars who rely on direct measures of public opinion tend to use ideology (Taylor et al. 2012) or state culture (Crowley 2004). These broad measures, however, do not provide a clear understanding of the role of public opinion because (1) they are stable over time, preventing inferences about how changes in opinion influence the diffusion process; and (2) it is unclear exactly how broad measures of ideology or culture map onto actual policy choices (e.g., Matsusaka 2001; Lax and Phillips 2012). Empirically, research on the role of public opinion is best served by direct measures of preferences that are specific to the policy choices under study.

We contribute to our understanding of the role of public opinion in state policy making by looking at policy choices on the Affordable Care Act (ACA) from 2010 through 2014. First, we consider the policy feedback mechanism, which suggests that policy decisions have spillover effects that influence opinions in other states; residents in the home state then influence the decisions of elected officials. We find that both gubernatorial ACA announcements and grant activity exhibit spillover policy effects; that is, both types of decisions increased support for the ACA in nearby states. Consistent with our expectations, however, only gubernatorial announcements respond to shifts in ACA support, presumably because announcements are more salient policy decisions than grant activity. Second, we test for the opinion learning mechanism, which suggests that shifts in public opinion in other states provide a signal to elected officials about the viability of decisions in their own state. We find evidence that states are more likely to emulate other states with similar ACA policy preferences when deciding about the timing of gubernatorial announcements but do not consider similarity in ACA policy preferences when making decisions about less visible policies, such as grant activity.

The results suggest that public opinion is more than an internal factor that influences policy adoption. Instead, public opinion is a potential lever that accelerates the diffusion process; it also explains the geographic patterns of policy adoption. Scholars and policy makers are encouraged to consider how shifts in public support influence the spread of ideas, particularly those related to health care decisions, across the American states.

Public Opinion and Policy Diffusion: Two Mechanisms of Influence

We consider two mechanisms that account for the influence of opinion on policy diffusion. The policy feedback mechanism suggests that policy decisions have spillover effects that influence opinions in other states; residents in the home state then influence the decisions of elected officials. The opinion learning mechanism suggests that shifts in public opinion in other states provide a signal to elected officials about the viability of decisions in their own state. We describe both mechanisms and their applicability to the ACA below.

The Policy Feedback Mechanism

The policy feedback mechanism suggests that the public plays a major role in the diffusion process by reacting to policy choices in other states and then pressuring their own officials to make similar decisions.2 The policy feedback mechanism has two components. First, policy preferences are shaped by policy choices in other jurisdictions. While there is evidence that policies influence mass preferences (Soss and Schram 2007; Pacheco 2014), less is known about spillover policy effects. The most compelling evidence comes from Pacheco (2012), who finds that public support for smoking bans in restaurants increased after enactment of antismoking legislation in neighboring states. Pacheco's (2012) work builds on other studies that show that individual behaviors are influenced by policy decisions in nearby states. For instance, residents travel to other states to purchase lottery tickets (Berry and Baybeck 2005), buy cigarettes (Hyland et al. 2005), and obtain abortions (Althaus and Henshaw 1994).

While there is empirical support for the first component of the policy feedback mechanism, it is unlikely to apply to all policies. Policies that are proximate and visible are most likely to shape public opinion (Soss and Schram 2007). Policies that are highly proximate, such as smoking bans or seatbelt laws, are those that individuals have direct and recurrent experiences with, while less proximate policies—including many redistributive policies—are largely hidden from citizens' daily lives. Highly visible policies are those that receive large amounts of media and electoral attention. Distinguishing policies based on proximity and visibility is important because these characteristics may determine how policies influence mass preferences. Proximate policies influence mass opinions through implementation (Soss 1999), while visible policies influence preferences through the information environment (Brewer 2003), which includes information transmitted via social networks and overlapping media markets (Zukin and Snyder 1984). We would expect the same pathways to apply to spillover policy effects. Spillover policy effects are most likely to occur when residents living near the borders of states have ample opportunities for direct policy experience or policy learning through the information environment.

Applied to the ACA, the visibility pathway is most relevant. However, not all decisions within the ACA are equally visible. Some decisions, such as choosing a structure for the online health insurance marketplace, are highly visible both to residents of the state and to residents of neighboring states. Citizens are less likely to be aware of other decisions, such as applying for federal funding. This leads to our first hypothesis:

H1

ACA state policy decisions that are highly visible influence policy preferences in nearby states.

The second component of the policy feedback mechanism is the policy responsiveness of elected officials to changing constituent opinions. State preferences are highly related to policy choices, regardless of whether scholars focus on broad measures of public opinion (Erikson, Wright, and McIver 1993) or specific policies (e.g., Hill, Leighley, and Hinton-Andersson 1995; Mooney and Lee 2000; Gray, Lowery, and Godwin 2007). More recent evidence using time series analyses finds evidence of dynamic policy responsiveness. Shifts in public support for governmental spending correspond to changes in state spending (Pacheco 2013), and an increase in support for antismoking legislation increases the probability that states adopt smoking bans (Pacheco 2012). Presumably, it is precisely because state officials are interested in reelection that they actively gauge and respond to shifts in public opinion in their home state (Erikson, MacKuen, and Stimson 2002). We would expect the same dynamics to apply to ACA policy decisions, which leads to our second hypothesis:

H2

Policy makers respond to shifts in support for the ACA among their constituents.

The Opinion Learning Mechanism

The opinion learning mechanism focuses on the reaction of elected officials to changing opinions in like-minded states. Berry and Berry (1992: 400), for instance, argue that adoptions in nearby states can counteract public opinion against a policy or intensify pressures to adopt a policy in a state where the public favors it. There is also evidence that states are more likely to emulate states that are ideologically similar (Volden 2006), suggesting that elected officials learn about viable policies from states that are similar in their policy preferences. Changes in national sentiment, particularly on salient policies (Nicholson-Crotty 2009), speed up or slow down the diffusion process as elected officials learn about the political gains or losses of policy enactments. According to the opinion learning model, when policy preferences between two states are similar, it provides a signal to elected officials about the viability of policies in their own state. Policy diffusion is, therefore, a product of elected officials responding to policy preferences elsewhere, as opposed to legislators learning about policy successes in other jurisdictions.

A crucial component of the opinion learning mechanism is the monitoring of external opinion by elected officials. While political officials often catch wind of shifting preferences among their constituents (Erikson, MacKuen, and Stimson 2002), there is also evidence that gauging shifts in policy preferences, especially on issues that are not salient (Burstein 2003), is burdensome for political officials (Weaver 2000; Manza and Cook 2002). While the rapid growth of public and private opinion polling has increased the amount and quality of information available to political actors (Geer 1996), it is also likely that poll respondents are somewhat unrepresentative of the broader constituency (Berinsky 1999). Given the difficulties in opinion monitoring, it is reasonable for elected officials to look elsewhere, and especially at those states that are similar, for information about their own residents.

State policy makers may have additional incentives to look at policy preferences elsewhere that have little to do with responding to constituent preferences. Instead, elected officials may look to states with similar ideological or partisan leanings in order to learn about how to “craft” their policy stances and “win” public support for what they desire (Jacobs and Shapiro 2000). According to this view, politicians are uncertain about public opinion, but believe that it is susceptible to change and malleable enough to support their preferred positions (Jacobs and Shapiro 2000). Elected officials rely on three techniques to change public opinion, including tracking public opinion, managing press coverage, and priming the public to support certain policies (Jacobs and Shapiro 2000). It is reasonable to assume that state officials use similar strategies to promote policy decisions and that the external monitoring of similar states is part of that strategy.

Recent work by Pacheco (2013) provides direct evidence that state legislators respond to public opinion outside their borders, although there is no explanation for whether officials are looking elsewhere due to sincere or strategic motives. Pacheco (2013) finds that legislators in less professional states respond to changes in national policy sentiment when deciding on expenditures instead of state-specific opinions, presumably because these states lack the necessary resources to gauge opinion shifts among their constituents.

Regardless of whether the opinion learning mechanism is a result of sincere or strategic motives, we might expect elected officials to pay the most attention to external opinion shifts on policies that are particularly salient. If opinion learning is sincere, then policy makers may have more uncertainty about how their constituents will respond in the next election to policy decisions on salient issues. If opinion learning is strategic, it is the highly partisan issues that increase competition among policy makers who then promote their favored positions via campaigns and counter-campaigns (Jacobs and Shapiro 2000). In both scenarios, we would expect the following as it applies to the ACA:

H3

States are more likely to emulate highly visible ACA decisions made by states with similar levels of ACA support.

The Diffusion of Affordable Care Act Decisions

The underlying design and structure of the ACA relies on the cooperation of the fifty states (Greer 2011), and states have significant leeway to tailor reform to the tastes of their residents (Jacobs and Skocpol 2012). We concentrate on two components of the ACA that we believe allow us to test our hypotheses regarding the policy feedback mechanism and the opinion learning mechanism. The first component is the timing of gubernatorial decisions regarding the new health insurance marketplace. While every state is required to have a health insurance marketplace, states may choose how much control they have in creating the marketplace. States can control all aspects of their marketplace (state marketplace), share control of the marketplace with the federal government (partnership marketplace), or cede all power in marketplace creation to the federal government (federal marketplace).

Governors, as the most powerful persons in state government (Rosenthal 1990; Beyle 2004), were instrumental in determining which option their state would take. By announcing the marketplace structure for their state, governors play a critical role in legitimizing the ACA by signaling to the public that health care reform in their state will move forward. We expect, then, that public support toward the ACA should increase following gubernatorial announcements in nearby states.

The second decision that we focus on is the timing of grant applications. The federal government offered grants for those states that were making progress toward establishing a marketplace. States choose when to apply for funding based on their needs and planned expenditures. Level 1 establishment grants (awarded to thirty-seven states, although states can apply for this grant multiple times) are available for states that are making progress in establishing a marketplace through a step-by-step approach. Level 2 establishment grants (awarded to fourteen states) are available for states that are moving ahead with their state-based marketplace at a faster pace. Here, we look at the timing and frequency of states' grant applications, rather than the amount of money states requested from the federal government, because we believe that these decisions signal willingness to implement the ACA (see Rigby 2012 for a similar argument).3

We picked these two decisions for empirical and substantive reasons.4 First, as described below, these decisions have ample variation over time, allowing us to explain how shifts in opinion influenced the dynamics of state policy making. Additionally, information regarding the timing of both policy decisions is readily available for us to code. Substantively, these two policy decisions vary in visibility and salience, allowing us to test H1 and H3. Gubernatorial announcements and the implementation of the health insurance marketplaces received high media attention at both the national and local levels (Gollust et al. 2014), and decisions regarding the ACA are highly salient to the American public (Blendon, Benson, and Brulé 2012). Unlike decisions about the marketplace, grant activity received relatively little media coverage, likely because state health departments rather than politicians applied for these grants. This may also explain why less visible policy decisions are not as responsive to public opinion. Because bureaucrats are not elected, they may feel less beholden to public preferences.

We anticipate the policy feedback mechanism and the opinion learning mechanism to be particularly influential for the timing of marketplace decisions and less so for the timing of grant applications. Given the salience and media coverage of the ACA, we anticipate that gubernatorial announcements will influence policy preferences in nearby states (H1). Similarly, because governors are sensitive to preferences in like-minded states, we expect changes in policy preferences toward the ACA in nearby states to influence gubernatorial announcements in the home state (H3). While both decisions are likely to be related to public opinion in the home state, we suspect that grant applications will be less related to changing preferences since these decisions are largely hidden from the public (H2).

Finally, while much is known about the determinants of state-level decision making regarding the types of marketplaces (federal, state-based, or partnership) (e.g., Jones, Bradley, and Oberlander 2014), scholars know relatively less about the timing of decisions related to gubernatorial announcements as well as the determinants of grant activity. Thus, our article not only contributes to the extensive literature on policy diffusion, but also has implications for health policy scholars.

Measuring State ACA Policy Decisions

We rely on policy briefs from the Kaiser Family Foundation to measure the month and year in which governors announced their marketplace decisions, starting in 2010. As shown in table 1, states differ significantly in both the type of marketplace exchange and the timing of gubernatorial announcements. California was the first state to announce the structure of its marketplace, in September 2010. By May 2013, all fifty states had announced their marketplace structure.

Data on the timing of grant applications come from various reports by the Centers for Medicare and Medicaid Services. Table 2 shows differences across the states in the type of grants and the timing of applications. States were eligible to apply for grants roughly once every quarter but were allowed to apply in multiple funding cycles. While the majority of states applied for one grant during each cycle, table 2 shows that Massachusetts applied for two L1 grants during the last quarter of 2014. Additionally, more states applied for L1 grants than for L2 grants. Given that L2 grants were relatively rare, the analyses below use a measure that combines grant activity for L1 and L2 grants.

Measuring State ACA Support over Time

We rely on the Kaiser Family Foundation (KFF) Health Tracking Polls to measure state opinion toward the ACA over time. Respondents were asked about support for the ACA in forty-seven consecutive months using the following question: “As of right now, do you generally support or generally oppose the health care proposals being discussed in Congress?” Respondent answers ranged from strongly support to strongly oppose. As the ACA became law, the question stem changed slightly to, “As you may know, a new health reform bill was signed into law.”5

To estimate state opinion toward the ACA, we rely on multilevel modeling, imputation, and post-stratification (referred to as MRP), developed by Gelman and Little (1997) and extended by Park, Gelman, and Bafumi (2004; 2006). MRP produces accurate estimates of public opinion by state (Lax and Phillips 2009), by congressional district (Warshaw and Rodden 2012), and over time (Pacheco 2011; 2013).

As described in the appendix, we model survey responses as a function of gender, race, age, education, region, state, and state presidential vote share. These are standard predictors in MRP models and perform quite well (Lax and Phillips 2009). For post-stratification, we use population frequencies obtained from the 2010 public-use microdata samples supplied by the Census Bureau.

Adding a Time Component

We add a time component by pooling surveys across a small time frame. We use a three-quarter moving average to estimate quarterly opinion toward the ACA. For instance, to get point estimates for Q1 of 2011 using a three-quarter pooled window, we combine surveys from Q4 of 2010, Q1 of 2011, and Q2 of 2011, and then perform the MRP technique on this pooled dataset. The MRP process is repeated for each quarter after moving the time frame up one quarter at a time. Because each window requires a quarter on both sides of the target (median) quarter, estimates for the first and last quarters of the series are missing. Pacheco (2011) shows that while there is a trade-off between the reliability of estimates and sensitivity to very short-term shocks, the efficiency benefits of pooling over a small time period outweigh the costs of bias.
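The following sketch illustrates the pooling scheme; the data frame, its columns, and the run_mrp routine are hypothetical stand-ins, not the authors' code.

```python
# A minimal sketch of the three-quarter pooled window described above.
import pandas as pd

def pooled_windows(surveys: pd.DataFrame, quarters: list):
    """Yield (target_quarter, pooled_surveys) pairs for MRP estimation."""
    # Each target quarter needs a neighbor on both sides, which is why
    # estimates for the first and last quarters in the series are missing.
    for i in range(1, len(quarters) - 1):
        window = quarters[i - 1 : i + 2]  # quarters t-1, t, and t+1
        yield quarters[i], surveys[surveys["quarter"].isin(window)]

# Example: the estimate for 2011Q1 pools respondents from 2010Q4-2011Q2.
# for target, pooled in pooled_windows(kff_polls, all_quarters):
#     estimates[target] = run_mrp(pooled)  # hypothetical MRP routine
```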

The use of multilevel modeling and post-stratification overcomes two major problems that arise when trying to measure state opinion from national surveys. Multilevel modeling increases the reliability of estimates for less populous states via “shrinkage towards the mean.”6 Indeed, the MRP approach has been shown to be superior to the aggregation method in terms of reliability, particularly when sample sizes are small, for instance, when N is less than 2,800 across all states (Lax and Phillips 2009). Post-stratification corrects for non-representativeness due to sampling designs by adjusting estimates so that they are more representative of state populations.

Figure 1 shows quarterly estimates of the percentage of state residents who are “very” or “somewhat” favorable toward the ACA from Q2 of 2010 to Q3 of 2014 for select states.7 While not exhaustive, figure 1 gives a descriptive glimpse into the dynamic properties of state opinion toward the ACA. State favorability toward the ACA is generally low; on average across the United States during this time period, only 46 percent of residents viewed the ACA favorably, which corroborates previous research. This national estimate, however, masks significant variation across and within states; 73 percent of the variance in ACA favorability is across states and 28 percent is within states. In some states (e.g., California), a majority of residents favor the ACA, while in others (e.g., West Virginia) support is much lower than the national average. As shown in figure 1, there is also movement in ACA favorability, with some states declining in support and others experiencing bouts of increased support.8

Empirically Testing the Policy Feedback Mechanism

We begin by testing whether ACA policy decisions have spillover effects on ACA preferences. Recall that we expect highly visible state policy decisions to influence policy preferences in nearby states; more specifically, we expect the timing of gubernatorial announcements, but not the timing of ACA grant applications, to influence shifts in support elsewhere. To test for spillover policy effects, we employ traditional time series methods, specifically an error correction model (ECM). An ECM allows for the estimation of both short- and long-term effects of independent variables and tells us how quickly the system returns to equilibrium, or the overall mean, after being disrupted. The dependent variable captures changes in opinion toward the ACA. A lagged dependent variable is included to account for time dependence. For all time-varying covariates, we include both the differenced independent variable (ΔX_t) and the lagged independent variable (X_t−1) to account for both short- and long-term effects.
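In generic form, for a single covariate X, the model just described can be written as a standard single-equation ECM (the notation here is ours, added for clarity):

\[
\Delta Y_{i,t} = \alpha_0 + \alpha_1 Y_{i,t-1} + \beta_0 \Delta X_{i,t} + \beta_1 X_{i,t-1} + \varepsilon_{i,t},
\]

where Y_{i,t} is ACA support in state i in quarter t, α_1 is the error correction rate discussed below, β_0 captures the short-term effect of a change in X, and −β_1/α_1 gives the long-run multiplier.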

The main independent variables are the proportion of neighboring states in which the governor has announced an ACA decision and neighboring grant activity. We control for a number of other factors that may influence changes in ACA preferences. First, we control for policy decisions in the home state with the expectation that there may be some policy feedback effects. Specifically, we include measures of whether the governor in the home state has already announced the ACA decision and of the home state's grant activity. We also include a measure of the type of exchange that a state announced, since states that defaulted to the federal government are generally less supportive of the ACA (Jones, Bradley, and Oberlander 2014). Finally, we include fixed unit effects (e.g., state dummies) to account for unit heterogeneity and fixed time effects (e.g., quarter/year dummies) to account for systemic factors. We also use panel-corrected standard errors, as suggested by Beck and Katz (2011).

Results are shown in table 3. The coefficient on the lagged dependent variable gives the error correction rate, with values closer to zero indicating a slower return to equilibrium. As shown in model 1 of table 3, the coefficient on the lagged dependent variable for state ACA support is −.46, suggesting that opinion is relatively quick to return to equilibrium when disrupted.

Consistent with the first part of the policy feedback mechanism, the model suggests that as the proportion of neighboring states that have announced their ACA decisions increases, ACA support in the home state increases in the short run, but not in the long run. The coefficient on the differenced proportion-of-neighboring-states variable gives the short-term effect of policy adoption on state public opinion. To get the estimated effect of a given change in X, we simply multiply that change by the coefficient. For instance, a .35 increase in the proportion of neighboring states that have announced the ACA decision (roughly two standard deviations above the mean change) increases public support for the ACA in the next quarter by about 1 percent (e.g., .35 × .02). Although this effect is small, it can accumulate if changes in neighboring policies occur in consecutive quarters.

Surprisingly, neighboring grant activity also has a statistically significant effect on public support for the ACA in the short term. For instance, the model predicts that if the number of grants applied for by neighboring states increased by two (which is roughly two standard deviations above the mean change), public support for the ACA also increased by about 2 percent (e.g., 2 × .009) in the next quarter.

If the policy feedback mechanism is at work, then opinion should influence the probability of state ACA decisions; state officials should respond to the preferences of state residents. To test the second component of the policy feedback mechanism, we employ event history analysis. The dependent variable in these models is the probability that state i will either announce its marketplace structure or apply for a federal grant in quarter t. For gubernatorial announcements, this variable takes a value of one in the quarter that the governor in state i announces the state's marketplace structure and a zero in all quarters prior to the announcement. Observations are dropped in the quarters after a state has declared the structure of its marketplace since the state is no longer “at risk” of innovating; this is the conventional coding scheme for event history analysis (Berry and Berry 1990). For grant activity, the dependent variable takes a value of one in all quarters in which a state applied for either an L1 or L2 grant and a zero otherwise. Because states may apply for multiple grants and are thus “at risk” of another grant application in every quarter, cases are not dropped once a state has applied for its first grant. In the case of multiple or repeated events, it is important to control for a state's previous decisions (Beck, Katz, and Tucker 1998: 1272). Accordingly, we include a count of the previous number of grant applications for each state.9 Since the dependent variables are dichotomous, we employ logistic regression. To account for potential problems of non-independence of observations and of heteroskedasticity, we cluster standard errors by state.
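To make the two coding schemes concrete, the sketch below constructs both dependent variables from a hypothetical state-quarter panel; the column names and values are illustrative stand-ins, not the authors' data.

```python
import pandas as pd

# Hypothetical state-quarter panel with the quarter of each governor's
# announcement and the number of grant applications filed each quarter.
panel = pd.DataFrame({
    "state":      ["CA", "CA", "CA", "TX", "TX", "TX"],
    "quarter":    [1, 2, 3, 1, 2, 3],
    "announce_q": [2, 2, 2, 3, 3, 3],  # quarter the governor announced
    "grants":     [1, 0, 2, 0, 1, 0],  # grant applications that quarter
})

# Gubernatorial announcements: 1 in the announcement quarter, 0 in all
# prior quarters, and dropped afterward (no longer "at risk").
ann = panel[panel["quarter"] <= panel["announce_q"]].copy()
ann["announced"] = (ann["quarter"] == ann["announce_q"]).astype(int)

# Grant activity: a repeated event, so no state-quarters are dropped;
# instead we carry a running count of prior applications as a control.
grants = panel.sort_values(["state", "quarter"]).copy()
grants["applied"] = (grants["grants"] > 0).astype(int)
grants["prior_apps"] = (
    grants.groupby("state")["grants"].cumsum() - grants["grants"]
)
```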

The main independent variable is state support for the ACA, as described in the previous section. According to H2, as support increases, the probability of announcement and grant activity should also increase. We include the proportion of neighbors that have announced their ACA decisions and neighboring grant activity to account for the influence of other states. Some states are highly involved in grant activity, and we expect states with more resources to be particularly well suited to apply for federal grants. States that belonged to the Robert Wood Johnson Foundation's (RWJF) State Health Reform Assistance Network likely had assistance in applying for grants.10 We therefore include a binary variable that captures whether the home state belongs to the RWJF State Network (1) or not (0).

We also control for a number of other determinants of state health policy making. We control for gubernatorial partisanship using a binary measure which takes the value of one if the governor is a Republican and zero otherwise. The ACA is a highly partisan issue and Republican-led states may take longer to announce their marketplace structures or be less likely to apply for grants. States with a larger uninsured population may be more proactive in implementing the ACA by announcing their marketplace structure early or may have greater need for federal funding assistance. To control for this, we include a measure of the percentage of uninsured state residents. We include several demographic measures that are often used in diffusion studies such as the natural log of the state's population size and the median income in the state. We also include time and time squared.11

Results are shown in table 4. As shown in table 4, ACA support influences gubernatorial announcements, but not state grant activity. More specifically, the model predicts that a state with the highest level of support for the ACA at time t−1 has a probability of announcing the ACA decision that is twenty points higher than a state with the lowest level of support. It is interesting to note that including the public opinion measures does not completely account for the influence of neighboring states on gubernatorial announcements; states are also more likely to announce their decisions if neighboring states have already announced. This suggests that additional mechanisms of policy diffusion, besides the policy feedback mechanism, may be present. Turning to state-level grant activity, the majority of variables do not have a statistically significant influence. We do find that states with a federal exchange were less likely to apply for grants. Consistent with our expectations, however, public opinion does not influence state-level grant activity.

Overall, our results suggest modest support for the policy feedback mechanism of diffusion. While we find that gubernatorial announcements and grant activity exhibit spillover effects that increased support for the ACA in neighboring states, public opinion is only significantly related to gubernatorial announcements in the home state. This generally conforms to our expectation that public opinion matters more for the diffusion of highly visible policy decisions, such as gubernatorial announcements regarding the ACA, than for less salient decisions, such as state-level grant activity.

Empirically Testing the Opinion Learning Mechanism

To test whether states are responsive to external public opinion on salient issues, as suggested by the opinion learning mechanism, we use directed dyad-quarter event history analysis where the dependent variables reflect increased similarity in policy decisions between two states in a dyad (Gilardi and Füglister 2008: 415). For gubernatorial announcements, we use a dichotomous measure that is coded one if the governor in State A announces that it will adopt the same marketplace that has already been announced by State B's governor in a previous quarter, and zero otherwise. For the time periods after State A has announced its marketplace structure, the dependent variable is set to missing since State A is no longer at risk of moving closer to State B's policy decision. For grant applications, our dependent variable takes a value of one if State A applies for a grant in quarter t that moves it closer to the number of grants that State B has applied for by quarter t–1. As with the previous grant application model, there is a possibility for repeated events whereby State A may apply for multiple grants which move its total number of grant applications closer to State B's total number of grant applications. It is important in cases of multiple or repeated events to control for the number of prior events, so we include a count of the previous instances of State A emulating State B's grant activity as suggested by Beck, Katz, and Tucker (1998: 1272).12

Dyadic analyses of policy diffusion are at risk for potential “emulation bias” whereby states appear to imitate another state, but, in reality, there is simply a trend for states to adopt the policy (Gilardi and Füglister 2008: 426–27; Boehmke 2009b). As a solution, Boehmke (2009b) suggests that researchers condition on whether states have the opportunity to be influenced by others by removing cases where Pr(y_ijt = 1) = 0. Accordingly, in the gubernatorial announcement models, we exclude observations where State B has not yet declared its marketplace structure. In the grant application models, we exclude cases if State B has never applied for any grants or if State B has applied for the same number or fewer grants than State A.

According to the opinion learning mechanism, policy makers will consider the level of policy support in another state, at least relative to their own citizens' policy support, when making policy decisions. Therefore, our independent variable of interest is the similarity in public support for the ACA within the dyad. More specifically, we measure similarity in ACA support by taking the absolute difference between ACA support in State A and ACA support in State B in the previous quarter. We expect that policy makers in State A use ACA preferences in State B when deciding to announce the marketplace structure (H3). Since marketplace decisions are more salient than grant activity, however, we do not expect to find the same effect for the timing of grant applications.
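The sketch below illustrates, with hypothetical data, how the directed dyads, the Boehmke (2009b) conditioning, and the absolute-difference opinion measure fit together; all names and values are our own illustration, not the authors' code.

```python
import pandas as pd
from itertools import permutations

# Hypothetical quarterly state data: announced marketplace structure
# (if any) and the MRP estimate of ACA support at t-1.
states = pd.DataFrame({
    "state":   ["CA", "TX", "IA"],
    "mkt":     ["state", None, "partnership"],
    "support": [0.58, 0.38, 0.46],
}).set_index("state")

rows = []
for a, b in permutations(states.index, 2):  # directed dyads (A, B)
    A, B = states.loc[a], states.loc[b]
    # Condition on the opportunity to emulate: drop dyads in which
    # State B has not yet announced its marketplace structure.
    if B["mkt"] is None:
        continue
    rows.append({
        "state_a": a,
        "state_b": b,
        # Key regressor: similarity in ACA support, lagged one quarter.
        "abs_diff_support": abs(A["support"] - B["support"]),
        # DV sketch: does A's structure match B's earlier announcement?
        # (In the real data this comes from announcement timing.)
        "same_mkt": int(A["mkt"] == B["mkt"]),
    })

dyads = pd.DataFrame(rows)
```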

We also control for several factors that may affect the similarity in state decisions. First, we include measures of similarity in demographic characteristics. Population ratio is the ratio of the larger state in the dyad to the smaller. We also control for the absolute difference in median income between the two states in the dyad, the absolute difference in liberal ideology in the two states,13 and the absolute difference between the percentages of the population in each state that are uninsured. Same region is a binary measure which captures whether states belong to the same region (1) or not (0). Same party governor is a dichotomous variable which takes the value of one if the governors of both states in the dyad belong to the same political party. We also include a measure of whether State B was in the RWJF state network with the expectation that policy makers may view state network members as well-informed about the ACA and are therefore more likely to make similar decisions.

Last, we have specific controls for each policy decision. In the gubernatorial announcement models, we include a series of dummy variables for State B's marketplace structure. In the grant application models, we include a series of dummy variables for State A's marketplace structure, since states with federal or partnership marketplaces may be less likely to apply for federal funding. States that have previously applied for grants may find it easier to apply in the future, so we include a count of State A's previous grant applications.

Since our dependent variables are dichotomous, we use logistic regression models with standard errors clustered by dyad.14 To account for a potential change in the baseline hazard across time, we include fixed time effects (e.g., quarter/year dummies).15
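As an illustration of this setup, the sketch below fits a logit with standard errors clustered by dyad on simulated data; the variable names are ours, not the authors'.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated dyad-quarter data standing in for the variables in the text.
rng = np.random.default_rng(0)
dyads = pd.DataFrame({
    "emulate":          rng.integers(0, 2, 400),   # DV: A moves toward B
    "abs_diff_support": rng.uniform(0, 0.4, 400),
    "same_party_gov":   rng.integers(0, 2, 400),
    "dyad_id":          rng.integers(0, 100, 400), # cluster identifier
})

X = sm.add_constant(dyads[["abs_diff_support", "same_party_gov"]])

# Logistic regression with cluster-robust standard errors by dyad.
fit = sm.Logit(dyads["emulate"], X).fit(
    cov_type="cluster", cov_kwds={"groups": dyads["dyad_id"]}
)
print(fit.summary())
```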

Results are shown in table 5. As suggested by the opinion learning mechanism, the absolute difference in ACA support is negative and statistically significant in the gubernatorial announcement model. When both states in a dyad have similar policy preferences, State A is more likely to announce the same marketplace structure that State B announced. As expected, the absolute difference in ACA support between two states is not statistically significant for grant activity. For policy decisions that are not as visible to the public, such as applying for federal funding, states do not consider the policy preferences of other states.

Figure 2 shows the predicted probability of State A announcing the same type of marketplace as State B across a range of absolute differences in ACA support.16 The probability that State A announces the same type of marketplace as State B when citizens of both states have the same level of support for the ACA (an absolute difference of 0) is .17. This probability decreases to .05 when the difference in ACA support is large (.4). These findings indicate that states may use similarity in policy preferences as a way to learn about policy decisions, at least for more visible policies.

Other factors also influence a state's decision to pursue similar policy actions. States are more likely to mirror the marketplace structures of other states when the governors belong to the same political party. This effect may be unsurprising given the politics surrounding the ACA, but it does indicate that policy makers may have looked for partisan cues when making ACA-related decisions. Ideology, population ratio, and the type of marketplace exchange influence similarity in gubernatorial announcements. Similarities in population and in the percentage uninsured matter for grant activity, and states are less likely to emulate the grant activity of other states with similar marketplace structures. Finally, we find that states with state-federal partnership marketplaces or federal marketplaces are less likely than states with state-based marketplaces to apply for a grant that moves the state closer to State B's grant activity. This effect likely arises because states with state-based marketplaces are simply more likely to apply for federal funding. States are also more likely to mirror another state's grant activity if they have previously copied State B's grant activity. This may reflect states' tendency to apply for multiple grants once they see that other states have successfully applied for several grants.

Conclusion

Our results suggest that public opinion played a modest role in the diffusion of ACA gubernatorial announcements and state-level grant activity. While we find that gubernatorial announcements and grant activity exhibited spillover effects that increased support for the ACA in neighboring states, public opinion is only significantly related to gubernatorial announcements in the home state. Additionally, we find that states are more likely to announce their ACA decisions when other states with similar levels of ACA support take action. This suggests that states may use similarity in policy preferences as a way to learn about policy decisions—at least for more visible policies such as marketplace structures.

These findings have larger implications for the political process. Research elsewhere suggests that governments react to the decisions of other governments either by learning from the policy experiments of others (Volden 2006) or by seeking an economic advantage over proximate states (Shipan and Volden 2008). Neither explanation places much weight on the influence of ordinary citizens. Our findings suggest that individuals play at least some role in the diffusion of policies, perhaps more so for policies that are highly visible. The influence that policy design and implementation have on policy preferences does not stop at state borders. Instead, policies have the potential to influence individuals elsewhere, who may then pressure their own officials to adopt similar designs.

In addition, we find evidence that state policy makers use external shifts in policy support as cues about how to make policy decisions, at least on highly visible issues. Public opinion, then, may be a potential lever that advocates use to accelerate the diffusion process, either by actively framing the policy debate in ways that are favorable toward their policy or by advising state officials about public support in similar states. Future research should consider whether external public opinion monitoring is due to sincere or strategic motives, as we discussed above.

Finally, our results have implications for the future of the ACA. While public support for the ACA has not reacted as positively or as quickly as predicted (Blendon, Benson, and Brulé 2012), the slow movement of support for the ACA nationally may be partially attributed to differences in the timing of state-level decisions. As implementation progresses, however, we should see ACA support increase as components of the ACA continue to exhibit spillover policy effects. Moreover, as residents become more supportive of the ACA and its various components, we may well see state legislators responding in more expansive ways.

This work has been supported (in part) by award #94-16-05 from the Russell Sage Foundation. Any opinions expressed are those of the author(s) alone and should not be construed as representing the opinions of the foundation.

Appendix: Multilevel Regression and Post-stratification (MRP)

To estimate state opinion toward the ACA, we rely on multilevel modeling, imputation, and post-stratification (referred to as MRP), developed by Gelman and Little (1997) and extended by Park et al. (2004; 2006). MRP produces accurate estimates of public opinion by state (Lax and Phillips 2009), by congressional district (Warshaw and Rodden 2012), and over time (Pacheco 2011; 2013).

MRP can be divided into three steps: (1) estimation of a multilevel regression with predictors; (2) imputation; and (3) post-stratification (see also Park et al. 2004, 2006; Lax and Phillips 2009). We begin with a multilevel model to estimate opinion for individuals given demographic and geographic characteristics. Individual responses are explicitly modeled as nested within states, and state-level effects capture residual differences. We model survey responses as a function of gender, race, age, education, region, state, and state presidential vote share. These are standard predictors in MRP models and perform quite well (Lax and Phillips 2009).

The next step is imputation. We define each combination of demographic and geographic characteristics (for instance, a non-black female, aged 18–29, with a high school degree from Connecticut) as a “person-type.” Each of the 3,264 person-types has an associated probability of supporting a particular policy, which is modeled in the multilevel regression as a function of individual and state covariates. Imputation is conducted on each person-type even if absent from the sample.
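For reference, this total is consistent with crossing four individual-level characteristics with the fifty states and the District of Columbia; the decomposition below is our inference from the reported count, not the authors' statement:

\[
\underbrace{2}_{\text{gender}} \times \underbrace{2}_{\text{race}} \times \underbrace{4}_{\text{age}} \times \underbrace{4}_{\text{education}} \times \underbrace{51}_{\text{states + DC}} = 3{,}264.
\]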

The final stage is post-stratification. Post-stratification corrects for differences between state samples and state populations by weighting the predicted values of each person-type in each state by actual census counts of that person-type in the state. We use population frequencies obtained from the 2010 public-use microdata samples supplied by the Census Bureau. The imputed opinion of each person-type is then weighted by the corresponding population frequency. In the final step, we calculate the average response over the person-types in each state and summarize to get point predictions and uncertainty intervals.
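A stylized sketch of this weighting step, assuming hypothetical cell-level predictions and census counts, follows.

```python
# Post-stratification sketch: weight each person-type's imputed
# probability of ACA support by its census count and average within
# state. All values and names are hypothetical.
import pandas as pd

cells = pd.DataFrame({
    "state":        ["CT", "CT", "CT"],
    "pred_support": [0.62, 0.48, 0.55],          # from the multilevel model
    "pop_count":    [120_000, 340_000, 90_000],  # census PUMS cell counts
})

weighted = cells["pred_support"] * cells["pop_count"]
state_support = (
    weighted.groupby(cells["state"]).sum()
    / cells.groupby("state")["pop_count"].sum()
)
print(state_support)  # population-weighted ACA support by state
```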

Adding a Time Component

We add a time component by pooling surveys across a small time frame; as in the main text, we use a three-quarter moving average to estimate quarterly opinion toward the ACA. For instance, to get point estimates for Q1 of 2011 using a three-quarter pooled window, we combine surveys from Q4 of 2010, Q1 of 2011, and Q2 of 2011 and then perform the MRP technique on this pooled dataset. The MRP process is repeated for each quarter after moving the time frame up one quarter at a time. Because each window requires a quarter on both sides of the target (median) quarter, estimates for the first and last quarters of the series are missing. Pacheco (2011) shows that while there is a trade-off between the reliability of estimates and sensitivity to very short-term shocks, the efficiency benefits of pooling over a small time period outweigh the costs of bias.

The use of multilevel modeling and post-stratification overcomes two major problems that arise when trying to measure state opinion from national surveys. Multilevel modeling increases the reliability of estimates for less populous states via “shrinkage towards the mean.” Indeed, the MRP approach has been shown to be superior to the aggregation method in terms of reliability, particularly when sample sizes are small, for instance, when N is less than 2,800 across all states (Lax and Phillips 2009). Post-stratification corrects for nonrepresentativeness due to sampling designs by adjusting estimates so that they are more representative of state populations.

Validity Check

State opinions toward the ACA, if valid as we have measured them, should correlate with other variables that attempt to measure the same concept. Two state surveys asked residents about ACA favorability (see Appendix A.2): the Kentucky Health Issues Poll (KHIP), 2010–2014, and the Ohio Health Issues Poll (OHIP), 2011. Both surveys were conducted by the Institute for Policy Research at the University of Cincinnati and funded by the Foundation for a Healthy Kentucky and the Health Foundation of Greater Cincinnati.1 When used with proper weights, aggregate estimates from KHIP and OHIP are representative of state populations. A key difference between the KFF polls and KHIP and OHIP is that the latter are yearly surveys, while the estimates from KFF shown in Figure 1 are quarterly. Additionally, recall that our estimates are based on a small moving average, which introduces additional error, albeit to improve reliability. Given this, it would be unlikely for our estimates to correspond exactly with measures from KHIP or OHIP. Nonetheless, we can still get a sense of how well MRP performs by comparing our estimates with those obtained from KHIP and OHIP.

Appendix A.2 shows the percentage of Kentucky/Ohio residents who support the ACA according to KHIP/OHIP compared to the MRP estimates. While the MRP estimates are not exactly the same as those from KHIP or OHIP, there are substantial similarities. Moreover, the correlation between the MRP estimates and the estimates from KHIP is a healthy .92, if the most dissimilar estimate in 2010 is excluded. If anything, MRP seems to underestimate shifts in opinion toward the ACA in Kentucky, no doubt due to the multilevel regression that pulls state averages toward the national mean in order to increase reliability. This suggests that it will be more difficult to obtain statistical significance in dynamic analyses that use these estimates, providing a more stringent test of the hypotheses outlined in the article.

Notes

1. We started with Graham, Shipan, and Volden's (2013) list of articles on policy diffusion and updated it through 2014. We then read each article to determine if and how public opinion was included in the analyses as well as in the theoretical discussions. We include only articles that looked at policy diffusion in the American states. Graham, Shipan, and Volden's (2013) original list is broad with studies that mention diffusion but do not focus on how or why policies are adopted; these were also not included in our final list of articles.

2. Pacheco (2012) refers to this mechanism as the “social contagion model.”

3. It is possible that the frequency with which states applied for grants reflects the amount of money states were awarded. For example, states that received large initial awards may not have needed to apply for future grants, while others needed more funding. However, we think this unlikely. States that were awarded less money for their first grant were not more likely to apply for later grants. For example, Alabama, which was awarded over $8 million for its first grant, did not apply for future funding, while Colorado, which received close to $40 million from its initial grant, applied for three more grants during this time period. To ensure that states' previous grant awards do not drive our results, we ran models that included the amount of money awarded for the previous grant. Our results remain unchanged.

4. States' decisions on Medicaid expansion were another highly visible policy choice in implementing the ACA. Due to data limitations, we chose not to study this policy. We expect, however, that the role of public opinion in the diffusion of states' Medicaid expansion decisions is similar to its role for gubernatorial announcements about marketplace structure, since both decisions were very public.

5. Few questions ask about insurance marketplaces (only five over the time period), and none capture preferences on grant activity. We follow the lead of others (Brace et al. 2002; Plutzer and Berkman 2005), and assume these questions capture a broader ideology about the ACA. Validity analyses largely confirm this assumption. Five surveys asked respondents about general opinions toward the ACA and insurance marketplaces. The correlation between these opinions is modest (r = .51). More important, in results available on request, both outcomes are predicted by similar covariates. For the most part, the same individual characteristics that predict support for the ACA also predict support for state marketplaces.

6. As pointed out by a reviewer, the MRP strategy may create more similarity in opinion for certain states, particularly the less populated states, which may falsely provide evidence of spread in opinion. This concern is precisely why we include few state-level covariates and rely mostly on individual demographic factors to estimate opinion in the first stage of the MRP strategy. We also add that the strategy creates similarity among states based on their sample sizes, not based on factors that we believe influence the spread of opinion such as contiguity or ideological similarity. We would be more concerned if the least populated states were in the same geographic area. Finally, we reanalyze the error correction models in table 3 but drop the ten least populated states that are at the highest risk of contaminating the results. Inferences from these models are nearly identical to the models reported in the manuscript.

7. See table A1 in the appendix for full text of the question wording and dates of the survey.

8. The  appendix provides additional information about the estimation strategy as well as validation checks.

9. This modeling strategy has the benefit of keeping all states in the analysis after the initial grant application, but assumes that all grant applications are predicted by the same covariates (see Boehmke 2009a: 236–37), which may not be realistic. As a robustness check, we ran separate models for the first grant application, second grant application, and so on. This strategy is not ideal since many observations are dropped from the analysis, and is inefficient since many of the covariates have the same effect for all grant applications. The results from these models are largely similar to the models shown.

10. With the goal of helping to expand health insurance coverage, the RWJF created the State Network in order to provide technical assistance to states as they worked to implement the ACA. State network members include Alabama, Colorado, Illinois, Maryland, Michigan, Minnesota, New Mexico, New York, Oregon, Rhode Island, and Virginia.

11. Inferences regarding the influence of public opinion are nearly identical when quarter/year dummy variables are included. We decided to include linear and squared versions of time since there were many quarters where no state announced its marketplace.

12. Volden (2006) also uses this strategy to control for the repeated nature of this type of dependent variable. As a robustness check, we also separately model the first instance of State A emulating State B's grant application activity, second instance, and so on. These models produce largely similar results.

13. We use Pacheco's (2014) measure of state ideology.

14. Each pair of states is included twice in each quarter. When we cluster by dyad (not directed dyad), each cluster includes both directed versions of the pair. However, clustering by directed dyad does not significantly change the results.

15. We also replicated the models using time, time², and time³, as suggested by Carter and Signorino (2010). This does not significantly change the results.

16. All other variables held constant at their mean or modal values.

1. The sample size for KHIP varies across time, but averages around 1,500, with statewide estimates accurate to plus/minus 2.5 percent. See www.healthy-ky.org for more information. The sample size for the 2011 OHIP survey is 908, with statewide estimates accurate to plus/minus 3.3 percent. For more information, see www.healthyfoundation.org/ohip.html.

References

Althaus, Frances A., and Henshaw, Stanley K. 1994. “The Effects of Mandatory Delay Laws on Abortion Patients and Providers.” Family Planning Perspectives 26: 228–33.
Barrilleaux, Charles, and Rainey, Carlisle. 2014. “The Politics of Need: Examining Governors' Decisions to Oppose the ‘Obamacare’ Medicaid Expansion.” State Politics and Policy Quarterly 14: 437–60.
Beck, Nathaniel, and Katz, Jonathan N. 2011. “Modeling Dynamics in Time-Series-Cross-Section Political Economy Data.” Annual Review of Political Science 14: 331–52.
Beck, Nathaniel, Katz, Jonathan N., and Tucker, Richard. 1998. “Taking Time Seriously: Time-Series-Cross-Section Analysis with a Binary Dependent Variable.” American Journal of Political Science 42: 1260–88.
Berinsky, Adam J. 1999. “The Two Faces of Public Opinion.” American Journal of Political Science 43: 1209–30.
Berry, Frances Stokes, and Berry, William D. 1990. “State Lottery Adoptions as Policy Innovations: An Event History Analysis.” American Political Science Review 84: 395–415.
Berry, Frances Stokes, and Berry, William D. 1992. “Tax Innovation in the States: Capitalizing on Political Opportunity.” American Journal of Political Science 36: 715–42.
Berry, William, and Baybeck, Brady. 2005. “Using Geographic Information Systems to Study Interstate Competition.” American Political Science Review 99: 505–19.
Berry, William D., Ringquist, Evan J., Fording, Richard C., and Hanson, Russell L. 1998. “Measuring Citizen and Government Ideology in the American States, 1960–93.” American Journal of Political Science 42: 327–48.
Beyle, Thad L. 2004. “The Governors.” In Politics in the American States, 8th ed., edited by Gray, Virginia, Hanson, Russell L., and Jacob, Herbert, 194–231. Washington, DC: CQ.
Blendon, Robert J., Benson, John M., and Brulé, Amanda. 2012. “Understanding Health Care in the 2012 Election.” New England Journal of Medicine 367, no. 17: 1658–61.
Boehmke, Frederick J. 2009a. “Approaches to Modeling the Adoption and Diffusion of Policies with Multiple Components.” State Politics and Policy Quarterly 9: 229–52.
Boehmke, Frederick J. 2009b. “Policy Emulation or Policy Convergence? Potential Ambiguities in the Dyad Event History Approach to State Policy Emulation.” Journal of Politics 71, no. 3: 1125–40.
Brace, Paul, Sims-Butler, Kellie, Arceneaux, Kevin, and Johnson, Martin. 2002. “Public Opinion in the American States: New Perspectives Using National Survey Data.” American Journal of Political Science 46: 173–89.
Brewer, Thomas L. 2003. “The Trade Regime and the Climate Regime: Institutional Evolution and Adaptation.” Climate Policy 3: 329–41.
Burstein, Paul. 2003. “The Impact of Public Opinion on Public Policy: A Review and an Agenda.” Political Research Quarterly 56, no. 1: 29–40.
Carter, David B., and Signorino, Curtis S. 2010. “Back to the Future: Modeling Time Dependence in Binary Data.” Political Analysis 18: 271–92.
CMS (Centers for Medicare and Medicaid Services). n.d. The Center for Consumer Information and Insurance Oversight. “Creating a New Competitive Health Insurance Marketplace.” www.cms.gov/CCIIO/Resources/Marketplace-Grants/index.html (accessed December 18, 2015).
Crowley, Jocelyn Elise. 2004. “When Tokens Matter.” Legislative Studies Quarterly 29: 109–36.
Erikson, Robert S., Wright, Gerald C., and McIver, John P. 1993. Statehouse Democracy: Public Opinion and Policy in the American States. Cambridge: Cambridge University Press.
Erikson, Robert S., MacKuen, Michael B., and Stimson, James A. 2002. The Macro Polity. Cambridge: Cambridge University Press.
Geer, John Gray. 1996. From Tea Leaves to Opinion Polls: A Theory of Democratic Leadership. New York: Columbia University Press.
Gelman, Andrew, and Little, Thomas C. 1997. “Poststratification into Many Categories Using Hierarchical Logistic Regression.” Survey Methodology 23, no. 2: 127–35.
Gilardi, Fabrizio, and Füglister, Katharina. 2008. “Empirical Modeling of Policy Diffusion in Federal States: The Dyadic Approach.” Swiss Political Science Review 14: 413–50.
Gollust, Sarah E., Barry, Colleen L., Niederdeppe, Jeff, Baum, Laura, and Fowler, Erika Franklin. 2014. “First Impressions: Geographic Variation in Media Messages during the First Phase of ACA Implementation.” Journal of Health Politics, Policy and Law 39: 1253–62.
Graham, Erin R., Shipan, Charles R., and Volden, Craig. 2013. “The Diffusion of Policy Diffusion Research in Political Science.” British Journal of Political Science 43: 673–701.
Gray, Virginia, Lowery, David, and Godwin, Erik K. 2007. “The Political Management of Managed Care: Explaining Variations in State Health Maintenance Organization Regulations.” Journal of Health Politics, Policy and Law 32: 457–95.
Greer, Scott L. 2011. “The States' Role under the Patient Protection and Affordable Care Act.” Journal of Health Politics, Policy and Law 36: 469–73.
Hill, Kim Quaile, Leighley, Jan E., and Hinton-Andersson, Angela. 1995. “Lower-Class Mobilization and Policy Linkage in the U.S. States.” American Journal of Political Science 39: 75–86.
Hyland, Andrew, Higbee, Cheryl, Li, Qiang, Bauer, Joseph E., Giovino, Gary A., Alford, Terry, and Cummings, K. Michael. 2005. “Access to Low-Taxed Cigarettes Deters Smoking Cessation Attempts.” American Journal of Public Health 95: 994.
Jacobs, Lawrence R., and Shapiro, Robert Y. 2000. Politicians Don't Pander: Political Manipulation and the Loss of Democratic Responsiveness. Chicago: University of Chicago Press.
Jacobs, Lawrence R., and Skocpol, Theda. 2012. Health Care Reform and American Politics: What Everyone Needs to Know. New York: Oxford University Press.
Jones, David K., Bradley, Katharine W. V., and Oberlander, Jonathan. 2014. “Pascal's Wager: Health Insurance Exchanges, Obamacare, and the Republican Dilemma.” Journal of Health Politics, Policy and Law 39: 97–137.
KFF (Kaiser Family Foundation). 2013. “State Marketplace Profiles.” kff.org/health-reform/state-profile/state-exchange-profiles.
KFF (Kaiser Family Foundation). 2015. “Kaiser Family Foundation Poll: January 2010–January 2015” (dataset). Princeton Survey Research Associates International (producer). Storrs, CT: Roper Center for Public Opinion Research, RoperExpress (distributor).
Konisky, David M. 2007. “Regulatory Competition and Environmental Enforcement: Is There a Race to the Bottom?” American Journal of Political Science 51: 853–72.
Lax, Jeffrey R., and Phillips, Justin H. 2009. “How Should We Estimate Public Opinion in the States?” American Journal of Political Science 53: 107–21.
Lax, Jeffrey R., and Phillips, Justin H. 2012. “The Democratic Deficit in the States.” American Journal of Political Science 56: 148–66.
Manza, Jeff, and Cook, Fay Lomax. 2002. “A Democratic Polity? Three Views of Policy Responsiveness to Public Opinion in the United States.” American Politics Research 30: 630–67.
Matsusaka, John G. 2001. “Problems with a Methodology Used to Evaluate the Voter Initiative.” Journal of Politics 63: 1250–56.
Mooney, Christopher Z., and Lee, Mei-Hsien. 2000. “The Influence of Values on Consensus and Contentious Morality Policy: U.S. Death Penalty Reform, 1965–82.” Journal of Politics 62: 223–39.
Morehouse, Sarah M., and Jewell, Malcolm E. 2004. “States as Laboratories: A Reprise.” Annual Review of Political Science 7: 177–203.
Nicholson-Crotty, Sean. 2009. “The Politics of Diffusion: Public Policy in the American States.” Journal of Politics 71: 192–205.
Pacheco, Julianna. 2011. “Using National Surveys to Measure State Public Opinion over Time: A Guideline for Scholars and an Application.” State Politics and Policy Quarterly 11, no. 4: 415–39.
Pacheco, Julianna. 2012. “The Social Contagion Model: Exploring the Role of Public Opinion on the Diffusion of Antismoking Legislation across the American States.” Journal of Politics 74: 714–34.
Pacheco, Julianna. 2013. “The Thermostatic Model of Responsiveness in the American States.” State Politics and Policy Quarterly 13, no. 3: 306–32.
Pacheco, Julianna. 2014. “Measuring and Evaluating Changes in State Opinion across Eight Issues.” American Politics Research 42: 986–1009.
Park, David K., Gelman, Andrew, and Bafumi, Joseph. 2004. “Bayesian Multilevel Estimation with Poststratification: State-Level Estimates from National Polls.” Political Analysis 12: 375–85.
Park, David K., Gelman, Andrew, and Bafumi, Joseph. 2006. “State-Level Opinions from National Surveys: Poststratification Using Multilevel Logistic Regression.” In Public Opinion in State Politics, edited by Cohen, Jeffrey E., 209–28. Stanford, CA: Stanford University Press.
Plutzer, Eric, and Berkman, Michael B. 2005. “The Graying of America and Support for Funding the Nation's Schools.” Public Opinion Quarterly 69: 66–86.
Rigby, Elizabeth. 2012. “State Resistance to ‘Obamacare.’” Forum 10, no. 2: 1–16.
Rosenthal, Alan. 1990. Governors and Legislatures: Contending Powers. Washington, DC: CQ.
RWJF (Robert Wood Johnson Foundation). n.d. “About State Network.” statenetwork.org/about/ (accessed November 21, 2016).
Shipan, Charles, and Volden, Craig. 2008. “The Mechanisms of Policy Diffusion.” American Journal of Political Science 52, no. 4: 840–57.
Soss, Joe. 1999. “Lessons of Welfare: Policy Design, Political Learning, and Political Action.” American Political Science Review 93: 363–80.
Soss, Joe, and Schram, Sanford F. 2007. “A Public Transformed? Welfare Reform as Policy Feedback.” American Political Science Review 101: 111–27.
Taylor, Jami K., Lewis, Daniel C., Jacobsmeier, Matthew L., and DiSarro, Brian. 2012. “Content and Complexity in Policy Reinvention and Diffusion: Gay and Transgender-Inclusive Laws against Discrimination.” State Politics and Policy Quarterly 12: 75–98.
Volden, Craig. 2006. “States as Policy Laboratories: Emulating Success in the Children's Health Insurance Program.” American Journal of Political Science 50: 294–312.
Walker, Jack L. 1969. “The Diffusion of Innovations among the American States.” American Political Science Review 63: 880–99.
Warshaw, Christopher, and Rodden, Jonathan. 2012. “How Should We Measure District-Level Public Opinion on Individual Issues?” Journal of Politics 74: 203–19.
Weaver, R. Kent. 2000. Ending Welfare as We Know It? Washington, DC: Brookings Institution.
Zukin, Cliff, and Snyder, Robin. 1984. “Passive Learning: When the Media Environment Is the Message.” Public Opinion Quarterly 48: 629–38.