Abstract

Context: Medicaid is the largest health insurance program by enrollment in the United States. The program varies across states along a variety of dimensions, including what it is called; some states use state-specific naming conventions, such as MassHealth in Massachusetts.

Methods: In a preregistered online survey experiment (N = 5,807), the authors tested whether public opinion shifted in response to the use of state-specific Medicaid program names and the provision of information about program enrollment.

Findings: Replacing “Medicaid” with a state-specific name resulted in a large increase in the share of respondents reporting that they “haven't heard enough to say” how they felt about the program. This corresponded to a decrease in both favorable and unfavorable attitudes toward the program. Although confusion increased among all partisan groups, there is evidence that state-specific names may also strengthen positive perceptions among Republicans. Providing enrollment information generally did not affect public opinion.

Conclusions: These findings offer suggestive evidence that state-specific program names may muddle understanding of the program as a government-provided benefit. Policy makers seeking to bolster support for the program or claim credit for expanding or improving it may be better served by simply referring to it as “Medicaid.”

Medicaid, the health insurance program for lower-income Americans, covered 86 million people in June 2023 (CMS 2023). This makes it the largest health insurance program in the United States by enrollment; in comparison, about 66 million people were enrolled in Medicare in June 2023.

Medicaid is broadly popular, with three quarters (76%) of Americans reporting a somewhat or very favorable view of the program. Democrats are more likely to report net (somewhat or very) favorable views (90%) than are independents (75%) or Republicans (65%) (Kirzinger et al. 2023). This polarization in net favorability is sharper than that observed for Medicare, where 89% of Democrats report a net favorable view compared to 78% of independents and 79% of Republicans.

Beyond net favorability, polls suggest Democrats are also more enthusiastically favorable toward Medicaid: 43% of Democrats report having a “very favorable” view of the program, compared to only 24% of independents and 19% of Republicans. Similar (although slightly less polarized) trends are observed for Medicare, with about half (49%) of Democrats reporting a “very favorable” view of the program, compared to 28% of both Republicans and independents (Kirzinger et al. 2023).

During the effort to repeal the Affordable Care Act (ACA) and replace it with the American Health Care Act of 2017—legislation that would have increased the number of uninsured, largely through cuts to Medicaid—two thirds (65%) of Americans opposed reductions to federal program funding. Here again there were marked partisan differences: opposition to Medicaid cuts was nearly universal among Democrats (88%), while only 38% of Republicans opposed these changes. Independents fell in between, with 63% opposing Medicaid cuts (Kirzinger et al. 2017).

Partisan differences in Medicaid favorability could reflect differences in a variety of relevant underlying attitudes. For example, two thirds (67%) of Democrats believe that the federal government should be more involved in health care than it is today, while a majority of Republicans (56%) think it should be less involved (McIntyre et al. 2023). A large majority of Democrats (79%) would characterize Medicaid primarily as a health insurance program (rather than as a government welfare program), but just more than half of Republicans (54%) would characterize it primarily as a welfare program (Kirzinger et al. 2023).

Attitudes toward the Medicaid program and the “deservingness” of its beneficiaries are also demonstrably racialized (Gilens 1996; Gollust and Lynch 2011; Leitner, Hehman, and Snowden 2018; Maltby and Kreitzer 2023). Support for expanding state Medicaid programs under the ACA differed by race, with state decisions correlated to the opinion of white residents (while not correlating to the views of nonwhite residents) (Grogan and Park 2017a). Survey research has found that conservatives and non-Hispanic white respondents who exhibit higher levels of racial resentment are more likely to support policies that would require certain Medicaid beneficiaries to work in order to maintain Medicaid benefits (Haeder, Sylvester, and Callaghan 2021). Similarly, non-Hispanic white respondents with high racial resentment are more likely than their counterparts with low racial resentment to favor other policies that would impose administrative burden in Medicaid, such as more frequent renewals and eliminating presumptive eligibility policies (Haeder and Moynihan 2023).

To date, there has been relatively limited research on the malleability of partisan attitudes toward Medicaid writ large (as distinct from expansion under the ACA). Haeder, Sylvester, and Callaghan (2023) found that emphasizing citizenship and residency requirements can improve perceptions of the program, particularly among Republicans. In the specific context of program expansion, a 2013 poll found that highlighting the number of uninsured and the fact that states declining to expand were forgoing federal dollars increased public support (KFF 2013). Along similar lines, a content analysis of statements by Republican governors who supported Medicaid expansion found that they often relied on frames related to morality and economic benefits to their state (Singer and Rozier 2020).

In this article, we focus on two other plausibly salient features of the Medicaid program: what it is called and how many people it serves. We first ask whether states opting to “brand” their Medicaid programs using state-specific program names affects public support for the program. Second, we investigate whether providing information about the scale of the Medicaid program (enrollment levels in one's home state) influences support.

Using a preregistered national survey experiment,1 we find that using state-specific naming conventions—tailored to a respondent's state of residence—instead of “Medicaid” results in a substantively large decrease in both positive and negative attitudes toward the program; instead, a significantly larger share of respondents reported that they “haven't heard enough to say” how they feel about the program. This occurs even though the question prompt consistently describes the program as “government health insurance for certain low-income adults and children in your state.”

There is limited heterogeneity in these treatment effect results based on the partisanship of the respondent. Relative to being shown “Medicaid,” Republicans in our sample are more likely to say they have a “very favorable” view (and are less likely to report a “somewhat favorable” view) when state-specific names are used; but this shift does not change net favorability among Republicans, and neither change is as large as the increase in the share of Republicans reporting uncertain (“haven't heard enough to say”) opinions. Favorability among Democrats and independents erodes as substantially larger shares report uncertain attitudes toward the program.

In general, providing information about program enrollment does not appear to change public opinion overall or for specific partisan groups. The one nuance we observed is that Republicans who received both the state-specific name and enrollment information are less likely to report “very favorable” views (and are more likely to report “somewhat favorable” views) than Republicans who only received the state-specific naming condition. In other words, enrollment information appears to attenuate the somewhat-to-very-favorable shift that we observed among Republicans who received the state-specific naming condition alone.

Background and Theory

At Medicaid's inception in 1965, the program covered a narrow set of populations in the United States, primarily low-income Medicare beneficiaries and women and children receiving Aid to Families with Dependent Children (Moore and Smith 2005). The number and types of people covered by Medicaid have expanded incrementally over time, with the ACA representing the largest fundamental change to Medicaid eligibility rules. Today, the program covers about one in seven of the country's nonelderly adults and more than one third of its children (Keisler-Starkey and Bunch 2022). Medicaid enrollment reached historic highs during the COVID-19 public health emergency as the result of a policy that required states to keep enrollees continuously covered from early 2020 to early 2023. As states unwind this continuous coverage policy, enrollment numbers are expected to fall (ASPE 2022).

The program varies across states on a variety of dimensions, including what it is called. Some states use state-specific naming conventions such as Medi-Cal in California, TennCare in Tennessee, and Texas STAR. In total, 14 states use state-specific names for their Medicaid programs (across eligibility groups). Most states also have tailored names for their Children's Health Insurance Program (a subsidiary of Medicaid serving children and pregnant individuals); Vermont calls its CHIP program Dr. Dynasaur, while Georgia offers PeachCare for Kids. Some states have assigned names for their Medicaid managed care programs, which typically span eligibility groups; where managed care penetration is particularly high, this may be functionally equivalent to having a state-specific name for the full program. Nine states use state-tailored names for their managed care programs and enroll more than 90% of nonelderly adults in managed care plans.

Historical interviews with program officials suggest that these state-specific Medicaid program names were adopted to dampen the perception of Medicaid as a “welfare” program (Perry 2003; Zabawa 2001). According to one account, “MassHealth is the name Massachusetts has given to its Medicaid and SCHIP programs in an effort to minimize the stigma that may deter some people from applying for benefits” (Mitchell and Osber 2002). As other scholars put it more recently, “The new names intentionally hid the connection to Medicaid and presented a new frame that eligible recipients are deserving” (Grogan and Park 2017b).

More recently, some states branded efforts to expand Medicaid under the ACA, particularly in red and purple states. In 2015, then-governor of Indiana Mike Pence attempted to create distance between his state's program and “Medicaid expansion” more generally, intentionally calling it the “Healthy Indiana Plan 2.0” (HIP 2.0).2 Pence also said, “I believe Medicaid is not a program we should expand. It's a program that we should reform—and that's exactly what we're accomplishing. HIP 2.0 is not intended to be a long-term entitlement program. It's intended to be a safety net that aligns incentives with human aspirations” (Rudavsky and Groppe 2015). In other words, Governor Pence believed that Medicaid expansion was worthwhile policy to implement in his state, but he did not want to call it that.

In general, replacing “Medicaid” with state-specific naming conventions might be expected to “submerge” the role of the government (Mettler 2011). Even where the state program writ large does not have a state-specific name, submersion may occur if there is a moniker specifically for the state's managed care system and managed care predominates (Tallevi 2018). For example, “Diamond State Health Plan” refers to Delaware's Medicaid managed care program, and 95% of Medicaid enrollees in Delaware are enrolled in managed care (KFF 2022).

This submersion could plausibly improve the program's popularity, particularly if people continue to link “Medicaid” not just with government but also with “welfare” (Huber and Paris 2013; Rasinski 1989; Shaw and Shapiro 2002; Smith 1987). One might expect these opinion shifts to be particularly pronounced among Republicans, who have more negative attitudes, on average, toward welfare and toward expanding the government's role in health care. Submersion has previously been shown to affect behavior in different ways depending on partisanship. Lerman, Sadin, and Trachtman (2017) found that Republicans without other sources of coverage were less likely than their Democratic counterparts to enroll in health insurance coverage through the marketplaces created by the ACA, but that those partisan differences could be attenuated by sending Republicans to an enrollment broker interface (HealthSherpa.com) that deemphasizes the role of government rather than to the federal HealthCare.gov website.

Even if state-specific naming conventions do not erode the public's connection between the program and government in general, there may still be an effect if the names submerge the role of the federal government and instead link the program to state government. According to a 2017 Washington Post–University of Maryland poll, only about 10% of Americans trust the federal government to “do what is right” always or most of the time (CAPC/Washington Post 2017). More than twice as many (23%) report trusting their state governments to do what is right always or most of the time.

Ex ante, the effect of providing enrollment information is not obvious. Previous research has found that Americans with a connection to Medicaid, either personally or through a family member or close friend, are more likely to consider the program important, regardless of partisanship (Grogan and Park 2017b). Surfacing how commonplace Medicaid enrollment is could also make the program feel less targeted and more universal, decreasing the likelihood that people view the program as “welfare,” with negative connotations (Cook and Barrett 1992; Greenstein 2022). It is thus possible that “emphasizing the wide reach” of Medicaid could improve public perceptions and foster positive policy feedbacks (Michener 2019). However, if respondents have entrenched negative attitudes toward government involvement in health care and believe the Medicaid program should be more limited in scope—likely to be truer of Republicans than of Democrats or independents—making enrollment numbers salient could dampen favorable attitudes or exacerbate negative perceptions.

Experimental Design and Analytic Approach

We employ an information-provision survey experiment with a 2 × 3 factorial design to determine whether varying what Medicaid is called and/or whether respondents are given information about the scale of program enrollment in their state affects how favorably (or unfavorably) they feel toward the program. Both treatment conditions are explained in greater detail below.

We adapt wording for our question of interest from existing surveys. Precise wording for the “control” version of the treatment question—using “Medicaid” and not providing any enrollment information—follows (sample question wording for each possible treatment arm is available in the appendix): “Thinking about the ways people in this country get their health insurance coverage: in general, do you have a favorable or an unfavorable opinion of Medicaid, the government health insurance for certain low-income adults and children in your state?”

Responses to this experimental survey question follow a Likert scale ranging from very favorable to very unfavorable, centered around “haven't heard enough to say.” In our main analyses, we treat these responses as discrete binary outcomes; that is, we run a regression using “responded ‘very favorable’” as an outcome, a second regression using “responded ‘somewhat favorable’” as an outcome, and so on.3 This facilitates an interpretation of the regression results as the predicted probability that an individual responds within a given level of favorability.
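To make this analytic approach concrete, the sketch below shows one way such a series of linear probability models could be set up. It is illustrative only, not the authors' code; the column names (`response`, `state_name`, `enrollment_info`, `party_id`, and the covariates) are assumptions for the example.

```python
# Illustrative sketch: one linear probability model per Likert response level,
# using assumed column names in a pandas DataFrame `df`.
import pandas as pd
import statsmodels.formula.api as smf

levels = [
    "very favorable",
    "somewhat favorable",
    "haven't heard enough to say",
    "somewhat unfavorable",
    "very unfavorable",
]

fits = {}
for level in levels:
    # Binary outcome: 1 if the respondent chose this response level, 0 otherwise.
    df["chose_level"] = (df["response"] == level).astype(int)
    fits[level] = smf.ols(
        "chose_level ~ state_name + enrollment_info + C(party_id) "
        "+ C(income_bin) + C(gender) + C(age_bin) + C(college)",
        data=df,
    ).fit(cov_type="HC2")  # heteroskedasticity-robust standard errors

# Each coefficient on state_name is then read as a change in the predicted
# probability of giving that particular response.
```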

To evaluate whether treatment effects vary with self-described partisanship, we use a three-category partisanship variable: Democrat (reference group), Republican, or independent/third party. We interact this categorical partisanship variable with an indicator for being assigned to one of the treatment conditions.

Following our prespecified analysis plan, our regressions include demographic characteristics of the respondents as covariates to improve estimate precision. These covariates are income (binned, reference category: less than $25,000), gender (reference category: woman), age (binned, reference category: younger than 45), and college degree status (reference category: has college degree). The appendix includes a balance table showing that these covariates are similar across treatment groups, as well as versions of our models that omit the covariates.

Following the literature on reporting results from survey experiments, our regression tables (presented in the appendix) report unweighted sample average treatment effects (Franco et al. 2017; Miratrix et al. 2018). For ease of interpretation, the figures presented in this article report predicted probabilities holding covariates at their means.
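As a rough illustration of the "predicted probabilities at covariate means" step, the sketch below uses a linear probability model with numeric 0/1 covariate dummies, so the at-means prediction is simply the fitted coefficients applied to a row of sample means. It assumes a DataFrame `df` with a binary outcome column `chose_level` (e.g., an indicator for "haven't heard enough to say") and illustrative covariate columns; it is not the authors' implementation.

```python
# Hedged sketch: predicted probabilities at covariate means for one outcome.
import pandas as pd
import statsmodels.api as sm

covariates = ["income_25_50k", "income_50k_plus", "male", "age_45_plus", "no_college"]
X = sm.add_constant(df[["state_name"] + covariates])
fit = sm.OLS(df["chose_level"], X).fit(cov_type="HC2")

at_means = X.mean()            # covariate profile at sample means
control, treated = at_means.copy(), at_means.copy()
control["state_name"] = 0      # question uses "Medicaid"
treated["state_name"] = 1      # question uses the state-specific name

# Predicted probability of the response, by naming condition, at covariate means.
print(fit.predict(pd.DataFrame([control, treated])))
```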

Treatment 1: State-Specific Naming Condition

In our first factorial condition, respondents were randomly assigned to receive a question that used either “Medicaid” or a state-specific name (tailored to the respondent's self-reported state of residence). For the 14 states that employ state-specific naming conventions for their full Medicaid programs, we used that program name (e.g., MassHealth). For states that do not use a state-specific name for the overall program but do so for their managed care population, we used the managed care program name only in cases where more than 90% of nonelderly Medicaid beneficiaries were enrolled in managed care (nine states).

For states that refer to the program simply as “Medicaid” or “Medical Assistance,” we invented a program name, such as “Healthy Maryland” or “Beehive Care” (Utah), for survey respondents assigned to the state-specific naming treatment. We also created program names in cases where state-specific naming conventions extended only to certain eligibility groups (e.g., only children or only adults who gained access to Medicaid through expansion under the ACA). We worried that these group-specific names were less likely to be recognized than names universally used for the program within the state and, in the case of expansion-based names, that responses to them would be more likely to be biased by perceptions of deservingness. Empirical analyses were conducted separately for states where the naming condition used real versus invented names. In the case of invented program names, we disclosed to respondents upon survey completion that the survey had used a false name for the Medicaid program.
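The decision rule described in the two preceding paragraphs can be summarized in a short sketch. It is purely illustrative: the 90% threshold and the example names follow the text above, but the data structure, function, and the invented placeholder name in the example are assumptions.

```python
# Illustrative encoding of the naming-assignment rule described above.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class StateNaming:
    statewide_name: Optional[str]     # e.g., "MassHealth"; None if not used
    managed_care_name: Optional[str]  # e.g., "Diamond State Health Plan"; None if not used
    managed_care_share: float         # share of nonelderly beneficiaries in managed care
    invented_name: str                # name created for the survey, e.g., "Healthy Maryland"

def treatment_name(state: StateNaming) -> Tuple[str, bool]:
    """Return the name shown to respondents and whether that name is real."""
    if state.statewide_name:
        return state.statewide_name, True
    if state.managed_care_name and state.managed_care_share > 0.90:
        return state.managed_care_name, True
    return state.invented_name, False

# Example: a Delaware-like case qualifies under the 90% managed care rule.
# ("First State Care" is a hypothetical placeholder, not a name used in the study.)
print(treatment_name(StateNaming(None, "Diamond State Health Plan", 0.95, "First State Care")))
```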

Treatment 2: Enrollment Information Condition

Respondents were also randomized to either receive no information about program enrollment or to one of two treatment conditions that provided information about how many state residents were enrolled in the program as of 2022 (using publicly available administrative data from the Centers for Medicare and Medicaid Services). One version of the treatment condition expressed this volume as a (rounded) number; the other expressed it as a percentage of the total state population, with the idea being that in some states Medicaid populations are substantial portions of the overall population.
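A minimal sketch of how the two framings might be generated from administrative counts follows; the function, wording, and figures are illustrative and are not the exact survey text or actual CMS data.

```python
# Hedged sketch: constructing the two enrollment-information framings.
def enrollment_statements(state: str, enrolled: int, population: int) -> dict:
    count_text = (
        f"About {enrolled / 1_000_000:.1f} million people in {state} "
        "are enrolled in the program."
    )
    percent_text = (
        f"About {100 * enrolled / population:.0f}% of people in {state} "
        "are enrolled in the program."
    )
    return {"total_enrolled": count_text, "percent_of_population": percent_text}

# Example with made-up figures:
print(enrollment_statements("Ohio", 3_200_000, 11_800_000))
```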

Table 1 summarizes the structure of our 2 × 3 factorial design. In our results, we analyze these treatment (naming convention and enrollment information) effects separately. Appendix table 3 displays the full list of treatments used, including all state-specific names and enrollment totals. Table A1 shows the balance of key demographic characteristics across arms.

Table 1

Experimental Design

Arm | Program name        | Enrollment information   | N
1   | Medicaid            | None                     | 959
2   | Medicaid            | Total enrolled           | 982
3   | Medicaid            | Percentage of population | 962
4   | State-specific name | None                     | 960
5   | State-specific name | Total enrolled           | 947
6   | State-specific name | Percentage of population | 997

Survey Administration

This web survey was administered through independent polling firm Data for Progress from late April to early May 2022. The panel, a nationwide sample of US adults, was recruited through high-quality online sample provider PureSpectrum (Data for Progress n.d.; Fischer et al. 2022).4

Preregistered Hypotheses

We preregistered several hypotheses drawing on theory and earlier empirical work. We predicted that removing federal government nomenclature from the program would increase public support for the program. As discussed previously, this was the stipulated rationale for states calling their programs something other than Medicaid in the first place.

We predicted that effects of this treatment would be heterogeneous by partisanship. Because Republicans are more skeptical of government and more likely to support privatization of government programs (Rudolph and Popp 2009), we expected they would also be more likely to respond positively to state-specific naming conventions that omit references to “Medicaid.”

  • H1A: Calling the program a state-specific name will lead to higher support for the program relative to it being called Medicaid.

  • H1B: The state-specific name treatment will have a larger positive effect among Republicans than among Democrats.

We also hypothesized that providing information about program enrollment would provoke attitudinal change. By revealing the scale of these programs, we predicted respondents would, on average, become more supportive as a result of updating their previous opinions on the universality of Medicaid. However, we hypothesized this would depend on partisanship, as Republicans are more likely to view Medicaid as a “welfare” program and may feel unfavorably toward welfare programs covering large populations.

  • H2A: Relative to not being told the size of a state's Medicaid program, individuals will be more supportive of the program when they are told the size.

  • H2B: The enrollment information treatment will have a larger positive effect among Democrats than among Republicans.

Effect of State-Specific Naming Conventions

Our regressions to evaluate the effect of state-specific naming conventions take the following form:
Favorability_Outcome = α + β1·State_Name + β2·Enrollment_Info + β3·PartyID + γX + ε

As described above, Favorability_Outcome took the form of binary variables for each potential response (very favorable, somewhat favorable, haven't heard enough to say, somewhat unfavorable, very unfavorable). State_Name is an indicator for being assigned to a state-naming treatment, Enrollment_Info is an indicator for also being assigned to an enrollment information treatment, and PartyID is the categorical variable capturing self-described partisanship. X is a vector of our prespecified covariates (included to improve precision). Our coefficient of interest is β1 from each respective model (capturing the main effect of treatment). To evaluate our hypotheses related to differential treatment effects by partisanship, we estimate models introducing one additional interaction term, β4(State_Name × PartyID) (again, with Democrats as the partisan reference group).
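A sketch of this interaction specification, again with illustrative column names (the same assumed DataFrame `df` and outcome indicator as in the earlier sketch) and Democrats as the reference category, might look like the following.

```python
# Illustrative interaction model for treatment-effect heterogeneity by party.
import statsmodels.formula.api as smf

het_fit = smf.ols(
    "chose_level ~ state_name * C(party_id, Treatment(reference='Democrat')) "
    "+ enrollment_info + C(income_bin) + C(gender) + C(age_bin) + C(college)",
    data=df,
).fit(cov_type="HC2")

# The state_name:party_id interaction coefficients show how the naming-treatment
# effect for Republicans and independents differs from the effect for Democrats.
print(het_fit.summary())
```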

Recognizing that interpretation of results could differ depending on whether the state-specific program name is “real” or was invented for purposes of this experiment, we run models separately for people residing in states where the names are actually used by the state (N = 2,516) and for people residing in states where the names were invented (N = 3,291).5

We find that referring to Medicaid by a (real) state-specific program name induces confusion. Figure 1 reports the (unweighted) predicted probabilities for each potential response item for our question of interest. On average, respondents in states that use state-specific program names (that is, omitting the states where names were invented for the experiment) are 12.5 percentage points more likely to say that they “haven't heard enough to say” how they feel about the program when the state-specific name is used. This increased uncertainty corresponds to decreases in both favorable and unfavorable attitudes toward the program. Full regression results are available in appendix table A3.

Figure 1

Distribution of overall attitudes toward Medicaid conditional on naming condition.

Notes: Figure reports (unweighted) predicted probabilities holding covariates at their sample means; full regression results available in appendix table A3. Error bars represent 95% confidence intervals. Model restricts to people residing in states where the state uses a state-specific program name.

Although there is heterogeneity in terms of baseline favorability—Democrats are substantially more likely to report “very favorable” views of the Medicaid program—there is little heterogeneity in the treatment effect based on the partisanship of the respondent. Figure 2 shows that the share of people reporting they “haven't heard enough to say” increases significantly for Democrats, independents, and Republicans. However, Republicans do seem to be slightly more inclined to report “very favorable” views of Medicaid when the state-specific name is used—an effect not observed among Democrats or independents.

Figure 2

Distribution of partisan attitudes toward Medicaid conditional on naming condition.

Notes: Figure reports (unweighted) predicted probabilities holding covariates at their sample means; full regression results available in appendix table A4. Error bars represent 95% confidence intervals. Model restricts to people residing in states where the state uses a state-specific program name.

This finding is congruent with our hypothesis that any favorability-enhancing effect of state-specific names would be stronger among Republicans than among Democrats. However, we emphasize that these results do not imply that the naming condition increased favorability overall among Republicans; because of the increase in uncertainty, net favorability fell among all partisan groups.

The uncertainty created by state-specific names is severely exacerbated by invented program names. In the states where we manufactured program names for purposes of the experiment (because the state did not have a state-specific name for its Medicaid program), the naming condition increased the share of respondents reporting that they “haven't heard enough to say” by about 26 percentage points (table A5).

We do not believe this setting is so artificial as to undermine generalizability. The observed effects suggest that if a state were to rebrand its Medicaid program—or an aspect of the program, such as the expansion population—this change, absent a concerted communications and outreach effort, could engender considerable confusion. Nor would that confusion necessarily dissipate entirely over time, as evidenced by the persistence of a treatment effect in the states where we did not fabricate a name (i.e., states that have long used their own program names). Virginia rebranded its Medicaid program as “Cardinal Care” in 2023, and New Mexico plans to start calling its program (currently Centennial Care) “Turquoise Care” in 2024 (Brown 2022; Ress 2022).

To better understand the confusion that arose from the use of state-specific names, we plotted the predicted probability that a respondent replied that they “hadn't heard enough to say,” stratifying respondents by (1) their naming condition treatment (Medicaid, real state-specific name, invented state-specific name) and (2) household income (fig. 3).

Figure 3

Share of respondents reporting uncertain views of the Medicaid program by income and naming condition.

Notes: Figure reports (unweighted) predicted probabilities holding covariates at their sample means. Error bars represent 95% confidence intervals.

The results are striking: in the lowest income group (<$25,000), whose members are plausibly most likely to have personal experience with the Medicaid program, using actual state-specific Medicaid names scarcely moves the share of respondents saying they have not heard enough to say how they feel about the program; at every higher income level, however, these names appear to increase confusion. Furthermore, this lowest-income group shows the largest increase in “haven't heard enough to say” responses when fake state-specific names are used, suggesting (perhaps unsurprisingly) that the lowest-income respondents have the strongest grasp of what the Medicaid program is called in their state.

Importantly, these findings are rooted in the structure of our survey question of interest—specifically, offering a “haven't heard enough to say” response. Many surveys do not offer explicit “don't know” (DK) response items; in these cases, DK responses are frequently combined with other forms of item nonresponse, such as refusing to answer the question for some other reason. Recent experimental work in survey methodology finds that the inclusion of DK response items can explain variation in how confident individuals are about their survey responses when not offered a DK option (Graham 2021). A key implication of this finding is that including an explicit DK response item is preferred when uncertainty is a quantity of interest, as in the present context.

Effect of Providing Enrollment Information

The enrollment information treatment could plausibly vary based on whether the question referred to “Medicaid,” used a real state-specific program name, or used a fake state program name invented for purposes of the survey experiment. Because of this, we subset the data to assess the effect of the enrollment information treatment within each naming convention treatment (Medicaid, real state-specific names, and invented state-specific names) separately. Enrollment information treatment arms took two forms: expressing volume as a rounded number (e.g., 1.2 million) or as a percentage of the total state population. Preliminary analyses demonstrated that there were no significant differences between these two approaches, so they were pooled for our analyses.

Our regressions took the following form, with variables taking the same form as described previously:
Favorability_Outcome = α + β1·Enrollment_Info + β2·PartyID + γX + ε
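As an illustration of this subset-and-pool approach, a sketch along the following lines could be used; the column names and condition labels are assumptions carried over from the earlier sketches, not the authors' code.

```python
# Hedged sketch: pool the two enrollment framings into one indicator, then fit
# the model separately within each naming condition.
import statsmodels.formula.api as smf

df["enrollment_info"] = (
    df["enrollment_arm"].isin(["total_enrolled", "percent_of_population"]).astype(int)
)

enrollment_fits = {}
for condition in ["medicaid", "real_state_name", "invented_state_name"]:
    subset = df[df["naming_condition"] == condition]
    enrollment_fits[condition] = smf.ols(
        "chose_level ~ enrollment_info + C(party_id) + C(income_bin) "
        "+ C(gender) + C(age_bin) + C(college)",
        data=subset,
    ).fit(cov_type="HC2")
```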

We find that providing enrollment information does not change attitudes for any of our naming conditions (appendix tables A7, A9, and A11). When we evaluate interactions with partisan identity, we find that Republican sentiment shifts from “very favorable” to “somewhat favorable” exclusively in the condition where respondents were provided with a (real) state-specific name for the Medicaid program (appendix table A10). That is to say, the introduction of enrollment information seems to erode the shift to “very favorable” that was observed among Republicans who received the state-specific naming condition alone.

Limitations

Our study has several limitations. First, while our survey draws from a national sample, we do not use survey weights in generating our estimates. This approach to calculating the sample average treatment effect is consistent with the empirical literature, but it precludes us from asserting that our results are truly representative. However, we emphasize that our results are internally valid because of the randomized nature of our survey experiment and that they should have broad generalizability as a result of the sample's national scope, large size, and high quality.

Second, we cannot rule out the possibility that respondents confused Medicaid and Medicare when survey questions used “Medicaid.” Previous research has shown that these programs are sometimes conflated (Norton, DiJulio, and Brodie 2015). We attempted to minimize this concern by including verbiage about the program covering low-income residents, but it remains possible that some of “Medicaid's” popularity is attributable to conflation with the more familiar Medicare program.

Third, because our survey instrument does not have questions that measure stigmatization (experienced or internalized), we cannot establish whether calling Medicaid by another name reduces stigma associated with program enrollment, the purported justification for early rebranding efforts. Our finding that state-specific names increase the share of respondents saying they “haven't heard enough to say” if they favor the program suggests that these naming conventions obfuscate—at least in part, and in particular for middle- and higher-income people—the fact that these state programs are actually Medicaid. It remains an open question whether this confusion translates to reductions in either experienced or internalized stigma; we believe this is a fruitful area for future research.

Conclusion

We find that states’ use of state-specific monikers for their Medicaid programs does not improve the popularity of the program; instead, these names create confusion about what the program is, dampening both favorable and unfavorable attitudes toward the program. Providing information about how many people are enrolled in Medicaid does not meaningfully alter public opinion.

The apparent confusion caused by state-specific program names also poses a political puzzle: if a state Medicaid program is successfully expanded or otherwise improved while obfuscating the role of the state, is credit given to politicians diminished? Models of retrospective voting suggest voters already have a difficult time accurately assigning credit in a federalist system (Healy and Malhotra 2013).

We believe there is a strategic benefit to policy makers in thinking about the names they choose when talking about their Medicaid programs. If they want to claim credit, using state-specific program names in speeches and statements may not be as effective as simply referring to “Medicaid.” Our results that rely on invented state names also imply that states need to ensure they adequately market any program name changes—particularly among beneficiaries—to minimize confusion.

Finally, we suggest that this experiment sheds light on notable patterns of partisanship and support for Medicaid. Our expectation had been that renaming programs to mask the federal government's role in them would increase support more among Republicans than among Democrats, given Republican skepticism toward government involvement in health care. We found some support for this hypothesis, in that Republicans were more likely to report a “very favorable” opinion of the program when the survey used a state-specific name. However, this small shift did not result in a net increase in favorability because it was offset by a shift from “somewhat favorable” to “haven't heard enough to say.” The net increase in uncertainty, observed regardless of partisanship, attenuated both favorable and unfavorable attitudes toward the program.

Contrary to our expectations, sharing information about the reach of the Medicaid program did not increase favorable views toward the program. The only discernible change in opinion was among Republicans who also received the state-specific naming treatment condition; in this group, there was a shift from “very favorable” to “somewhat favorable,” suggesting that the enrollment information attenuated the favorability gains associated with the state-specific moniker. While earlier research suggests that direct or indirect (through a family member or friend) experience with Medicaid increases support for the program, scholars and advocates should be cautious in assuming that conveying the reach of the program in a less personal context will have the same effect.

While partisanship drives opinion on many policy issues in contemporary American politics, including Medicaid policy, attempting to overcome these pressures by rebranding large—and generally popular—public programs does not seem to narrow these divides. Unless these naming conventions successfully reduce stigma associated with the program (a question that is beyond the scope of this study), state-specific Medicaid names may be a solution in search of a problem, serving mainly to obscure the role of government without generating material increases in public support, regardless of partisanship.

Acknowledgments

The authors gratefully acknowledge Data for Progress and Ethan Winter for making this research possible. We also thank Matthew Motta and Matthew Graham for helpful comments on earlier drafts.

Notes

1. Prespecified hypotheses and a prespecified analysis plan are available at https://osf.io/q63sg. Certain analytic choices (e.g., evaluating the effect of real and invented state-specific names separately) deviate from our prespecified approach, as described elsewhere.

2. The original Healthy Indiana Plan (HIP 1.0) was a program in Indiana that offered Medicaid coverage to certain low-income adults who would not otherwise qualify for the program. HIP 1.0 beneficiaries had a Personal Wellness and Responsibility Account, styled similarly to a health savings account, to which they were required to contribute as a condition of continued enrollment.

3. We make this analytic choice for two key reasons. First, because Medicaid is a broadly popular program, changes in attitudes may be more likely to manifest as a shift from “somewhat favorable” to “very favorable” (or vice versa) rather than as a transition from net-unfavorable to net-favorable views. Second, this choice most closely captures our quantity of interest: shifts in public opinion. Alternatively, we could treat the outcome as a continuous value, which would impose a strong assumption that “don't know” responses are centered at 0. Similarly, we could run multinomial logistic regressions. In both cases, results would be more difficult to interpret than under the current analytic framework.

4. Data for Progress is a polling firm that regularly conducts public opinion polling on salient political and policy issues. Data for Progress does not recruit its own samples but employs specialized high-quality online sample providers. It fields regularly scheduled opinion polls to a web panel, on which our survey instrument was included, and it has been used for other published survey experiments.

5. This approach differs from our original prespecified analysis, in which we did not propose to stratify our sample based on whether the state-specific naming treatment used an actual program name or a name invented for purposes of the experiment. We believe the approach described here—attending to whether the state-specific name is real or fake—yields results that are more directly interpretable and more relevant to our quantities of interest.

References

ASPE (US Department of Health and Human Services Office of the Assistant Secretary for Planning and Evaluation). 2022. “Unwinding the Medicaid Continuous Enrollment Provision: Projected Enrollment Effects and Policy Approaches.” Issue Brief, August 19. https://aspe.hhs.gov/sites/default/files/documents/a892859839a80f8c3b9a1df1fcb79844/aspe-end-mcaid-continuous-coverage.pdf.
Brown, Scott. 2022. “Human Services Renames Medicaid in New Mexico.” KRQE News 13, December 16. https://www.krqe.com/health/human-services-renames-medicaid-in-new-mexico/.
CAPC (University of Maryland Center for American Politics and Citizenship)/Washington Post. 2017. “Washington Post, University of Maryland Poll: September 2017 [Roper iPoll #31114631].” https://doi.org/10.25940/ROPER-31114631 (accessed December 20, 2023).
CMS (Centers for Medicare and Medicaid Services). 2023. “CMS Releases Latest Enrollment Figures for Medicare, Medicaid, and Children's Health Insurance Program (CHIP).” September 29. https://content.govdelivery.com/accounts/USCMSMEDICAID/bulletins/37309a2.
Cook, Fay Lomax, and Barrett, Edith. 1992. Support for the American Welfare State: The Views of Congress and the Public. New York: Columbia University Press.
Data for Progress. n.d. “Our Methodology.” https://www.dataforprogress.org/our-methodology (accessed August 23, 2023).
Fischer, Johannes, Bisogno, Cecilia, Winter, Ethan, and O'Donnell, Ryan. 2022. “DFP 2022 Polling Accuracy Report.” Data for Progress, December 2. https://www.dataforprogress.org/blog/2022/12/2/dfp-2022-polling-accuracy-report.
Franco, Annie, Malhotra, Neil, Simonovits, Gabor, and Zigerell, L. J. 2017. “Developing Standards for Post-Hoc Weighting in Population-Based Survey Experiments.” Journal of Experimental Political Science 4, no. 2: 161–72. https://doi.org/10.1017/XPS.2017.2.
Gilens, Martin. 1996. “Race and Poverty in America: Public Misperceptions and the American News Media.” Public Opinion Quarterly 60, no. 4: 515–41.
Gollust, Sarah E., and Lynch, Julia. 2011. “Who Deserves Health Care? The Effects of Causal Attributions and Group Cues on Public Attitudes about Responsibility for Health Care Costs.” Journal of Health Politics, Policy and Law 36, no. 6: 1061–95. https://doi.org/10.1215/03616878-1460578.
Graham, Matthew. 2021. “‘We Don't Know’ Means ‘They're Not Sure.’” Public Opinion Quarterly 85, no. 2: 571–93. https://doi.org/10.1093/poq/nfab028.
Greenstein, Robert. 2022. “Targeting, Universalism, and Other Factors Affecting Social Programs’ Political Strength.” Brookings, June 28. https://www.brookings.edu/articles/targeting-universalism-and-other-factors-affecting-social-programs-political-strength/.
Grogan, Colleen, and Park, Sunggeun Ethan. 2017a. “The Racial Divide in State Medicaid Expansions.” Journal of Health Politics, Policy and Law 42, no. 3: 539–72. https://doi.org/10.1215/03616878-3802977.
Grogan, Colleen, and Park, Sunggeun Ethan. 2017b. “The Politics of Medicaid: Most Americans Are Connected to the Program, Support Its Expansion, and Do Not View It as Stigmatizing.” Milbank Quarterly 95, no. 4: 749–82. https://doi.org/10.1111/1468-0009.12298.
Haeder, Simon, and Moynihan, Donald. 2023. “Race and Racial Perceptions Shape Burden Tolerance for Medicaid and the Supplemental Nutrition Assistance Program.” Health Affairs 42, no. 10: 1334–43. https://doi.org/10.1377/hlthaff.2023.00472.
Haeder, Simon, Sylvester, Steven M., and Callaghan, Timothy. 2021. “Lingering Legacies: Public Attitudes about Medicaid Beneficiaries and Work Requirements.” Journal of Health Politics, Policy and Law 46, no. 2: 305–55. https://doi.org/10.1215/03616878-8802198.
Haeder, Simon, Sylvester, Steven M., and Callaghan, Timothy. 2023. “More than Words? How Highlighting Target Populations Affects Public Opinion about the Medicaid Program.” Journal of Health Politics, Policy and Law 48, no. 5: 713–59. https://doi.org/10.1215/03616878-10637708.
Healy, Andrew, and Malhotra, Neil. 2013. “Retrospective Voting Reconsidered.” Annual Review of Political Science 16, no. 1: 285–306. https://doi.org/10.1146/annurev-polisci-032211-212920.
Huber, Gregory, and Paris, Celia. 2013. “Assessing the Programmatic Equivalence Assumption in Question Wording Experiments.” Public Opinion Quarterly 77, no. 1: 385–97. https://doi.org/10.1093/poq/nfs054.
Keisler-Starkey, Katherine, and Bunch, Lisa N. 2022. “Health Insurance Coverage in the United States: 2021.” US Census Bureau, September. https://www.census.gov/content/dam/Census/library/publications/2022/demo/p60-278.pdf.
KFF (Kaiser Family Foundation). 2013. “Kaiser Health Tracking Poll: April 2013.” April 30. https://www.kff.org/health-reform/poll-finding/kaiser-health-tracking-poll-april-2013/.
KFF (Kaiser Family Foundation). 2022. “Medicaid Managed Care Penetration Rates by Eligibility Group.” July. https://www.kff.org/medicaid/state-indicator/managed-care-penetration-rates-by-eligibility-group/.
Kirzinger, Ashley, DiJulio, Bianca, Wu, Bryan, and Brodie, Mollyann. 2017. “Kaiser Health Tracking Poll—July 2017: What's Next for Republican ACA Repeal and Replacement Plan Efforts?” Kaiser Family Foundation, July 14. https://www.kff.org/health-reform/poll-finding/kaiser-health-tracking-poll-july-2017-whats-next-for-republican-aca-repeal-and-replacement-plan-efforts/.
Kirzinger, Ashley, Presiado, Marley, Valdes, Isabelle, and Brodie, Mollyann. 2023. “KFF Health Tracking Poll March 2023: Public Doesn't Want Politicians to Upend Popular Programs.” Kaiser Family Foundation, March 30. https://www.kff.org/medicaid/poll-finding/kff-health-tracking-poll-march-2023-public-doesnt-want-politicians-to-upend-popular-programs/.
Leitner, Jordan B., Hehman, Eric, and Snowden, Lonnie R. 2018. “States Higher in Racial Bias Spend Less on Disabled Medicaid Enrollees.” Social Science and Medicine 208: 150–57. https://doi.org/10.1016/j.socscimed.2018.01.013.
Lerman, Amy E., Sadin, Meredith L., and Trachtman, Samuel. 2017. “Policy Uptake as Political Behavior: Evidence from the Affordable Care Act.” American Political Science Review 111, no. 4: 755–70. https://doi.org/10.1017/S0003055417000272.
Maltby, Elizabeth, and Kreitzer, Rebecca. 2023. “How Racialized Policy Contact Shapes the Social Constructions of Policy Targets.” Policy Studies Journal 51, no. 1: 145–62. https://doi.org/10.1111/psj.12481.
McIntyre, Adrianna, Blendon, Robert, Benson, John, Findling, Mary, and Schneider, Eric. 2023. “Popular . . . to a Point: The Enduring Political Challenges of the Public Option.” Milbank Quarterly 101, no. 1: 26–47. https://doi.org/10.1111/1468-0009.12599.
Mettler, Suzanne. 2011. The Submerged State: How Invisible Government Policies Undermine American Democracy. Chicago: University of Chicago Press.
Michener, Jamila. 2019. “Medicaid and the Policy Feedback Foundations for Universal Healthcare.” Annals of the American Academy of Political and Social Science 685, no. 1: 116–34. https://doi.org/10.1177/0002716219867905.
Miratrix, Luke, Sekhon, Jasjeet, Theodoridis, Alexander, and Campos, Luis F. 2018. “Worth Weighting? How to Think About and Use Weights in Survey Experiments.” Political Analysis 26, no. 3: 275–91. https://doi.org/10.1017/pan.2018.1.
Mitchell, Janet, and Osber, Deborah. 2002. “Using Medicaid/SCHIP to Insure Working Families: The Massachusetts Experience.” Health Care Financing Review 23, no. 3: 35–45.
Moore, Judith, and Smith, David. 2005. “Legislating Medicaid: Considering Medicaid and Its Origins.” Health Care Financing Review 27, no. 2: 45–52.
Norton, Mira, DiJulio, Bianca, and Brodie, Mollyann. 2015. “Medicare and Medicaid at 50.” Kaiser Family Foundation, July 17. https://www.kff.org/medicaid/poll-finding/medicare-and-medicaid-at-50/.
Perry, Michael. 2003. “Promoting Public Health Insurance for Children.” Future of Children 13, no. 1: 193–203. https://doi.org/10.2307/1602648.
Rasinski, Kenneth. 1989. “The Effect of Question Wording on Public Support for Government Spending.” Public Opinion Quarterly 53, no. 3: 388–94.
Ress, Dave. 2022. “Medicaid Managed Care Merger Starts Jan. 1 in Virginia.” News and Advance, November 30. https://newsadvance.com/news/state-and-regional/medicaid-managed-care-merger-starts-jan-1-in-virginia/article_99cd243a-84f7-54ca-9ead-fe30c5bda516.html.
Rudavsky, Shari, and Groppe, Maureen. 2015. “Gov. Pence Gets Federal OK for Medicaid Alternative.” Indianapolis Star, January 27. https://www.indystar.com/story/news/politics/2015/01/27/gov-pence-gets-federal-ok-medicaid-alternative/22396503/.
Rudolph, Thomas J., and Popp, Elizabeth. 2009. “Bridging the Ideological Divide: Trust and Support for Social Security Privatization.” Political Behavior 31, no. 3: 331–51.
Shaw, Greg, and Shapiro, Robert. 2002. “Trends: Poverty and Public Assistance.” Public Opinion Quarterly 66, no. 1: 105–28.
Singer, Phillip, and Rozier, Michael. 2020. “Shifting Threats and Rhetoric: How Republican Governors Framed Medicaid Expansion.” Health Economics, Policy, and Law 15, no. 4: 496–508. https://doi.org/10.1017/S174413312000002X.
Smith, Tom W. 1987. “That Which We Call Welfare by Any Other Name Would Smell Sweeter: An Analysis of the Impact of Question Wording on Response Patterns.” Public Opinion Quarterly 51, no. 1: 75–83. https://doi.org/10.1086/269015.
Tallevi, Ashley. 2018. “Out of Sight, Out of Mind? Measuring the Relationship between Privatization and Medicaid Self-Reporting.” Journal of Health Politics, Policy and Law 43, no. 2: 137–83. https://doi.org/10.1215/03616878-4303489.
Zabawa, Barbara. 2001. “The Access Problem: How Employee and Employer Issues May Increase Badgercare Participation by Impeding the Verification Process.” Wisconsin Women's Law Journal 16: 215–40.
