Abstract

Privatization has expanded dramatically, in both salience and form, over the past several decades. This shifting of administrative authority away from the state can make it difficult for program recipients to link their use of a federal program back to government, a disconnect known as “submerging” the state. However, privatization is a process that occurs in degrees, and not all privatization initiatives look alike. This study leverages variation in the implementation of Medicaid managed care, the most widespread form of Medicaid privatization, to examine how privatization maps onto state submersion and affects state visibility. The analysis shows that, although overall Medicaid managed care enrollment is unrelated to recipients' self-reported Medicaid enrollment, Medicaid self-reporting declines when privatized Medicaid plans introduce administrative designs that obscure the role of government. These findings demonstrate that policy recipients are less able to recognize both the personal relevance of a specific public program and the public nature of this interaction when privatized programs utilize design features that attenuate signals of government involvement. In highlighting this disconnect, this article shows how privatization makes it more difficult for policy recipients to engage in the civic sphere as informed advocates for their self-interest.

Privatization is a persistent presence within the leviathan of the US government and has been since the nation's founding (Kosar 2006). Over the past half century, however, the privatization of government services has grown in political salience and form (Verkuil 2007). This expansion and its consequences for service delivery are well documented (see, e.g., Hacker 2002; Kamerman and Kahn 1989), yet little is known about how privatization impacts its target population's political perceptions. Privatization involves the delegation of administrative authority to nonstate actors, creating an additional institutional layer between the public and the government. This shifting of administrative authority away from the state can make it more difficult for program recipients to link their use of a federal program back to government, a disconnect also known as “submerging” the state (Mettler 2011). However, privatization is a process that occurs in degrees, and not all privatization initiatives look alike. This study examines variation in the administrative design of privatized programs to isolate when privatization maps onto state submersion and affects state visibility.

The growth of Medicaid managed care plans, which represent the most widespread form of Medicaid privatization, offers a unique opportunity to measure how privatization shapes people's understanding of their interaction with a government program or service. Because Medicaid is a federal-state partnership program, states have flexibility in how they adopt Medicaid managed care, creating substantial between-state differences in implementation. In this research, I leveraged the resulting variation to isolate which features of this administrative shift are related to recipients' awareness of their Medicaid enrollment. Specifically, I used linked survey and administrative data to trace when privatization affects program recipients' perceptions of Medicaid's personal relevance, as well as whether recipients connect this program back to government.

This analysis shows that the form of privatization can alter its consequences for the target population. Privatization does not uniformly affect state visibility; rather, this consequence of privatization is contingent on its use of administrative designs that hide the role of the state. When I measured privatization simply using managed care enrollment, which is the most widely used measure of Medicaid privatization in studies of Medicaid underreporting, I found no relationship between privatization and Medicaid self-reporting. However, when I specified administrative features of Medicaid managed care organizations (MCOs) that obscure government's role, I found that these administrative features are related to the underreporting of Medicaid enrollment. Medicaid recipients become less likely to report their enrollment when MCOs mix Medicaid recipients and commercial enrollees in the same plan and when Medicaid MCOs include the private company's name in the MCO plan name. I also found that, when this misreporting occurs, Medicaid enrollees are frequently reporting enrollment in private insurance plans.

These results suggest that when privatized Medicaid plans utilize administrative elements that attenuate the role of the state, this encourages Medicaid participants to think of themselves as outside of the Medicaid population and to underestimate the role that government plays in the provision of their health insurance. In connecting privatization to patterns of Medicaid self-reporting, these findings also offer a partial explanation for the “Medicaid undercount,” the phenomenon in which surveys of Medicaid enrollment drastically undercount the size of the Medicaid population. Beyond Medicaid, this research highlights how privatization poses a threat to democratic accountability. By skewing people's understandings of the personal relevance of government social policy, privatization makes it difficult for policy recipients to engage in the political sphere as informed citizens and advocates for their self-interest.

Privatization and the Submerged State

Submerged policies can be distinguished from direct policies based on their use of private delivery mechanisms for administering government policies, creating potential difficulties for tracing these policies back to government (Hackett 2017). The administrative mechanisms commonly used by submerged policies include deferral of administrative capacity to nongovernmental organizations (commonly referred to as privatization) and reliance on the tax code for wealth transfers, rather than providing benefits directly (Mettler 2011). The implementation of these measures has allowed government to address a range of policy agendas without a corresponding increase in the size of formal institutions within the federal government (DiIulio 2014). The extent of the submerged state in US health care policy is exemplified by the “Mildred paradox.” Mildred was the mother-in-law of political scientist Don Kettl. In the last few years of her life, a combination of Medicaid and Medicare paid for Mildred's extensive health care needs, including her $85,000-per-year nursing home. Despite the numerous government-funded health services she received, Mildred never encountered a government employee (Kettl 2009). Stories like hers are increasingly common as governments shift responsibility for service provision toward private actors.1

In her seminal work on the submerged state, Mettler (2011) details how submerged state policies have led Americans to underestimate the extent to which they engage with and benefit from government actions. For instance, 60 percent of respondents who reported using the home mortgage interest deduction (a tax deduction based on interest payments for mortgages) also said that they had never used a government social program. In documenting this disjuncture, Mettler relies on self-reports of program use. By relying on self-reports alone, however, such research is unable to measure whether submerged state policies influence people's understanding of when they are benefiting from a specific government program. Put differently, it does not test whether policy submersion, broadly, or privatization, specifically, affects people's perceptions of program receipt. Additionally, the submerged programs Mettler surveyed are primarily wealth transfers located in the tax code. Consequently, while this research does much to identify the submerged state and further our understanding of how it functions, we still lack a rigorous test of how one of the main mechanisms of the submerged state—privatization—influences government visibility among a program's target population.

Privatization can refer to a broad range of policies, which complicates how we understand the relationship between privatization and the submerged state. At its most fundamental, privatization involves a transfer of responsibility from government to the private sector (Lundqvist 1988). Lundqvist (1988: 4) clarifies that in this formulation “it is the direction that counts . . . not some final or eternal location of that which is transferred.” In other words, privatization is not a dichotomous phenomenon; rather, the degree to which programs are privatized moves along a continuum. Submersion also operates along a continuum, with policies ranging from deeply submerged to highly visible (Mettler 2011). Policies move to the submerged end of this spectrum the more they rely on administrative mechanisms that erode the traceability of the benefit back to government. In this context, the deeper privatization “submerges” a policy, the more it hides the role of the state and state actors.

Variation in the administration of privatization suggests that this practice's effect on state visibility may be contingent on how privatization is implemented. A recent study of the Milwaukee school voucher program shows that parents of children who utilize school vouchers are more likely to feel that government impacts their child's education and that this interaction taught them about government (Fleming 2014). This finding suggests that vouchers remind people that government is responsible for privatized public services. Critically, however, Fleming (2014) studied voucher programs administered by state agencies. It is less clear whether vouchers administered by private tax-credit agencies would have the same informational effect, particularly considering Mettler's (2011) research showing that tax credits diminish recipients' awareness of their interactions with the state. The divergent implications of these studies indicate that not all forms of privatization have the same impact on policy submersion and that additional research is needed to determine when and how privatization affects state visibility.

Medicaid: A Case Study

Medicaid presents a valuable policy domain for assessing the relationships among privatization, program design, and awareness of government. Its prominence in state budgets, the health care system, and political rhetoric underscores the program's central role in US social policy. This suggests that Medicaid could be a difficult program to obscure from the public consciousness. However, Medicaid can also be a confusing program: it is jointly administered by federal and state governments as a federal-state partnership program; eligibility and coverage differ by state and recipient group; and the names of Medicaid programs differ between and within states. Similarity between the names of Medicaid and Medicare programs, as well as overlap between these two program populations, adds to the muddle over what Medicaid is and whom it serves. The complexity surrounding Medicaid suggests that, if privatization affects perceptions of government's role in people's lives, Medicaid is a policy area where we should expect to find evidence of this trend. Considering these divergent implications, this study tested whether the submersion of a social welfare program that looms large in government budgets and national discourse, but whose administration is fragmented, can impact recipients' awareness of this government intervention in their lives.

This question is fundamentally an inquiry into when government programs generate policy feeds. Policy feedback research examines how government policies can generate behavioral, attitudinal, and material changes in the target population and how these changes can in turn influence the construction of new policies (see Pierson 1993; Schneider and Ingram 1993; Howard 1999; Mettler and Soss 2004). As the term suggests, policy feedbacks have two stages: the “feed” process, in which the policy impacts the public, and the “back” portion, whereby the changed public influences the government. Analyses of the feed stage have taken to describing a policy's impact on the public as a “feed-forward” effect. These feeds can be bundled into two interrelated categories: resource effects, which refer to the material benefits that a policy bestows, and interpretive effects, which are changes to behavior, attitudes, and political self-conception (Pierson 1993). By examining how policy submersion influences recipients' perceptions of government's personal relevance, I am inquiring into how state submersion impacts the manifestation of interpretive effects.

Variation in Medicaid privatization between states allows this research to identify the interpretive effects of different administrative designs while holding the privatized policy (Medicaid) constant. Medicaid managed care represents one of the earliest moves toward Medicaid privatization, having emerged less than a decade after the creation of Medicaid (Tater, Paradise, and Garfield 2016). Under Medicaid managed care, states contract with MCOs. States pay MCOs a set rate, and the MCOs provide health insurance benefits to the enrollees. MCOs currently represent the most widespread form of Medicaid privatization in terms of enrollment (CMS 2014).2 I expected that Medicaid MCOs might hide the role of the state, considering that the added institutional layer of the MCO moves Medicaid administration away from the government by creating a nongovernmental (i.e., private) organization that directly interfaces with the Medicaid recipient.

However, not all managed care plans are administered alike. The Social Security Act's section 1915(b) waiver program gives states flexibility in determining whether and how to implement Medicaid managed care, and section 1115 allows states to apply for waivers granting additional administrative flexibility. Over time, these measures have allowed for substantial between-state variation in Medicaid MCO administration. If some Medicaid MCOs utilize different administrative features that affect the ease of tracing this health insurance back to government, then I would expect these plans to have a particularly pronounced effect on submerging the state and obscuring recipients' recognition of the public origins of their health insurance.

In comparing the design of Medicaid MCOs, two structural elements stand out as most likely to undermine this traceability. The first concerns how Medicaid MCOs are named. As is common with employer-based insurance plans, Medicaid managed care plans have plan names. The names of Medicaid MCO plans vary depending on the MCO. For instance, Medicaid MCO enrollees in San Joaquin, California, can be enrolled in the Health Plan of San Joaquin, while Indiana-based Medicaid enrollees may be covered by an MCO named Anthem MCO. As this suggests, some Medicaid MCO plans include the name of the insurance company administering the plan in their plan name, other MCOs keep Medicaid or the name of the geographic region being served in the plan name, and still other plans do neither. It stands to reason that plans that keep the word Medicaid or the geographic unit served in the plan's name are more likely to be recognized as Medicaid plans than are those whose plan name includes the name of a private insurance company. When Medicaid enrollees in Anthem MCO refer to their health insurance card or list their insurance on a medical form, they are referring to a private company. This branding is likely to remind enrollees of the nongovernmental administration of their health insurance plan, further obscuring the public (Medicaid) origins of this health insurance.

Another administrative element that varies among Medicaid MCOs and may further strain one's ability to trace this health insurance back to government concerns who else is enrolled in the MCO. According to the Centers for Medicare and Medicaid Services (CMS), Medicaid managed care plans can be classified as commercial MCOs or Medicaid-only MCOs. As the name indicates, Medicaid-only MCOs provide services to Medicaid beneficiaries only. Commercial MCOs, in contrast, provide services “to both Medicaid and commercial and/or Medicare” enrollees (CMS 2008).3 Commercial MCOs may be more likely to cause confusion about Medicaid enrollment because these MCOs are also providing services to non-Medicaid enrollees. Accordingly, commercial MCOs' plan materials (including their websites and their plan brochures) engage with and are targeted toward a broader population, whereas Medicaid-only MCO plan materials and personnel solely target the Medicaid population. In turn, the branding in Medicaid-only MCO documents is more likely to address and reinforce their Medicaid affiliation compared to commercial MCOs. Commercial MCOs also create the potential for Medicaid enrollees to remain with the same insurance provider as they switch from private to public health insurance.

Isolating these features of Medicaid managed care plans (how plans are named and who else is enrolled in the plan) may also help address a conflict in the Medicaid undercount literature. Inquiries into the Medicaid undercount have centered on two of the most prominent surveys used to measure Medicaid enrollment: the Census's Current Population Survey and the American Community Survey (ACS) (see, e.g., Abraham, Karaca-Mandic, and Boudreaux 2013; Boudreaux et al. 2015; Cantor et al. 2007; Davern et al. 2009). These studies show that, although it is seemingly straightforward to measure who receives Medicaid—just ask—the resulting responses can underestimate the size of the Medicaid population by more than 40 percent (Davern et al. 2009). Leading explanations of the Medicaid undercount attribute this mismeasurement largely to question wording and survey design (Davern et al. 2009; Klerman et al. 2009; O'Hara 2010; Pascale, Roemer, and Resnick 2009). However, the theoretical connection between privatization and the obscuring of government's presence suggests that Medicaid privatization may also contribute to the Medicaid undercount. Numerous studies examine the relationship between Medicaid managed care and the Medicaid undercount. Eberly, Pohl, and Davis's (2009) findings suggest that managed care usage can reduce the accuracy of Medicaid self-reports. Chattopadhyay and Bindman (2006) studied Medicaid managed care enrollment within the state of California and found that increased Medicaid managed care penetration correlates with lower estimates of Medicaid enrollment. Conversely, other researchers have found that managed care enrollment fails to predict, or even lessens, the Medicaid undercount (Call et al. 2008; Kincheloe et al. 2006). In this vein, Plotzke, Klerman, and Davern (2010) used individual-level data matched to administrative records to show that managed care usage does not have a statistically significant effect on the accuracy of Medicaid reporting in the Census's Current Population Survey. Notably, although these studies offer disparate findings, they are identical in that they all treat Medicaid managed care as a dichotomous variable. They measure whether a Medicaid plan is administered by an MCO, without accounting for variation among managed care plans. By isolating specific Medicaid managed care administrative features, however, I can identify whether particular program design elements shift managed care plans' impact on the Medicaid undercount.

Given the theorized relationship between submerged policy designs and awareness of program use, I tested the hypothesis that increased Medicaid privatization leads to decreased Medicaid self-reporting among Medicaid enrollees. I operationalized Medicaid privatization using the following indicators: MCO penetration in the Medicaid population, the construction of Medicaid MCO plan names, and the member composition of the Medicaid MCO (i.e., commercial MCOs or Medicaid-only MCOs).

Measuring Medicaid Privatization and Medicaid Reporting: The American Community Survey

To test my hypothesis, I built on a previously conducted linkage of Medicaid enrollment records to survey self-reports. The ACS is a nationally representative survey conducted by the Census Bureau that contains a battery of questions on health insurance. These questions allow participants to indicate whether they obtain their health insurance through “Medicaid, Medical Assistance, or any kind of government-assistance plan for those with low incomes or a disability.” This inclusive phrasing suggests that even if people fail to recognize that they are part of Medicaid specifically, if they recognize themselves as part of a government-assistance health insurance plan they would select this program option.4 This indicates that Medicaid enrollees who do not select this option are not just confused about the term Medicaid; they are also explicitly rejecting the public origins of their health insurance. In 2013, the State Health Access Data Assistance Center, in conjunction with the US Census Bureau, matched ACS insurance data to Medicaid Statistical Information System (MSIS) administrative records. They conducted this linkage using an anonymized version of the internal ACS, which contains over 4 million personal records for the 2008 survey. These records were then linked to MSIS records via protected identification keys that the Census Bureau developed. This linkage enabled researchers to estimate the size of the Medicaid undercount in the 2008 ACS (see Boudreaux et al. 2015). For each state and the District of Columbia, they used the MSIS data to measure the Medicaid undercount by calculating the ratio of the number of persons who both reported having Medicaid in the ACS and were enrolled in Medicaid to the number of persons enrolled in Medicaid (i.e., the Medicaid reporting rate). I used this ratio as my dependent variable.5 Because of disclosure concerns, these linked data could not be released at any geographic unit smaller than the state level (Boudreaux et al. 2015).6 The use of a dependent variable measured at the state level does not present a methodological challenge for my analysis because the independent variable of interest is also measured at the state level. Notably, by providing a robust measure of program enrollment, the ratio created by the linked ACS-MSIS data offers an opportunity to build on Mettler's analysis of state submersion. Mettler (2011) measured whether people who report using a specific program also identify this program as part of government social policy. The linked ACS-MSIS data, however, capture whether privatization influences program recipients' recognition of the personal relevance of the privatized government program, in this case Medicaid.

My key explanatory variables are state-level measures of Medicaid privatization. I captured overall Medicaid privatization rates by measuring state-level Medicaid managed care penetration. I also measured privatized Medicaid plans' use of administrative designs that introduce additional hurdles for recognizing the public origins of this program. I operationalized this concept by measuring (1) the percentage of the Medicaid population enrolled in a commercial MCO and (2) the percentage of the Medicaid population enrolled in an MCO with the private insurance company's name in the plan name. For clarity, I refer to this measure as “privately named Medicaid MCOs.”
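
To make these measures concrete, the sketch below illustrates how the state-level dependent variable (the reporting ratio described above) and the three privatization measures could be constructed in Stata, the software cited in note 14. All variable names are hypothetical placeholders for the counts described in the text, not actual fields in the ACS, MSIS, or CMS files.

```stata
* Sketch of the state-level measures; each observation is a state (plus DC).
* All input variables below are hypothetical placeholders.

* Dependent variable: share of MSIS-enrolled persons who reported Medicaid in the ACS
generate medicaid_reporting = acs_reported_and_enrolled / msis_enrolled

* Privatization measures
generate mco_penetration    = mco_enrollees / msis_enrolled
generate commercial_mco_pct = commercial_mco_enrollees / msis_enrolled
generate private_name_pct   = privately_named_mco_enrollees / msis_enrolled
```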

Next, I specified relevant control variables to limit the possibility that a correlation between privatization and the Medicaid undercount is the product of a correlated state-level variable. One such potential factor that I controlled for is the size of a state's elderly population. Persons sixty-five or more years of age are eligible for Medicare. Persons who are eligible both for Medicare, because of their age, and Medicaid, because of their financial status, may be particularly vulnerable to misreporting due to their dual-eligibility status. However, their Medicare eligibility may also give them a heightened awareness of, and attention to, government's role in their health insurance. To control for the potential that this population behaves differently than the population at large, I included a variable that measures the percentage of the Medicaid population sixty-five or more years of age in each state, as this portion of the population would also be eligible for Medicare. Another concern is that potential language barriers could harm Medicaid reporting. It is possible that language-based cues, such as the private naming of Medicaid MCOs, affect non-English speakers and persons with only limited English-language skills disparately. To control for this possibility, I included a measure that captures the percentage of each state's adult population that is limited English proficient according to the US Census Bureau's ACS estimates.7

My analysis also included a measure of the adult population enrolled in Temporary Assistance for Needy Families (TANF). Until the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, Medicaid eligibility was linked to Aid to Families with Dependent Children (AFDC), a federal program that provided financial assistance to low-income families. AFDC was a highly stigmatized program, and this stigma spread to Medicaid, impacting how Medicaid was structured (Pracht 2007; Soss and Schram 2007). In particular, the size of the adult AFDC population was shown to influence states' decisions to switch to Medicaid managed care. States with larger AFDC populations were more likely to adopt Medicaid managed care because it was considered a less generous alternative to fee-for-service care (Pracht 2007). Because the 1996 welfare reform legislation restructured AFDC into TANF, including a measure of the adult TANF population may help to control for an element of the political calculus related to implementing privatization initiatives.8 To avoid the bias introduced by TANF underreporting (see Klerman, Ringel, and Roth 2005; Meyer, Mok, and Sullivan 2009), I used administrative records of state-level adult TANF enrollment, as reported by the Department of Health and Human Services. The historical development of Medicaid also suggests that racial identity has the potential to influence willingness to report Medicaid enrollment. In the latter half of the twentieth century, attitudes on welfare became strongly correlated with attitudes on race (Kellstedt 2003). In this process, welfare became a pejorative term that could conjure racialized images of an undeserving nonwhite population dependent on government largess (Gilens 2000; Katz and Thomas 1998). The race coding of AFDC was ultimately a key motivating factor in the push to remake AFDC into TANF (Meyerson 1996). However, TANF and Medicaid at least partially retained the perception that they were programs for poor African Americans (Gilens 2009; Soss and Schram 2007), even though the majority of Medicaid enrollees are white (CMS 2014). Given that this racial stigma may manifest as an unwillingness among the white population to report their Medicaid enrollment, my regression analysis included a measure of the portion of a state's Medicaid population that identifies as white.

My final two control variables address educational achievement and state ideology. As education levels increase, people should be increasingly capable of shouldering the cognitive burden imposed by survey questions. Education is a predictor of political knowledge, and persons with higher levels of education are more likely to answer political knowledge questions in surveys correctly, particularly when asked about specific facts relevant to declarative memory storage (Delli Carpini and Keeter 1997; Prior and Lupia 2008).9 Accordingly, I included a measure that captures the percentage of the adult population in each state that has at least a high school diploma. Lastly, given concerns that privatization may be proxying for state ideology, I delineated the effects of privatization (on Medicaid reporting) from those of state ideology by including state ideology in my model. This measure of state ideology was taken from the state ideology index created by Tausanovitch and Warshaw (2013) as part of the American Ideology Project. The inclusion of state ideology as a control variable is also necessary considering that another explanation for the Medicaid undercount, aside from lack of awareness or confusion about the program's public origins, is that respondents are consciously misreporting their Medicaid use. In particular, the stigma attached to this public benefit may encourage the misreporting of Medicaid enrollment. This form of error is a concern for my analysis only to the extent that these conscious misreports are correlated with the state's degree of privatization. The most plausible mechanism through which the two would be connected is political ideology. Persons in more conservative states may be less willing to admit their use of social welfare programs. Accordingly, the state-level political ideology variable helps control for the effects of this alternative cause of misreporting.

I used the aforementioned variables to construct a regression model that captures the relationship between Medicaid privatization and Medicaid reporting (henceforth model 1). This model, shown in tables 1–3, was estimated with robust standard errors to protect the resulting estimates from being unduly impacted by deviations from modeling assumptions created by heteroscedasticity and outliers (Western 1995).
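
As a rough illustration of this specification, a minimal sketch of model 1 in Stata follows. The variable names are the hypothetical placeholders introduced above; the same command would be rerun with mco_penetration or private_name_pct substituted as the explanatory variable of interest.

```stata
* Model 1 (sketch): OLS with heteroscedasticity-robust standard errors.
* Placeholder names; controls correspond to those described in the text.
regress medicaid_reporting commercial_mco_pct ///
    pct_medicaid_65plus pct_limited_english pct_tanf_adult ///
    pct_medicaid_white pct_hs_diploma state_ideology, vce(robust)
```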

Findings from the Linked ACS-MSIS Data

To ensure that any observed relationship between privatization and Medicaid underreporting is not an artifact of the particular set of control variables, I first tested for a bivariate relationship between Medicaid reporting and privatization. While I present model-based results here, appendix figures A1–A3 contain graphical illustrations of these bivariate relationships. As figure A3 suggests, and as column 1 of table 1 shows, I found no evidence of a statistically significant bivariate relationship between managed care penetration and Medicaid reporting (p = 0.941). However, I did find a statistically significant relationship between Medicaid reporting and commercial MCO penetration (p = 0.019; see table 2, column 1), with Medicaid reporting decreasing as commercial MCO penetration increased. By decreased Medicaid reporting, I am referring to decreased Medicaid reporting among persons who were enrolled in Medicaid according to MSIS administrative data. Similarly, I found that increased penetration of privately named MCOs relates to decreased Medicaid reporting (p = 0.012; see table 3, column 1).

Next, I reanalyzed these data with the full set of control variables listed in model 1. To address the potential for multicollinearity among my control variables, I successively added the control variables that are likely to be correlated (race, language proficiency, TANF enrollment, and education). Across all specifications, my results are consistent with my findings from the bivariate analyses. At any conventional level of statistical significance, I found no evidence of a relationship between managed care penetration and Medicaid underreporting (see table 1, columns 2–6). Yet, as the percentage of the Medicaid population that uses a commercial MCO grows, the portion of the Medicaid population that reports their enrollment decreases (i.e., the Medicaid undercount grows) (p < 0.05; see table 2, columns 2–6).10 Similarly, as the percentage of the Medicaid population enrolled in privately named MCOs grows, Medicaid reporting among Medicaid enrollees declines (p < 0.05; see table 3, columns 2–6). These findings suggest that privatization, on its own, may not impact Medicaid self-reporting. However, when privatized Medicaid plans utilize administrative designs that obscure the traceability of government, Medicaid self-reporting declines.

I also further isolated privatization as the mechanism driving this misreporting by respecifying my model to address the politics of health care more directly. One concern is that Medicaid privatization proxies for a state's larger treatment of Medicaid recipients and that I am incorrectly attributing Medicaid underreporting to privatization rather than to an antecedent, such as a state's general neglect of or support for its Medicaid program. Accordingly, I respecified my model to include control variables for Medicaid generosity in terms of both average spending per adult enrollee and the expansiveness of a state's Medicaid eligibility.11 I also controlled for Medicaid take-up rates, meaning the percentage of eligible recipients who are enrolled in Medicaid. This measure reflects a state's commitment to registering Medicaid-eligible persons in this health insurance program, as well as Medicaid-eligible persons' willingness and ability to enroll. In this vein, I included a series of indicators that capture enrollment hurdles, including whether the state allows for joint family applications, whether the state has an in-person interview requirement,12 and whether the state has asset tests for eligibility.13 I added these control variables both individually and as groups. Across all specifications, I found that commercial MCO penetration and privately named MCOs continue to have a significant impact on Medicaid underreporting (see appendix tables A1 and A2 for regression output).

I also checked for the possibility that my findings are simply the result of a few influential states, or outliers. I identified influential states by calculating the Cook's distance for each observation across the different measures of privatization and across all model specifications. My results did not change when I excluded these influential states from my analysis. I also specified influential states using DFBeta calculations, and once again my results did not change when I excluded the states that this method identified as influential.14 These robustness checks indicate that my results are not driven by a few influential states.
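
These diagnostics can be computed with Stata's standard postestimation tools; a hedged sketch follows, again using the placeholder variable names from above. The flagging cutoff shown is the common 4/n rule of thumb, offered only as an illustration rather than as the criterion used in the original analysis.

```stata
* Influence diagnostics (sketch). Cook's distance and DFBeta depend only on the
* OLS fit, so the model is re-estimated without the robust option beforehand.
quietly regress medicaid_reporting commercial_mco_pct pct_medicaid_65plus ///
    pct_limited_english pct_tanf_adult pct_medicaid_white ///
    pct_hs_diploma state_ideology

predict cooks_d, cooksd    // Cook's distance for each state
dfbeta                     // creates one DFBeta variable per regressor

* Flag states exceeding a conventional rule-of-thumb cutoff
list state_name cooks_d if cooks_d > 4/_N & !missing(cooks_d)
```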

Overall, these analyses show that the form of privatization matters. Medicaid plan name and commercial managed care penetration—two program elements that have a strong potential to obscure the public nature of Medicaid—have a clear and consistent relationship with Medicaid underreporting. However, when I measured privatization simply using Medicaid managed care enrollment, I found no significant relationship between this measure of privatization and Medicaid underreporting. These disparate results demonstrate the value of incorporating variation in program design into studies of privatization to better isolate when privatization relates to decreased state visibility.

With respect to the relative size of the impact of privatization on Medicaid underreporting, my results show that for every one hundred people added to either a commercial MCO or a privately named MCO, approximately ten fewer people report their Medicaid enrollment. To get a sense of how this plays out at the state level, my results show that Pennsylvania's commercial Medicaid MCO population of 968,713 relates to approximately 91,000 fewer persons reporting their Medicaid enrollment in the state. Similarly, its privately named Medicaid MCO population of roughly 3,357,000 relates to approximately 396,000 fewer people reporting their Medicaid enrollment.

It is important to stress that, although these findings highlight a robust association between privatization and Medicaid underreporting, this does not prove causation. While I have included theoretically relevant control variables to account for other potential causal explanations, this does not eliminate the possibility that the patterns shown by this research are caused by an alternative correlated factor. Moreover, although the linked MSIS-ACS data provide a measure of Medicaid enrollment free from the bias of self-reporting, they do so by utilizing state-level measures. This use of aggregate data, which protects the anonymity of the survey respondents, also prevents me from directly assessing how Medicaid privatization impacts individual behavior. The phrasing of the ACS question on health insurance suggests that Medicaid enrollees who do not select the “Medicaid” response option both do not think of themselves as Medicaid enrollees and do not think of their health insurance as part of a government-assistance program.15 However, the reliance on state-level measures means that any attempt to extrapolate individual-level behavior from my analysis risks engaging in an ecological fallacy. Accordingly, to study the relationship between Medicaid privatization and self-reporting at the individual level, I turned to the National Longitudinal Study of Adolescent to Adult Health (Add Health), which contains individual-level measures of Medicaid reporting.

Measuring Medicaid Privatization and Medicaid Reporting: Add Health Survey Data

The Add Health survey is a multiyear panel survey that follows a nationally representative sample of adolescents through to adulthood. In this analysis, I used the restricted-use version of the Add Health survey, which provides a sample size that is three times larger than the public use files.16 The third wave of this survey contains data gathered from August 2001 to April 2002 from more than 7,000 respondents eighteen to twenty-eight years of age earning less than 200 percent of the federal poverty level. As this income level suggests, the survey should be expected to contain a number of Medicaid recipients from across the United States. Ideally, I would connect these Add Health data to MSIS records; however, such a linkage is impossible due to privacy concerns. Given this limitation, I instead constructed a measure to serve as a proxy for the accuracy of a respondent's Medicaid self-reporting. To calculate this measure, I began by estimating a respondent's Medicaid eligibility. Medicaid eligibility is based primarily on state eligibility limits for total household income, parental status, employment status, and qualifying health conditions. I used these determinants of eligibility to identify which Add Health respondents were likely to be Medicaid eligible. I then limited my analysis to the survey respondents who I estimated were eligible for Medicaid in their state.
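
A minimal sketch of this kind of eligibility screen appears below. The variable names, the merge key, and the categorical rules are illustrative assumptions, not the actual Add Health fields or the precise state rules applied in the analysis.

```stata
* Sketch: flag Add Health wave 3 respondents likely eligible for Medicaid.
* state_medicaid_rules.dta is a hypothetical file of state income limits and rules.
merge m:1 state using state_medicaid_rules.dta, keep(match) nogenerate

generate likely_eligible = (hh_income <= state_income_limit) & ///
    (is_parent | is_disabled | meets_other_categorical_rule)

keep if likely_eligible == 1    // restrict the sample to likely-eligible respondents
```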

The dependent variable in this analysis is a dichotomous measure of whether or not the Medicaid-eligible respondent reported Medicaid enrollment. As this measure contains individual-level responses, it provides a useful complement to the ACS-MSIS data. However, it is clearly less ideal than the linked MSIS-ACS data for determining the accuracy of Medicaid reporting—Medicaid eligibility does not equate to enrollment. Additionally, my measure of eligibility is a rough proxy and does not reflect the state's actual eligibility determination and subsequent enrollment. This mismeasurement is likely to attenuate any effects of privatization. Another concern caused by my construction of the dependent variable is the possibility that a correlation between the dependent variable (accuracy of Medicaid reporting) and privatization is a function of a state's Medicaid eligibility limits, because these limits may correlate with a state's decision to privatize Medicaid. The state-level political environment is likely to affect both eligibility cutoffs and privatization decisions. Accordingly, as with model 1, I included a measure of state-level political ideology.17 I also controlled for the state's general treatment of its Medicaid population by including measures of state-level Medicaid spending and Medicaid take-up, along with the series of indicators that capture Medicaid enrollment procedures. Given that the Add Health data are collected at the individual level, I included the following control variables from model 1 at the individual level: TANF enrollment,18 language skills,19 race, education,20 and political ideology.21 The use of individual-level data also allowed me to isolate additional personal characteristics that may affect how an enrollee experiences Medicaid. Concerns that managed care providers will maneuver to serve only the healthiest populations (a practice known as “creaming”) suggest that people in MCOs might not have the same health needs as the general Medicaid population (Demone and Gibelman 2013). I addressed this concern by including a measure of respondents' general health. As with my analysis of the ACS-MSIS data, I added these control variables sequentially to show that my findings are robust across modeling specifications. In this analysis, my explanatory variable of interest—Medicaid privatization—remained at the state level. Because I included both individual and state-level variables, I analyzed the Add Health data with a multilevel model (henceforth referred to as model 2). Multilevel modeling allowed me to account for systematic unexplained variation at both the individual and group levels when estimating group-level regression coefficients, such as states' Medicaid privatization (Gelman 2006; Gelman and Hill 2007: 246).22 This improves the likelihood that, if a relationship between Medicaid privatization and Medicaid reporting is detected, it is a function of Medicaid privatization and not a separate individual or state-level factor.23
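
A minimal sketch of model 2 follows, assuming a multilevel logistic specification with respondents nested in states and a random intercept by state (note 23 reports a logistic robustness check, but the exact functional form of the hierarchical model is an assumption here). Variable names remain hypothetical placeholders, and the control list is abbreviated.

```stata
* Model 2 (sketch): multilevel logit of reporting Medicaid among likely-eligible
* respondents, with a state random intercept. Placeholder variable names.
melogit reported_medicaid commercial_mco_pct ///
    tanf_report non_english_freq white education ideology general_health ///
    state_ideology medicaid_spending medicaid_takeup ///
    || state_id:
```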

Findings from the Add Health Data

I began my analysis of the Add Health data by once again testing for a bivariate relationship between Medicaid reporting and Medicaid privatization. Here, I found a statistically significant correlation between Medicaid underreporting and both commercial MCO enrollment (p = 0.024; see table 4, column 1) and enrollment in privately named Medicaid plans (p = 0.022; see table 5, column 1). These relationships hold across all model specifications. As the percentage of a state's Medicaid population in a commercial MCO increases, Medicaid-eligible respondents become less likely to report Medicaid enrollment (p < 0.05 for all specifications; see table 4). Likewise, as the percentage of a state's Medicaid population in privately named MCOs increases, Medicaid-eligible respondents become less likely to report Medicaid enrollment (p < 0.05 for all specifications; see table 5).

These results, however, do not tell us how these Medicaid-eligible respondents are misreporting their insurance status. Do the respondents who fail to report Medicaid enrollment report that they are uninsured, or do they report a different type of insurance coverage, such as private insurance? To answer this question, I kept the sample limited to Medicaid-eligible respondents, but I created a new dichotomous dependent variable that measures whether a respondent reports Medicaid or reports having private insurance. By private insurance, I am referring to health insurance that is provided by a source other than the government. Using this measure, I found that both increased commercial MCO enrollment and increased enrollment in privately named MCOs relate to an increased propensity to report private insurance (compared to reporting Medicaid enrollment) among the Medicaid-eligible population (p < 0.05). These findings held across all specifications (see appendix tables A3 and A4). Although these relationships are statistically significant in the predicted direction, the effect size is modest, with a 1 standard deviation increase in either commercial MCO penetration or privately named MCO penetration corresponding to a less than 1 percent decrease in the likelihood of reporting Medicaid. However, the measurement error in the dependent variable and the use of aggregate state-level measures for Medicaid privatization are both likely to attenuate the effect size substantially. This cautions against drawing strong inferences about the size of these relationships, because the likelihood of this attenuation suggests that the true magnitude is greater than what the Add Health data reflect.
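
For clarity, a short sketch of how this alternative dependent variable could be coded is shown below (placeholder variable names); respondents reporting neither Medicaid nor private coverage fall out of this particular comparison.

```stata
* Among likely-eligible respondents: 1 = reports Medicaid, 0 = reports private
* insurance; other responses (e.g., uninsured) are left missing here.
generate medicaid_vs_private = .
replace  medicaid_vs_private = 1 if reported_medicaid == 1
replace  medicaid_vs_private = 0 if reported_private == 1 & reported_medicaid == 0
```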

I also replicated this analysis using a dependent variable that measures whether the respondent reported either Medicaid or a lack of insurance. With this measure of the dependent variable, I found no statistically significant relationship between enrollment in privately named MCOs and Medicaid reporting at any traditional level of significance (see appendix table A6). I also found that the relationship between enrollment in commercial MCOs and Medicaid reporting is dependent on the modeling specification (see appendix table A5). Viewed together, the Add Health data suggest that Medicaid privatization, in terms of commercial MCO penetration and privately named MCOs, relates to an increased number of Medicaid-eligible respondents reporting that they have private insurance, but this privatization does not have a clear impact on whether respondents report having health insurance.

In sum, while the ACS data provide a robust measure of the relationship between Medicaid privatization and Medicaid underreporting in the aggregate, the Add Health data demonstrate that this misreporting is occurring at the individual level and that, when this misreporting occurs, people are often misreporting their Medicaid enrollment as private insurance. This finding is in line with Boudreaux et al.'s (2015) conclusion that respondents who misreport their Medicaid status are more likely to report having private health insurance rather than no health insurance. The similarity of that finding to the Add Health output in the present study suggests that these results are not merely an artifact of how I constructed the Medicaid-eligibility variable. The distinction between misreporting private health insurance and misreporting no health insurance is also consistent with my expectation for why Medicaid privatization should affect the Medicaid undercount. I theorized that privatization has the potential to influence Medicaid reporting by making it difficult to trace this health insurance back to government. The relationships identified in this research confirm that forms of privatization that hamper traceability are related to Medicaid recipients' decreased recognition of the public origins of their health insurance.

Implications of Submerging the State's Role in Medicaid

This research shows that the privatization of a major social program can relate to decreased awareness of program use and increased difficulty in connecting an experience back to government. Just as critically, it clarifies that not all privatization initiatives have the same effect and that the form of privatization—namely, its use of administrative elements that obscure the role of the state—structures when this lack of recognition occurs. These findings emphasize the importance of treating privatization as more than a dichotomous measure when examining its relationship to civic perceptions. As noted earlier, however, this analysis should be understood as establishing a correlation but not necessarily causation. In this study, I have identified a pattern showing that an increase in administrative designs that obscure government's role in Medicaid corresponds with a larger portion of the Medicaid population failing to recognize and report their enrollment. The persistence of this pattern in two different data sets, and with multiple modeling specifications, demonstrates the robustness of this relationship.

These findings caution against interpreting self-reports as an unbiased measure of program use, particularly when government programs are theorized to be a part of submerged-state policies. Methodologically, this suggests that estimates of Medicaid enrollment that rely on self-reports should be corrected to account for states' levels of privatization, specifically their reliance on commercial MCOs and privately named MCOs. The Congressional Budget Office, along with state governments, uses Medicaid enrollment statistics from Census surveys to estimate the cost of and compliance with legislation (see, e.g., CBO 2015; Legislative Budget Board 2013). Increasing reliance on Medicaid managed care makes the relationship between privatization and the measurement of Medicaid enrollment particularly relevant. In 1991, less than 10 percent of the Medicaid population was enrolled in managed care (HCFA 2000). By 2014, that number had grown to 77 percent (CMS 2016). With managed care occupying an increasingly dominant role in Medicaid administration, the analyses presented in this article provide guidance on which elements of managed care policy design to account for when generating corrected estimates of Medicaid enrollment.

The civic consequences of this research, however, extend beyond inaccurate measurement. Medicaid is the largest source of public health insurance in the country (CMS 2017). It is directly responsible for the health care of communities at the margins of American society: children, their parents, the elderly, and the disabled living below or near the poverty line. This research shows that privatization can make it difficult for the people who rely on Medicaid to understand how the political fate of this program affects them and their families. Recipients who do not recognize Medicaid or the government's personal relevance to their health insurance face a disadvantage in advocating for their self-interest in this area. Prominent health reform efforts currently propose rolling back the Medicaid expansion of the Affordable Care Act and restructuring how the program is financed. In covering this development, news outlets have provided anecdotal evidence highlighting people's confusion over whether the fate of the Medicaid program affects them personally (Newkirk 2017). This study provides empirical evidence on the causes and scope of this disconnect among the Medicaid population.24

Beyond Medicaid, this research suggests that privatization can alter people's understandings of the personal relevance of government's social policy. When people fail to recognize that they are interacting with a public-assistance program, conceptions of government's personal relevance and the usefulness of this program are, by definition, distorted. The presence of this distortion indicates that this portion of the population is forming opinions about government and its personal relevance based on a skewed image of government's role in their lives. My hope is that calling attention to this nuanced phenomenon, and government's role in its creation, encourages scholars to continue to investigate the implications of this submersion for the affected populations' civic understandings and behaviors.

Funding for this research was provided by the University of Pennsylvania GAPSA-Provost interdisciplinary research award.

Appendix

Notes

1. The public's preferences for social spending but against government intervention in markets help to explain why submerged policy designs, such as tax expenditures, are popular delivery mechanisms (Faricy 2016; Faricy and Ellis 2014). Faricy and Ellis (2014) show that otherwise identical social welfare programs receive greater public support when they are portrayed as using tax expenditures as delivery mechanisms compared to direct spending. Powerful special interests and the need for legislative compromise have similarly helped propel the increase in privatization (Morgan and Campbell 2011).

2. A voucher system did not become an approved form of Medicaid delivery until 2013, when states applied to the Centers for Medicare and Medicaid Services for waivers to use vouchers for Medicaid expansion under the Affordable Care Act (KFF 2015).

3. States have adopted commercial MCO plans in attempts to increase the provider network and reduce stigma associated with Medicaid plans (Holahan et al. 1998). In theory, the private population in a commercial MCO also offers a “financial cushion” for the MCO (Pracht 2007).

4. This phrasing also suggests that the ACS health insurance question cannot be used to track Medicaid overreporting, as respondents can accurately select this response option if they are enrolled in government health insurance plans beyond Medicaid.

5. The published State Health Access Data Assistance Center paper (Boudreaux et al. 2015) codes this variable so that higher values indicate lower levels of Medicaid reporting (i.e., a larger Medicaid undercount). To make the results easier to interpret for the purposes of this article, I reversed the coding of the dependent variable so that higher values indicate higher levels of Medicaid reporting.

6. Disclosure concerns prevent the Census Bureau from releasing new analyses that include respondent-level indicators for commercial MCO enrollment and privately named MCOs.

7. Persons are deemed limited English proficient if they report speaking English less than “very well.” Given the correlation between the size of a state's limited-English-proficient population and the size of a state's Latino population, I also specified a version of this model that includes a measure of the percentage of a state's Medicaid population that identifies as Latino. The inclusion of this variable did not affect any of the substantive findings reported here. Regression output with this additional control variable is presented in the appendix.

8. I also addressed the political calculus behind the decision to implement Medicaid managed care by including a measure of the percentage of each state's employed population that is in a union. Strong union membership has been shown to be correlated with resistance to managed care initiatives (Pracht 2007). The inclusion of this variable did not affect the substantive findings presented here. Regression output with this control variable is presented in the appendix. For simplicity, the Latino identification and union membership variables were presented in the same regression; however, my results remain unchanged when these measures were added to the regression individually.

9. Prior and Lupia's (2008) research takes care to note that quick memory recall is distinct from political learning skills.

10. The significance of the commercial MCO variable is particularly noteworthy given the construction of this variable: the 2008 CMS managed care report defines a Medicaid MCO as commercial if it provides services either to a non-Medicaid population or to Medicare enrollees. This means that the commercial MCO variable also accounts for plans with non-Medicaid enrollees on public health insurance because they are on Medicare. I would expect this to attenuate the effects of the commercial MCO variable because I cannot disaggregate non-Medicare commercial MCOs from Medicare commercial MCOs. Unlike plans that serve a general, non-Medicaid population, plans that serve the Medicare population should not look significantly more like commercial plans than Medicaid-only MCOs do. The commercial MCO variable's statistical significance is also notable given that I controlled for the potential confusion created by the Medicaid-Medicare overlap and that a larger dual-eligible population is related to improved Medicaid reporting.

11. I captured the expansiveness of Medicaid eligibility by measuring both the percentage of the population that lives below 200 percent of the poverty line and the percentage of the population on Medicaid. Accounting for the percentage of the population that lives below 200 percent of the poverty line allows the Medicaid penetration variable (i.e., percentage of the population on Medicaid) to capture Medicaid generosity with respect to the expansiveness of the state's Medicaid eligibility and not simply the number of low-income residents in the state.

12. Although in-person interviews present an enrollment hurdle, I expected this requirement to increase Medicaid reporting, as the interview is likely to make clear that the recipient is applying for Medicaid.

13. For simplicity, the output in the appendix displays these three variables in the same regression; however, my results remain unchanged when these measures were added to the regression individually.

14. DFBeta provides a measure of how substantially a coefficient is altered by the deletion of an observation (Field, Miles, and Field 2012). I used Stata 14 software to make these calculations (StataCorp LLC, College Station, TX).

15. As noted above, the ACS health insurance question asks respondents whether they obtain their health insurance through “Medicaid, Medical Assistance, or any kind of government-assistance plan for those with low incomes or a disability.”

16. The restricted-use files also contain additional state-level contextual variables, as well as the proper weight components needed for a multilevel model.

17. State-level political ideology is calculated as the difference between the proportion of votes cast for the Democratic and Republican presidential candidates in the 2000 election.

18. Given the overlap between the TANF-eligible and Medicaid-eligible populations at the time of this survey (Nadel, Wamhoff, and Wiseman 2003), including self-reports of TANF enrollment allowed this measure to serve as a proxy for a respondent's ability to report social program use. I also remeasured TANF based on state-level enrollment estimates and found that the inclusion of this aggregate measure did not change my results.

19. The Add Health measure of language proficiency focuses on frequency of using a non-English language.

20. I also remeasured education using a series of dichotomous variables that indicate one's highest level of educational achievement (some high school, a high school diploma or the equivalent, some college, and a college degree or greater). Output with these measures is included in the appendix. In keeping with the linked ACS-MSIS data, I also included a measure of whether respondents identified as Latino. Unfortunately, I could not control for union membership, as I did with the linked ACS-MSIS data, given that wave 3 of the Add Health data did not include this measure.
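As a brief illustration of the dichotomous education coding described in this note, the following sketch constructs indicator variables with pandas; the column name and category labels are hypothetical stand-ins, not the Add Health codes.

```python
# Illustrative only: building 0/1 education indicators from a categorical variable.
import pandas as pd

df = pd.DataFrame({
    "educ": ["some high school", "high school diploma or equivalent",
             "some college", "college degree or greater"],
})

# One indicator per education level; drop_first=True leaves one category out
# as the reference group to avoid perfect collinearity with the intercept.
educ_dummies = pd.get_dummies(df["educ"], prefix="educ", drop_first=True)
df = pd.concat([df, educ_dummies], axis=1)
print(df.head())
```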

21. I omitted the variable measuring recipients sixty-five or more years of age, given that the Add Health sample was limited to respondents eighteen to twenty-eight years of age. Moreover, because the sample was already limited to persons with incomes under 200 percent of the federal poverty level, I did not include the measures of Medicaid expansiveness used in the ACS-MSIS model. Notably, when I respecified the Add Health model to include these additional state-level indicators, my substantive findings did not change.

22. It is not possible to include state indicators along with state-level predictors in classical regression; however, multilevel models allow the regression slope and the intercept to vary simultaneously by state (Gelman and Hill 2007: 246).
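The sketch below illustrates a varying-intercept, varying-slope specification of the kind Gelman and Hill describe, using statsmodels' linear mixed-model routine; it is a simplified illustration with hypothetical data and a single predictor, not the article's actual specification.

```python
# Illustrative only: a multilevel model in which both the intercept and the
# slope on a predictor vary by state.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "reports_medicaid": rng.integers(0, 2, 500).astype(float),   # hypothetical outcome
    "commercial_mco": rng.uniform(0.0, 1.0, 500),                # hypothetical predictor
    "state": rng.choice([f"S{i:02d}" for i in range(20)], 500),  # level-2 grouping
})

# re_formula="~commercial_mco" lets the intercept and the commercial_mco slope
# vary simultaneously by state, in the spirit of Gelman and Hill (2007).
model = smf.mixedlm("reports_medicaid ~ commercial_mco", data=df,
                    groups=df["state"], re_formula="~commercial_mco")
result = model.fit()
print(result.summary())
```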

23. I also reanalyzed this model as a logistic regression with standard errors clustered at the state level; the results were consistent with my findings from the hierarchical model.
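For comparison, the following sketch shows a logistic regression with standard errors clustered at the state level in statsmodels; again, the data and variable names are hypothetical.

```python
# Illustrative only: logistic regression with state-clustered standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "reports_medicaid": rng.integers(0, 2, 500),
    "commercial_mco": rng.uniform(0.0, 1.0, 500),
    "state": rng.choice([f"S{i:02d}" for i in range(20)], 500),
})

# Convert state labels to integer codes for the cluster-robust covariance.
state_codes = pd.Categorical(df["state"]).codes

logit_fit = smf.logit("reports_medicaid ~ commercial_mco", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": state_codes}
)
print(logit_fit.summary())
```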

24. The research presented here also indicates that Medicaid recipients will be less likely to benefit from the boost in political participation that Medicaid enrollment can provide. Medicaid expansion under the Affordable Care Act has made the communities that benefit from this expansion more politically active (Clinton and Sances 2016). However, this public responsiveness is predicated on people recognizing the role that Medicaid plays in their lives. The findings presented here suggest that these participatory benefits may be limited, depending on each state's Medicaid MCO administrative arrangements.

References

Abraham, Jean M., Pinar Karaca-Mandic, and Michel H. Boudreaux. 2013. “Sizing Up the Individual Market for Health Insurance: A Comparison of Survey and Administrative Data Sources.” Medical Care Research and Review 70, no. 4: 418–33.
Barabas, Jason, Jennifer Jerit, William Pollock, and Carlisle Rainey. 2014. “The Question(s) of Political Knowledge.” American Political Science Review 108, no. 4: 840–55.
Boudreaux, Michel H., Kathleen Thiede Call, Joanna Turner, Brett Fried, and Brett O'Hara. 2015. “Measurement Error in Public Health Insurance Reporting in the American Community Survey: Evidence from Record Linkage.” Health Services Research 50, no. 6: 1973–95.
Call, Kathleen Thiede, Gestur Davidson, Michael Davern, and Rebecca Nyman. 2008. “Medicaid Undercount and Bias to Estimates of Uninsurance: New Estimates and Existing Evidence.” Health Services Research 43, no. 3: 901–14.
Cantor, Joel C., Alan C. Monheit, Susan Brownlee, and Carl Schneider. 2007. “The Adequacy of Household Survey Data for Evaluating the Nongroup Health Insurance Market.” Health Services Research 42, no. 4: 1739–57.
CBO (Congressional Budget Office). 2015. “The 2015 Long-Term Budget Outlook.” June 16. www.cbo.gov/publication/50250.
Chattopadhyay, Arpita, and Andrew B. Bindman. 2006. “The Contribution of Medicaid Managed Care to the Increasing Undercount of Medicaid Beneficiaries in the Current Population Survey.” Medical Care 44, no. 9: 822–26.
Clinton, Joshua, and Michael W. Sances. 2016. “The Politics of Policy: The Initial Mass Political Effects of Medicaid Expansion in the States.” September 22 draft. Nashville, TN: Vanderbilt University.
CMS (Centers for Medicare and Medicaid Services). 2008. “Medicaid Managed Care Enrollment Report and Summary Statistics as of June 30, 2008.” June 30. www.communityplans.net/ResourceCenter/MedicaidManagedCareData/tabid/361/Default.aspx.
CMS (Centers for Medicare and Medicaid Services). 2014. “Annual Centers for Medicare and Medicaid Services (CMS) Statistics Report.” Publication no. 03510. US Department of Health and Human Services. www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/CMS-Statistics-Reference-Booklet/Downloads/CMS_Stats_2014_final.pdf.
CMS (Centers for Medicare and Medicaid Services). 2016. “Medicaid Managed Care Enrollment and Program Characteristics, 2014.” www.medicaid.gov/medicaid-chip-program-information/by-topics/data-and-systems/medicaid-managed-care/downloads/2014-medicaid-managed-care-enrollment-report.pdf.
CMS (Centers for Medicare and Medicaid Services). 2017. “Medicaid.gov: Eligibility.” www.medicaid.gov/medicaid/eligibility.
Davern, Michael, Jacob Alex Klerman, David K. Baugh, Kathleen Thiede Call, and George D. Greenberg. 2009. “An Examination of the Medicaid Undercount in the Current Population Survey: Preliminary Results from Record Linking.” Health Services Research 44, no. 3: 965–87.
Delli Carpini, Michael X., and Scott Keeter. 1997. What Americans Know about Politics and Why It Matters. New Haven, CT: Yale University Press.
Demone, Harold W., and Margaret Gibelman. 2013. The Privatization of Human Services: Policy and Practice Issues. New York: Springer.
DiIulio, John. 2014. Bring Back the Bureaucrats: Why More Federal Workers Will Lead to Better (and Smaller!) Government. West Conshohocken, PA: Templeton Press.
Eberly, Todd, Mary Beth Pohl, and Stacey Davis. 2009. “Undercounting Medicaid Enrollment in Maryland: Testing the Accuracy of the Current Population Survey.” Population Research and Policy Review 28, no. 2: 221–36.
Faricy, Christopher. 2016. “The Distributive Politics of Tax Expenditures: How Parties Use Policy Tools to Distribute Federal Money to the Rich and the Poor.” Politics, Groups, and Identities 4, no. 1: 110–25.
Faricy, Christopher, and Christopher Ellis. 2014. “Public Attitudes toward Social Spending in the United States: The Differences between Direct Spending and Tax Expenditures.” Political Behavior 36, no. 1: 53–76.
Field, Andy, Jeremy Miles, and Zoë Field. 2012. Discovering Statistics Using R. London: Sage.
Fleming, David J. 2014. “Learning from Schools: School Choice, Political Learning, and Policy Feedback.” Policy Studies Journal 42, no. 1: 55–78.
Gelman, Andrew. 2006. “Multilevel (Hierarchical) Modeling: What It Can and Cannot Do.” Technometrics 48, no. 3: 432–35.
Gelman, Andrew, and Jennifer Hill. 2007. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge: Cambridge University Press.
Gilens, Martin. 2000. Why Americans Hate Welfare: Race, Media, and the Politics of Antipoverty Policy. Chicago: University of Chicago Press.
Gilens, Martin. 2009. “Racial Attitudes and Opposition to Welfare.” Journal of Politics 57, no. 4: 994–1014.
Hacker, Jacob S. 2002. The Divided Welfare State: The Battle over Public and Private Social Benefits in the United States. New York: Cambridge University Press.
Hackett, Ursula. 2017. “Theorizing the Submerged State: The Politics of Private Schools in the United States.” Policy Studies Journal 45, no. 3: 464–89.
HCFA (Health Care Financing Administration). 2000. “A Profile of Medicaid: Chart Book.” US Department of Health and Human Services. www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/TheChartSeries/downloads/2tchartbk.pdf.
Holahan, John, Stephen Zuckerman, Alison Evans, and Suresh Rangarajan. 1998. “Medicaid Managed Care in Thirteen States.” Health Affairs 17, no. 3: 43–63.
Howard, Christopher. 1999. The Hidden Welfare State: Tax Expenditures and Social Policy in the United States. Princeton, NJ: Princeton University Press.
Kamerman, Sheila, and Alfred Kahn. 1989. Privatization and the Welfare State. Princeton, NJ: Princeton University Press.
Katz, Michael B., and Lorrin R. Thomas. 1998. “The Invention of ‘Welfare’ in America.” Journal of Policy History 10, no. 4: 399–418.
Kellstedt, Paul M. 2003. The Mass Media and the Dynamics of American Racial Attitudes. New York: Cambridge University Press.
Kettl, Donald F. 2009. The Next Government of the United States: Why Our Institutions Fail Us and How to Fix Them. New York: Norton.
KFF (Henry J. Kaiser Family Foundation). 2015. “Medicaid Expansion in Arkansas.” www.kff.org/medicaid/fact-sheet/medicaid-expansion-in-arkansas.
Kincheloe, Jennifer, E. Richard Brown, Janice Frates, Kathleen Thiede Call, Wei Yen, and Jim Watkins. 2006. “Can We Trust Population Surveys to Count Medicaid Enrollees and the Uninsured?” Health Affairs 25, no. 4: 1163–67.
Klerman, Jacob A., Michael Davern, Kathleen Thiede Call, Victoria Lynch, and Jeanne D. Ringel. 2009. “Understanding the Current Population Survey's Insurance Estimates and the Medicaid ‘Undercount.’” Health Affairs 28, no. 6: 991–1001.
Klerman, Jacob Alex, Jeanne S. Ringel, and Beth Roth. 2005. “Under-reporting of Medicaid and Welfare in the Current Population Survey.” SSRN Scholarly Paper ID 754364. Rochester, NY: Social Science Research Network.
Kosar, Kevin R. 2006. “Privatization and the Federal Government: An Introduction.” CRS Report for Congress Order Code RL33777. www.fas.org/sgp/crs/misc/RL33777.pdf.
Legislative Budget Board, Texas State Government. 2013. “Legislative Budget Board Staff Estimate: Affordable Care Act (ACA) Optional Medicaid Expansion: Fiscal Impact Estimate for Texas, State Fiscal Years (SFYs) 2014–2023.” www.lbb.state.tx.us/Documents/Appropriations_Bills/83/Decision_Docs/Expansion%20Estimate%20March%2004%202013.pdf.
Lundqvist, Lennart J. 1988. “Privatization: Towards a Concept for Comparative Policy Analysis.” Journal of Public Policy 8, no. 1: 1–19.
Mettler, Suzanne. 2011. The Submerged State: How Invisible Government Policies Undermine American Democracy. Chicago: University of Chicago Press.
Mettler, Suzanne, and Joe Soss. 2004. “The Consequences of Public Policy for Democratic Citizenship: Bridging Policy Studies and Mass Politics.” Perspectives on Politics 2, no. 1: 55–73.
Meyer, Bruce, Wallace K. C. Mok, and James X. Sullivan. 2009. “The Under-reporting of Transfers in Household Surveys: Its Nature and Consequences.” National Bureau of Economic Research. www.nber.org/papers/w15181.pdf.
Meyerson, Harold. 1996. “Wither the Democrats.” American Prospect, March–April. prospect.org/article/wither-democrats.
Morgan, Kimberly J., and Andrea Louise Campbell. 2011. The Delegated Welfare State: Medicare, Markets, and the Governance of Social Policy. New York: Oxford University Press.
Nadel, Mark, Steve Wamhoff, and Michael Wiseman. 2003. “Disability, Welfare Reform, and Supplemental Security Income.” Social Security Bulletin 65, no. 3. www.ssa.gov/policy/docs/ssb/v65n3/v65n3p14.html.
Newkirk, Margaret. 2017. “Why Kentucky Couldn't Kill Obamacare: A Lesson for Congress.” Bloomberg, January 23. www.bloomberg.com/news/articles/2017-01-23/why-kentucky-couldn-t-kill-obamacare-a-lesson-for-congress.
O'Hara, Brett. 2010. “Is There an Undercount of Medicaid Participants in the 2006 ACS Content Test?” American Community Survey Research and Evaluation Program. Washington, DC: US Census Bureau, Housing and Household Economic Statistics Division. www.census.gov/content/dam/Census/library/working-papers/2010/acs/2010_OHara_01.pdf.
Pascale, Joanne, Marc I. Roemer, and Dean Michael Resnick. 2009. “Medicaid Underreporting in the CPS: Results from a Record Check Study.” Public Opinion Quarterly 73, no. 3: 497–520.
Pierson, Paul. 1993. “When Effect Becomes Cause: Policy Feedback and Political Change.” World Politics 45, no. 4: 595–628.
Plotzke, Michael R., Jacob Alex Klerman, and Michael Davern. 2010. “How Does Medicaid-Managed Care Impact Reporting of Medicaid Status?” Health Services Research 45, no. 5: 1310–23.
Pracht, Etienne E. 2007. “State Medicaid Managed Care Enrollment: Understanding the Political Calculus That Drives Medicaid Managed Care Reforms.” Journal of Health Politics, Policy and Law 32, no. 4: 685–731.
Prior, Markus, and Arthur Lupia. 2008. “Money, Time, and Political Knowledge: Distinguishing Quick Recall and Political Learning Skills.” American Journal of Political Science 52, no. 1: 169–83.
Schneider, Anne, and Helen Ingram. 1993. “Social Construction of Target Populations: Implications for Politics and Policy.” American Political Science Review 87, no. 2: 334–47.
Soss, Joe, and Sanford F. Schram. 2007. “A Public Transformed? Welfare Reform as Policy Feedback.” American Political Science Review 101, no. 1: 111–27.
Tater, Margaret, Julia Paradise, and Rachel Garfield. 2016. “Medi-Cal Managed Care: An Overview and Key Issues.” Kaiser Commission on Medicaid and the Uninsured, Henry J. Kaiser Family Foundation. files.kff.org/attachment/issue-brief-medi-cal-managed-care-an-overview-and-key-issues.
Tausanovitch, Chris, and Christopher Warshaw. 2013. “Measuring Constituent Policy Preferences in Congress, State Legislatures, and Cities.” Journal of Politics 75, no. 2: 330–42.
Verkuil, Paul R. 2007. Outsourcing Sovereignty: Why Privatization of Government Functions Threatens Democracy and What We Can Do about It. New York: Cambridge University Press.
Western, Bruce. 1995. “Concepts and Suggestions for Robust Regression Analysis.” American Journal of Political Science 39, no. 3: 786–817.