Abstract

Research on the disconnected worlds of academia and policy often highlights the potential for intermediary organizations to serve as research brokers incorporating academic research in the policy-making process. Yet, we know little about how this process plays out in practice. This article examines the degree to which the extensive academic research on the effects of the Affordable Care Act was incorporated within the high-profile legislative advocacy campaign aimed at blocking the law's repeal in early 2017. In this campaign, advocates gathered and produced information to support their legislative and mobilization efforts, making it a good case for examining the evidentiary basis underlying applied policy analysis. We identified little direct dissemination of academic studies, which played only a minor role in this advocacy effort. Even so, we identified several pathways by which academic research was incorporated into the debate, primarily through the efforts of academics to publish timely analysis in venues more accessible to advocates than traditional scholarly journals. We conclude that academic research can play a role in advocacy, but it is typically an incomplete one. Instead, advocates tend to rely on research that is either produced or packaged by an array of governmental and nongovernmental policy organizations.

Scholars have long been interested in the ideas that inform the policy-making process, with a particular focus on the extent to which academic research filters into governmental decision making. Health services research scholars, in particular, have been concerned that policy makers largely ignore the vast stock of knowledge that researchers have generated in this field. Yet a focus on public policy makers alone truncates the field of vision. The world of health care policy is populated with not only congressional and executive branch officials but also a large array of profit-making and nonprofit groups. Ideas about health policy are incubated in this sprawling, heterogeneous community, often carried by the individuals who move back and forth between nongovernmental organizations and government service. Yet, we know relatively little about the kinds of evidence used by people operating within this realm of advocacy groups and other organizations when they make demands around public policy.

This article explores the degree to which the extensive academic research on the effects of the Affordable Care Act (ACA) was employed in the advocacy campaign against its repeal. We focus on this case study to illuminate the processes and pathways by which academic studies can get incorporated into policy making via an issue network populated by research organizations, associations, advocacy groups, interest groups, and think tanks. Admittedly, the early 2017 effort to block repeal of the ACA is a distinctive case in its high political profile, wide-reaching coalition, and condensed time frame. Yet, this unusual case also provides a unique opportunity to investigate whether and how scholarly research is deployed in legislative advocacy. The combination of an extensive academic literature assessing the effects of the ACA and an advocacy community motivated to draw on evidence about the effects of the ACA makes this an instance in which we would expect academic research to be used in one way or another by the advocacy campaign. This data-rich case can therefore help us identify distinct ways by which academic studies and other forms of research are communicated, packaged, and translated in a politically charged policy debate.

In our bibliometric analysis of reports, fact sheets, opinion pieces, and other advocacy materials disseminated by members of an advocacy coalition, we found little evidence of direct dissemination of academic research, despite the extensive published literature on the effects of the ACA. When academic findings and voices were evident, it was primarily through translational or dissemination efforts undertaken specifically to inform the current policy debate or via publications (such as Health Affairs) or organizations (such as the Georgetown Health Policy Institute) that intentionally orient their focus to policy makers. Research brokers—individuals or organizations that have typically been understood to be a conduit between academia and the policy world—were generally not the ones engaging in this translational work, which instead was done by entrepreneurial academics seeking to apply their knowledge to the policy sphere. And while academic research was found to enter the debate indirectly, as an evidentiary base cited in more applied policy research, even the academic pieces cited tended to be those published in more accessible venues—primarily in the policy-focused journal Health Affairs, its blog, other blogs, or working papers available on university websites or through sites such as the National Bureau of Economic Research. Overall, our study reveals that, if academics want their research to enter into the policy-making process at the advocacy stage, they have to make their research broadly accessible, which includes choosing formats and publication venues that best insert their findings into policy discussions.

Background

A recurring debate in the health services field concerns the role of research in shaping the policy-making process. Reflecting on two decades of health systems research and a century of policy debate, Anderson (1966: 40) observed that the power of ideological commitments and divergent vested interests in this area was so great that “the results of research are not applied rationally in public policy”—a pessimistic conclusion that a number of subsequent evaluations supported (see examples discussed in Bice 1978: 175–77; Fox 1979). Assessments of the direct impact of research in other policy areas were similarly pessimistic (Weiss 1980), spurring efforts to find ways to bridge the gap between (social) scientific knowledge and public policy (National Research Council 2012; Oliver et al. 2014; Dobrow et al. 2006; Ness and Gandara 2014). However, others have countered the view that researchers and policy makers inhabit alternative universes (Newman, Cherney, and Head 2015), finding instead that academic research is viewed as more credible and objective than research from other organizations (Doberstein 2017; Head et al. 2014; Rich 2004) and can play a role in influencing policy makers in the United States and other countries (Shulock 1999; Haynes et al. 2011; Desmarais and Hurd 2014; Glied and Miller 2015).

Yet as Bice (1978: 175) noted, one reason for diverging conclusions on these questions may be different understandings about what the policy-making process entails: for those who see public policy as “explicit and authoritative decisions taken by identifiable government officials,” the impact of research can often seem scant. Excessive focus on the direct use of research findings by public officials obscures the realities of “knowledge creep” and “decision accretion”—the diffuse, indirect way in which ideas enter into policy-making communities (Weiss 1980). Moreover, a broader community of actors is involved in most policy debates, not simply those with immediate decision-making power (Bice 1978: 176). Examining that wider field of actors and organizations pushes the researcher to take into consideration the political and social context in which research is packaged and used (Davies and Nutley 2008: 9–10; see also Peterson's contribution to this special issue).

A broader vision is all the more necessary given the nature of the health policy-making field. With a sprawling landscape of interest organizations, nonprofit advocacy groups, think tanks, and government actors (operative not only at the federal level but also at state, local, and in some instances international levels), health policy making has given rise to “issue networks” (Heclo 1978) or “advocacy coalitions” (Sabatier 1988) around the particular policy questions at stake. Issue networks not only involve those in formal positions of power and a discrete number of organizations lobbying to influence them but also encompass a broader array of interested individuals and organizations knitted together by their expertise in a policy domain. Moreover, individuals often use a revolving door to move back and forth between spheres—starting out, for example, as legislative or executive branch staffers and then moving into lobbying organizations or think tanks, and perhaps then back again into a position of more direct policy-making influence (Blanes i Vidal, Draca, and Fons-Rosen 2012). Making sense of policy-making processes in health care thus requires one not only to analyze the formal decision-makers involved but also to wrap one's scholarly arms around the vast, amorphous domain of activists and experts that seek to shape these decisions.

Even though scholars of interest group politics recognize that, for policy advocates, “information is the coin of the realm” (Kersh 2007: 390; Heclo 1978), existing studies of the role of research in policy making often neglect this intermediary layer of organizations and their role as disseminators or producers of policy-relevant knowledge. To some extent, the individuals operating within these organizations can be thought of as “research brokers”—intermediaries who transform complex research findings or ideas into user-friendly issue briefs or reports (Esterling 2004; Rigby 2005; Ward, House, and Hamer 2009). In fact, health policy makers report turning to advocacy and other nonacademic groups for information when forming policy (Dodson, Geary, and Brownson 2015; Jabbar et al. 2014). As Kersh's (2007: 391) participant-observation study of corporate health care lobbyists showed, these individuals were not simply “influence-peddlers” but “political information seekers, processors, translators, and deployers.”

All of this brings up the question of the sources of the information being used: to what extent are these individuals drawing on academic research in their advocacy or lobbying work? Esterling (2004) found that, when policy ideas have strong research-based support, interest and advocacy groups are likely to pick up on these ideas in support of their agendas. Kersh (2007) also noted academic research as one source among many for the information that lobbyists aggregate and package for consumption by members of Congress and their staffs. Yet there are many other generators of research knowledge, too—particularly the more traditional think tanks and government research shops that are often staffed by PhDs (see Peterson, this issue). And the landscape of knowledge producers and disseminators has grown increasingly complex, with increasing numbers of ideologically driven groups that are less committed to traditional research (Rich 2004; Weaver 1989), as well as for-profit firms generating their own policy-relevant products (Drezner 2017). The proliferation of these varied organizations has blurred the boundary between purely academic research and the public policy world (Stone 2007), yet we know little about how those operating in this intermediary zone deploy academic versus other forms of research and evidence in their policy advocacy work.

An expanded view of the health policy-making field, and the importance of the research brokers who operate within it, raises a number of questions. When advocates in the health policy field make claims, on what evidentiary basis are they doing so? Do research brokers draw on academic research to bolster their assertions, and are brokers from some organizations more likely to do so than others? Are there publication formats that are especially conducive to the dissemination of academic ideas? And finally, if academic research is not very present in advocacy campaigns, then whose research is?

Research Design

We investigated these questions through analysis of information disseminated during an advocacy campaign against repeal of the ACA. The day after Trump's election to the presidency, Ron Pollack, president of the liberal advocacy group Families USA, declared “total war” on any attempt to repeal the ACA. “We've got the battle of our lifetime ahead of us,” Pollack said in an interview with Politico, adding, “We're going to have a huge number of organizations from all across the country that will participate in this effort” (Haberkorn and Demko 2016). One month later, Families USA joined with the Center for American Progress and the Center on Budget and Policy Priorities to launch the Protect Our Care (POC) coalition, which pulled together a large number of labor, civil rights, women's, antipoverty, and health care advocacy organizations.

Together, these stakeholder groups focused on developing and communicating compelling arguments about the consequences of ACA repeal for the American public. In so doing, they drew on a wide array of evidence and information, ranging from peer-reviewed journal articles to a databank of stories from individual Americans—and almost everything in between—and generated a large number of memos, fact sheets, and reports designed to bolster their extensive advocacy efforts in Washington, DC, and around the country.

We focus on this case not because it is a typical case—it is not—but because of its potential to illuminate different ways in which academic studies get incorporated into the policy-making process via an issue network populated by research organizations, associations, advocacy groups, interest groups, and think tanks. This case has two key characteristics that make it particularly fruitful terrain on which to examine this question: extensive academic research on the issue at hand, and active efforts by advocates to gather and publish information to support their legislative advocacy and public mobilization efforts.

Unlike in many policy debates, copious academic research already existed on the specific policy under debate. In fact, evaluating the effects of the ACA has been a central aim of health policy researchers inside and outside of academia since the bill's passage. A simple query in Academic Search Complete of peer-reviewed journals identified 3,420 pieces that concern the ACA.1 The journal with the highest number of these articles was Health Affairs—a journal that works to make its articles widely accessible to policy makers and policy wonks alike (table 1). For instance, the journal hosts a policy briefing in Washington, DC, on the day each new issue is released. Further, those not subscribing to the journal can still access any article published more than three years ago, and the journal's blog, Health Policy Briefs, e-mail alerts, RSS feed, Twitter posts, and Facebook presence all give the journal a high level of visibility among nonacademics.

The second key characteristic likely to increase research usage in this case is the coalition members' adoption of an informational advocacy strategy. Members of the coalition shared with one another extensive information on the politics and policy of ACA repeal to coordinate and advance the multipronged advocacy efforts of all the organizations. This dissemination process included a weekly conference call, ongoing e-mail distribution of resources after each call and in a weekly summary, and occasional e-mails focused on amplifying an individual report, news article, or event. We examined the content of these e-mails over a four-month period—ending our analysis when the House Republicans passed their version of a repeal and replace bill, the American Health Care Act, on May 4, 2017, as this represented an end point of the initial campaign that already provided us with ample sources to analyze. During the eighteen-week period under study (January 3–May 4, 2017), 153 e-mails were sent to the listserv related to coalition efforts.

Much of this communication centered on political developments or served as a means of organizing upcoming advocacy events, but 57 of the e-mails (37 percent) contained at least one informational resource for use by advocates in articulating the effects of the ACA, its repeal, and/or various replacement proposals. Across these 57 e-mails, we identified 122 unique resources disseminated for use by the coalition in their legislative advocacy or grassroots organizing efforts. These 122 resources included reports, fact sheets, links to commentary, and issue briefs—all addressing the effects of the ACA, its repeal, the various replacement bills under consideration, or specific reform provisions under debate. We excluded press releases, organizational position statements, talking points, multimedia resources, and webinars because, in these contexts, the norms regarding how to cite the facts provided are less defined.

We coded these 122 advocacy resources to capture the time between publication and first dissemination to the coalition, the organization publishing the resource, its format (report, brief, or estimates), and whether the resource was from an academic source (defined broadly to include an academic journal or its associated blog, a working paper repository, university-based research center, or individual academic expert). Then we compiled a citation data set capturing all the sources cited in each of the 122 informational resources. This allowed us to assess the degree to which academic research served as an evidentiary base for policy analysis produced and disseminated by nonacademic organizations. These 1,366 citations were each coded for format, organization, date, and frequency of citation across the 122 resources.
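The coding and tabulation procedure described above amounts, at bottom, to a tally of coded records. As a minimal sketch (using hypothetical records and field names for illustration, not the authors' actual data set), sector shares of the kind reported in the findings could be computed as follows:

```python
from collections import Counter

# Hypothetical citation records mimicking the coding scheme described above;
# the field names and values are illustrative, not the authors' data.
citations = [
    {"sector": "organization", "format": "brief"},
    {"sector": "government", "format": "report"},
    {"sector": "academic", "format": "journal article"},
    {"sector": "organization", "format": "report"},
    {"sector": "academic", "format": "working paper"},
]

def sector_shares(records):
    """Tally records by sector and return each sector's percentage share."""
    counts = Counter(r["sector"] for r in records)
    total = sum(counts.values())
    return {sector: round(100 * n / total, 1) for sector, n in counts.items()}

print(sector_shares(citations))
# → {'organization': 40.0, 'government': 20.0, 'academic': 40.0}
```

The same tally, keyed on `format` or on (sector, format) pairs, would reproduce the cross-tabulations by publication format discussed in the findings.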

Findings

As discussed above, as part of a much broader and comprehensive advocacy campaign, the POC coalition disseminated via e-mail 122 informational resources to help coalition partners discuss the impacts of the ACA, its repeal, and/or potential replacement plans. A notable characteristic of these resources was that they were primarily new pieces written specifically for the current policy and political debate. In fact, 38 percent of the pieces disseminated had been published the same day they were disseminated to the coalition, and another 47 percent were published within the same week. Only 5 percent were more than a month old when disseminated.2

The majority of these (76 pieces, 62 percent) were short issue briefs, commentaries, or blog posts, which typically synthesized existing research. Another 35 (29 percent) were reports presenting new research findings or examining a topic in depth. And the remaining 11 (9 percent) were links to estimates, such as government data, projections, or state-specific data. These resources were disseminated throughout the study period. Yet, the majority (74 pieces, 61 percent) were disseminated in March, after introduction of the Republican American Health Care Act, with far fewer in January (19), February (14), or April (15). The large increase in disseminated pieces in March was driven by a disproportionately high number of briefs (52) disseminated in that month; in fact, these 52 March issue briefs made up 43 percent of all the pieces disseminated throughout the entire four-month period under study (table 2).

Direct Dissemination of Academic Research

Table 3 lists the organizations producing and publishing the informational resources disseminated within the coalition. We categorized the 122 resources by the sector producing the information: academic (including journals, university-based research centers, and individual academics publishing in blogs and newspapers), government, and other organizations. Given our previous findings that the informational resources disseminated by this advocacy coalition tended to be short issue briefs published on a short time frame and tailored to the current policy debate, it is not surprising that only a small minority of the resources came from academic sources (18 resources, 15 percent of all the pieces disseminated), and even fewer were produced and published by governmental sources (8 resources, 7 percent). The vast majority of informational resources disseminated among the coalition partners were produced by organizations—primarily think tanks, nonprofit research centers, associations, and advocacy organizations, typically those within the coalition. In fact, as table 3 shows, the three main sources of disseminated information were the three organizations leading the coalition: the Center on Budget and Policy Priorities, Families USA, and the Center for American Progress.

We also found differences in the format of information disseminated by these different sectors. Figure 1 presents the percentages of briefs, reports, and estimates, by sector: academic, government, or organizational. The most distinct difference is between the resources published by governmental sources, which were never presented as briefs but instead took the form of reports (63 percent) or estimates (38 percent), and those of other organizations, whose products were more likely to be briefs (68 percent) or reports (28 percent). Academic resources that were disseminated took all three formats: briefs (61 percent), followed by estimates (22 percent) and research reports (17 percent). In this way, academic researchers seem able to capitalize on their reputation as objective analysts to provide information similar to that generated by governmental agencies (facts, figures, and estimates), as well as to produce pieces in formats similar to those created by advocacy and think tank organizations—often mirroring the issue brief format that such organizations typically use and that policy makers are accustomed to.

A closer look at the information disseminated from the academic sector makes it even clearer that these pieces were not typical journal articles evaluating the ACA (despite the substantial body of academic research on this topic discussed earlier). Instead, most academic sources were written as briefs primarily to synthesize the literature, explain different aspects of the policy debate, and often make policy recommendations (table 4). Only three were research reports—all published by university-based policy research centers without formal peer review. The remaining four were estimates of the effects of different policy proposals, produced by academics and published in news sources that are easily accessed online by policy makers (e.g., Vox and the Hill).

These findings suggest a number of pathways by which academic research may be incorporated into the policy debate—but each pathway required academics either to synthesize research specifically to speak to the current policy debate or to undertake explicit dissemination efforts surrounding the release of research findings relevant to that debate. We found no evidence of advocates serving as academic research brokers, reaching back into the academic literature to identify previously published research relevant to the policy debate. It was more common for university-based research centers or entrepreneurial academics to take on the research broker role—bringing together academic findings and actively working to disseminate them to advocates and coalition partners who might use these resources in their advocacy work.

Indirect Use of Academic Research

In addition to the direct dissemination of academic sources, we were interested in the degree to which academic sources were used indirectly, as foundational evidence in the policy analysis generated for use in the advocacy campaign. We also wanted to uncover whether academic sources were more likely to be used in resources prepared in different formats (e.g., briefs vs. reports) or by different sectors (e.g., government or academic) and to identify key characteristics of the academic sources that were cited.

To investigate these questions, we developed a data set capturing the citations used in all 122 of the disseminated pieces. Across these pieces, we identified 1,366 citations drawn from a wide range of sources. Table 5 lists the 24 sources that were cited at least 13 times. At the top of the list are a number of policy organizations, most notably the Kaiser Family Foundation (101 citations), the Center on Budget and Policy Priorities (77 citations), Families USA (50 citations), and the Urban Institute (46 citations). Many of the other highly cited sources are governmental entities, including the Office of the Assistant Secretary for Planning and Evaluation in the US Department of Health and Human Services (68 citations), the Congressional Budget Office (61 citations), the Centers for Medicare and Medicaid Services (51 citations), congressional committees and offices (49 citations), and other entities in the Department of Health and Human Services (47 citations). Other highly cited sources came from the media, led by the New York Times (27 citations) and Vox (22 citations). Only two academic sources appeared among these top sources: Health Affairs (28 citations, 9 of them to its blog) and the Georgetown Health Policy Institute (14 citations). In addition, nine of the Vox citations and three of the Hill citations were to commentary or estimates from academics published on those outlets' blogs.

In total, only 127 citations (9.4 percent) were coded as coming from academic sources. These included 62 journal articles, 38 working papers, 12 pieces by academics in the media, and 5 reports from the National Academies. The sources for these academic citations are listed in table 6. Although the journal articles cited appeared in a wide range of journals, the largest share were from Health Affairs, with 28 citations to either the journal itself or its blog. Significantly, many of the other citations to academic work were to other formats—primarily working papers, pieces published by university-based research centers or on news blogs, and reports from the National Academies.

Another way to assess the use of academic work is to examine how many of our 122 disseminated resources cited any academic sources (table 7). This provides a somewhat more encouraging outlook on the use of academic research: although only 9.4 percent of the citations were to academic work, 40 percent of the 122 resources cited at least one academic source, and 27 percent cited at least one journal article. This pattern did not differ by format (i.e., report, brief, estimates), although we did find differences by sector. Academic resources were most likely to cite other academic sources, with a full 89 percent including an academic citation and 61 percent including at least one journal article. Government sources were least likely to cite academic work: one piece by the Department of the Treasury cited working papers, but none of the other government pieces cited work published in academic journals. Organizations fell in the middle, with 35 percent citing an academic source and 23 percent citing at least one journal article.

What are some of the pathways by which academics, and their research, get drawn into policy debates? To explore that question, we examined the seven most cited pieces written by academics (table 8). Most of these pieces do not conform to the conventional academic article, which addresses a question of general scholarly interest and extensively cites existing scholarly work published in peer-reviewed sources. The two papers that most closely fit this description were published in the Journal of Health Economics and Contraception. A third paper, which appeared in Health Affairs, was rooted in scholarly research but written to be accessible to a nonacademic audience: shorter and more focused on a specific policy question at hand. The remaining pieces, which appeared in blogs and other online sources, are examples of scholars providing analysis or commentary on narrow policy issues. These pieces do not aim to summarize existing scholarly work but, rather, to bring the authors' knowledge and expertise to bear on a particular question. For instance, the two pieces that appeared on the Vox website estimated the effects of Republican ACA replacement proposals on out-of-pocket health costs.

Many of these pieces have qualities that could help explain why they were brought into the debates over ACA repeal. The two pieces that most closely conform to academic conventions were both publicly available rather than behind a paywall—in one case because research funded by the National Institutes of Health must be made publicly available, and in the other because the journal had an open access option (for which journals often charge authors high fees). The Health Affairs article was behind a paywall, but a blog post summarizing the paper was easy to locate online (Rosenquist 2015). The Vox pieces were written by a Harvard economist together with a researcher at the Center for American Progress and an industry expert. And the paper in Contraception was authored by three researchers at the Guttmacher Institute, an organization that itself straddles the scholarly and advocacy domains. In short, although we are not engaging in a rigorous comparative analysis of influential and noninfluential articles, the pieces that did gain attention are suggestive of the factors that can help bring academic voices into policy-making debates. If academics want their voices heard, publishing in blogs and newspapers, enabling open access, and coauthoring with policy advocates are some ways to do so.

Overall, the citation analysis provides insight into the limited and indirect role academic research may play as an evidentiary base for other policy analysis. We found little evidence of academic research being cited in the reports and estimates produced by government agencies. Even some pieces published in academic venues or by academics cited little to no academic work. About a third of the information produced by nongovernmental and nonacademic organizations did include citations to academic research, although such research made up only about 5 percent of what these organizations cited and was dwarfed by citations to other organizations and governmental sources. Most of these academic citations were to papers in Health Affairs or other easier-to-access sources such as National Bureau of Economic Research working papers, university-based research centers, or the Vox and Hill blogs discussed earlier.

If academics were not the main source of information and ideas, what sources were cited in the work disseminated as part of the antirepeal campaign? Most citations were to governmental sources, to work previously done by the organization writing the piece, to work by similar organizations, or to a select group of nonacademic, nongovernmental organizations that were not engaged in the advocacy coalition but still produced evidence used by it. In the latter category were many references to work by organizations such as the Kaiser Family Foundation, the Urban Institute, and the Commonwealth Fund. These organizations seemed to play a distinct role in this informational campaign, acting more like the government agencies that advocates cite regularly for facts and figures than like the advocacy organizations that actively disseminate issue briefs.

To get a better sense of the role different organizations and agencies play in the process of packaging and communicating research within this particular issue network and advocacy campaign, we compared organizations and agencies on two dimensions: how often their work was disseminated during the advocacy campaign and how often their work was cited. We found differences among organizations and agencies (table 9). The group of organizations located in the upper left quadrant played the role of aggregators—groups that wrote research reports or issue briefs that were disseminated to campaign members but were less likely to be cited as the source of research or information even within their own pieces.

The group in the lower right quadrant includes nongovernmental organizations (e.g., the Kaiser Family Foundation) and government agencies (e.g., the Office of the Assistant Secretary for Planning and Evaluation and the Congressional Budget Office) that are research-heavy outfits—organizations and agencies not directly involved in crafting the particular pieces disseminated by advocates but whose work was clearly the bedrock of this information-rich campaign. These organizations, rather than academics, generated much of the research that forms the underlying foundation of advocacy work.

A third set of entities was involved in both activities, serving both as sources of the information sent out as part of the campaign—the short issue briefs and fact sheets that aim to summarize existing information and galvanize action—and as frequently cited references in the various disseminated pieces, including those they authored. Not surprisingly, the three main organizations involved in the POC coalition fit in this upper right quadrant, but so do Vox, the Hill, and Health Affairs.

Another cluster of organizations sits on the opposite side of the spectrum, in the lower left quadrant: organizations that were neither the source of the disseminated material nor much cited within it. These groups likely serve to supplement the information prepared and packaged by the other entities. Yet, given the small number of pieces disseminated by these groups, we cannot determine the role they played in this informational advocacy campaign, beyond noting their engagement and significance in it.

Discussion and Conclusions

Does academic research matter in the policy process? The campaign to repeal the ACA offers an interesting vantage point on that question given that, since its passage in 2010, the ACA has been the source of an enormous quantity of academic research in a large number of disciplines. We investigated the role of academic research in the antirepeal campaign through analysis of the materials disseminated by the advocacy coalition against repeal, examining the auspices and nature of these materials and then analyzing citation patterns of those disseminated pieces. This strategy enabled us to assess both whose voices were directly employed in the informational strategy of the coalition and whose research formed the evidentiary bedrock of the campaign. Our larger purpose was to address existing debates about the policy impacts of academic research.

Overall, we found ourselves on the “pessimistic” side of that debate, as we found that academic research had only limited direct and indirect influence. Only a handful of academic papers or reports, or instances of academic writing in nonacademic venues, were disseminated by the leaders of the campaign as resources shared through its e-mail list to coalition partners. The citation patterns of the disseminated works also revealed the limited influence of scholarly voices, with only 9.5 percent of the more than thirteen hundred citations being to academics or academic works. It should be noted, however, that while our findings reveal the limited influence of academic expertise, they do not support claims that policy analysis as a whole is not influential. To the contrary, and much as Shulock (1999) shows in her own study, the generation and dissemination of policy analysis were a central element of the successful campaign to block repeal of the ACA. Where we part company with her assessment of legislative deliberations, however, is in the specific sources of policy analysis. Policy organizations based in Washington, DC, not academics, were the main generators of the policy information and analysis.

Of course, our study focused on a particular stage of the policy-making process: the advocacy phase. By contrast, much of the research on this topic examines whether academic work and other forms of policy analysis shape the deliberations and decisions of public officials operating within the legislative or executive branches. Yet, given the importance of interest organizations in US politics, and the fact that many people cycle in and out of these types of organizations and positions in government, it is important to include this sphere as a potential point of influence for scholarly research. In line with existing studies of the importance of interest and advocacy groups as research aggregators and disseminators (Heclo 1978; Esterling 2004; Kersh 2007), our study shows that the activities of these groups are very much grounded in policy research but that most of this research is generated outside academia, by the organizations themselves or by other advocacy groups, think tanks, or government agencies.

Our definition of academic research was fairly capacious, including not only conventional journal articles but also opinion pieces penned by scholars in newspapers or blog posts, reports published by university-based research centers, and publications in Health Affairs. Although one might argue that this wider scope exaggerates the extent to which academic research had influence, taking this broader perspective helped us identify some of the pathways through which academic voices have been heard in this advocacy campaign. As Kersh (2007) noted in his study of health care lobbyists, these individuals are not steeped in academic research but do regularly peruse such journals as Health Affairs, JAMA, and the New England Journal of Medicine. In our study, Health Affairs was an especially important source for connecting academics and policy advocates.

Even so, scholars themselves have to take the initiative to make their findings or expertise relevant and influential. Contrary to our expectations, in this advocacy campaign there was very little traditional research brokering, in which organizations and individuals that lie between academia and the policy world translate the ideas from the former to the latter. Instead, academics typically had to be their own brokers, seeking out ways to make their voices heard. Thus, scholars whose ideas and research became part of the advocacy campaign often published in more broadly accessible formats and journals and coauthored with people involved in the policy world. Disseminated reports of new academic research tended to be published by university-based research centers that undertake explicit translational activities, such as publishing state-specific fact sheets and issuing press releases. In short, if academics want to have influence in these types of policy debates, they need to find ways to cross the substantial divide between academia and advocacy, rather than hoping that someone else will bring their work into the policy-making domain.

This finding may also have implications for future research on the impact of academic research on the policy realm. Some argue we are living in a “golden age of academic engagement with the public sphere,” as ever more academics distribute their ideas and research on blogs and other online sites (Lynch 2016). The informational world has changed dramatically since the time when some of the most influential literature on the impact of academic research on public policy was penned. We encourage future scholars to take more seriously this increasingly complex informational environment, marked not only by a proliferation of research-generating organizations but also by an explosion of more accessible venues for academic scholarship.

Although our focus in this article was on uncovering the influence of academic research, our study also shed light on the nature and role of organizations and government agencies as sources of ideas and evidence in this information-heavy advocacy campaign. The materials disseminated by the coalition were largely rooted in the research conducted by independent organizations and government agencies. Another group of organizations then largely served as packagers and disseminators of this research, while a third group—which included the organizations leading the campaign—did both. This categorization of groups based on their relationship to research reveals a more complex organizational landscape than that often described by the literature on think tanks or on advocacy groups. Additionally, it highlights a key role for government agencies and research-focused policy organizations in providing the (relatively) objective facts, figures, estimates, and evidence we would often expect from academics or the traditional notion of think tanks. What is needed is a more nuanced theory of the ecology of policy analysis organizations to better understand how they operate individually, in cooperation, and as part of a complex issue network to shape policy proposals, debates, and outcomes.

This study raises a number of other questions that could be the subject of future work. One concerns the extent to which our findings are a “beltway” story. Most of the organizations involved in generating, packaging, and/or disseminating ideas and analysis in our study are located either in Washington, DC, or in the Northeast Corridor that runs between Washington and Boston. Although numerous academic institutions lie within this corridor as well, many others are far from it, and academics at those institutions may simply lack the social network ties that often shape the flow of information. Additionally, our focus on information disseminated among the national organizations certainly overlooked policy analysis (and a great deal of other advocacy work) occurring at the same time in the states, particularly in the districts of Republican members targeted by the coalition. Given the importance of state-specific policy analysis, we expect that other forms of information were being generated and disseminated within each state context, which may have better integrated the research or academics lodged in neighboring universities—particularly when that research speaks to localized concerns.

A second question is raised by the fact that, although we were able to establish the limited influence of academic work in this advocacy campaign, we did not tackle whether or not this is problematic. Do academic researchers and their research have certain valuable qualities—a real or perceived commitment to impartiality, evidence, and peer review, for example—that are not being adequately tapped, with the result that policy debates are driven more by ideology than by facts and evidence? Or are the research-producing organizations, both governmental and nongovernmental, providing studies and ideas that resemble those of the scholars whose work they tend not to cite, making the disconnection of academia and policy making less consequential? Answering this normative question is important for understanding the degree to which we should be encouraging academics to make their research more accessible to applied researchers, advocates, and policy makers, such as by providing multiple versions of their work placed in different venues, through greater use of university-based research centers focused on applied work, or through institutional changes to incentivize greater engagement in the policy process. These decisions should be made after critical self-evaluation among academic policy researchers—identifying the unique strengths of academic research and its greatest potential to inform the policy-making process given the existing and growing landscape of policy organizations already engaged in applied policy work.

Elizabeth Rigby is associate professor of public policy and political science at George Washington University. Her research examines the interplay of politics, policy making, and inequality. Her research has been published in a range of journals, including American Journal of Political Science, Journal of Policy Analysis and Management, Health Affairs, and Political Research Quarterly.

erigby@gwu.edu

Kimberly J. Morgan is professor of political science and international affairs at George Washington University. Her work examines the politics shaping public policies in Western Europe and the United States, with particular interests in immigration and the welfare state. She is the author or editor of four books, most recently The Many Hands of the State: Theorizing Political Authority and Social Control (2017).

Acknowledgments

We thank Emily Schraudenbach for her expert research assistance, Dahlia Remler and the other participants at the New York University Conference on Policy Analysis and the Politics of Health Policy, and JHPPL reviewers and editors for their valuable feedback and guidance on this project.

Notes

1. The search was for the term Affordable Care Act in abstracts of scholarly journals and was conducted in June 2017 in Academic Search Complete.

2. These estimates exclude three resources that listed no publication date and twenty resources that listed only the month of publication, which was the same as the month of dissemination in every case.

References

Anderson, Odin W. 1966. “Influence of Social and Economic Research on Public Policy in the Health Field: A Review.” Milbank Memorial Fund Quarterly 44, no. 3, pt. 2: 11–51.

Bice, Thomas W. 1978. “Social Science and Health Services Research: Contributions to Public Policy.” Milbank Memorial Fund Quarterly Health and Society 58, no. 2: 173–200.

Blanes i Vidal, Jordi, Mirko Draca, and Christian Fons-Rosen. 2012. “Revolving Door Lobbyists.” American Economic Review 102, no. 7: 3731–48.

Davies, Huw T. O., and Sandra M. Nutley. 2008. Learning More about How Research-Based Knowledge Gets Used: Guidance in the Development of New Empirical Research. New York: William T. Grant Foundation.

Desmarais, Bruce A., and John A. Hird. 2014. “Public Policy's Bibliography: The Use of Research in US Regulatory Impact Analyses.” Regulation and Governance 8: 497–510.

Doberstein, Carey. 2017. “Whom Do Bureaucrats Believe? A Randomized Controlled Experiment Testing Perceptions of Credibility of Policy Research.” Policy Studies Journal 45, no. 2: 384–405.

Dobrow, Mark, Vivek Goel, Louise Lemieux-Charles, and Nick Black. 2006. “The Impact of Context on Evidence Utilization: A Framework for Expert Groups Developing Health Policy Recommendations.” Social Science and Medicine 63, no. 7: 1811–24.

Dodson, Elizabeth A., Nora A. Geary, and Ross C. Brownson. 2015. “State Legislators' Sources and Use of Information: Bridging the Gap between Research and Policy.” Health Education Research 30, no. 6: 840–48.

Drezner, Daniel. 2017. The Ideas Industry: How Pessimists, Partisans, and Plutocrats Are Transforming the Marketplace of Ideas. Oxford: Oxford University Press.

Esterling, Kevin M. 2004. The Political Economy of Expertise: Information and Efficiency in National Politics. Ann Arbor: University of Michigan Press.

Fox, Daniel M. 1979. “From Reform to Relativism: A History of Economists and Health Care.” Milbank Memorial Fund Quarterly Health and Society 57, no. 3: 297–336.

Glied, Sherry A., and Erin A. Miller. 2015. “Economics and Health Reform: Academic Research and Public Policy.” Medical Care Research and Review 72, no. 4: 379–84.

Haberkorn, Jennifer, and Paul Demko. 2016. “Obamacare Defenders Vow ‘Total War.’” Politico, November 10. www.politico.com/story/2016/11/obamacare-defenders-vow-total-war-231164.

Haynes, Abby S., James A. Gillespie, Gemma E. Derrick, Wayne D. Hall, Sally Redman, Simon Chapman, and Heidi Sturk. 2011. “Galvanizers, Guides, Champions, and Shields: The Many Ways That Policymakers Use Public Health Researchers.” Milbank Quarterly 89, no. 4: 564–98.

Head, Brian, Michele Ferguson, Adrian Cherney, and Paul Boreham. 2014. “Are Policy-Makers Interested in Social Research? Exploring the Sources and Uses of Valued Information among Public Servants in Australia.” Policy and Society 33, no. 2: 89–101.

Heclo, Hugh. 1978. “Issue Networks and the Executive Establishment.” In The New American Political System, edited by Anthony King, 87–124. Washington, DC: American Enterprise Institute.

Jabbar, Huriya, Priya Goel La Londe, Elizabeth DeBray, Janelle Scott, and Christopher Lubienski. 2014. “How Policymakers Define ‘Evidence’: The Politics of Research Use in New Orleans.” Policy Futures in Education 12, no. 8: 1013–27.

Kersh, Rogan. 2007. “The Well-Informed Lobbyist: Information and Interest Group Lobbying.” In Interest Group Politics, 7th ed., edited by Allan J. Cigler and Burdett A. Loomis, 389–411. Washington, DC: CQ.

Lynch, Marc. 2016. “After the Political Science Relevance Revolution.” Washington Post, March 23. www.washingtonpost.com/news/monkey-cage/wp/2016/03/23/after-the-political-science-relevance-revolution/?utm_term=.31fd05d7f7e9.

National Research Council. 2012. Using Science as Evidence in Public Policy. Washington, DC: National Academies Press.

Ness, Eric C., and Denisa Gandara. 2014. “Ideological Think Tanks in the States: An Inventory of Their Prevalence, Networks, and Higher Education Policy Activity.” Educational Policy 28, no. 2: 258–80.

Newman, Joshua, Adrian Cherney, and Brian W. Head. 2015. “Do Policy Makers Use Academic Research? Reexamining the ‘Two Communities’ Theory of Research Utilization.” Public Administration Review 76, no. 1: 24–32.

Oliver, Kathryn, Simon Innvar, Theo Lorenc, Jenny Woodman, and James Thomas. 2014. “A Systematic Review of the Barriers to and Facilitators of the Use of Evidence by Policymakers.” BMC Health Services Research 14, no. 2.

Rich, Andrew. 2004. Think Tanks, Public Policy, and the Politics of Expertise. Cambridge: Cambridge University Press.

Rigby, Elizabeth. 2005. “Linking Research and Policy on Capitol Hill: Insights from Research Brokers.” Evidence and Policy: A Journal of Research, Debate and Practice 1, no. 2: 195–213.

Rosenquist, Rebecka. 2015. The ACA and Contraceptive Coverage: Policy Context of a New Study. ldi.upenn.edu/aca-and-contraceptive-coverage (accessed December 18, 2017).

Sabatier, Paul A. 1988. “An Advocacy Coalition Framework of Policy Change and the Role of Policy-Oriented Learning Therein.” Policy Sciences 21, no. 2/3: 129–68.

Shulock, Nancy. 1999. “The Paradox of Policy Analysis: If It Is Not Used, Why Do We Produce So Much of It?” Journal of Policy Analysis and Management 18, no. 2: 226–44.

Stone, Diane. 2007. “Recycling Bins, Garbage Cans or Think Tanks? Three Myths Regarding Policy Analysis Institutes.” Public Administration 85, no. 2: 259–78.

Ward, Vicky, Allan House, and Susan Hamer. 2009. “Knowledge Brokering: The Missing Link in the Evidence to Action Chain?” Evidence and Policy 5, no. 3: 267–79.

Weaver, R. Kent. 1989. “The Changing World of Think Tanks.” PS: Political Science and Politics 22, no. 3: 563–78.

Weiss, Carol H. 1980. “Knowledge Creep and Decision Accretion.” Knowledge: Creation, Diffusion, Utilization 1, no. 3: 381–404.