Abstract

Research-based evidence has the potential to influence health policy making, but its impact is contingent on a number of factors. Past analyses have underscored the role of politics in shaping policy choices, reflecting the nature of the social learning process and the access of political forces at every stage of the policy continuum, from agenda formation to policy enactment and implementation. This article focuses on the mechanisms of the research evidence enterprise itself, delving directly into what influences the pathways of research evidence into policy making, taking into consideration both the production and consumption functions of such evidence. It presents a process model of the role of research evidence in policy making, examining in conceptual detail variations in the production of evidence in the research community; the communication of policy-relevant results through translation, framing, and other means; the methods of acquisition of the evidence by policy makers and their advisers; and the multiple ways in which evidence is put to use in policy decision making. I use examples from health care reform to illustrate features of the process model.

Donald Trump's 2016 presidential campaign and his administration introduced the proposition that words do not matter and in policy discourse “alternative facts” can compete with actual facts. That may represent the most extreme dismissal of information-based decision making, but even outside the Trumpian orbit tenuous connections between evidence and policy choices can appear. Prior experience in multiple policy settings and the cumulative scholarship over several decades on the role of research evidence in policy making should have long disabused anyone of the naive notion that the real world ever comports with what Carol Weiss (1979) describes as a linear “knowledge-driven model” in which basic inquiry leads to applied research and then the derivation and application of specific policy alternatives. Health services researchers, social scientists, and others whose analytical work bears on health policy issues small and large probably have learned this lesson, as has anyone exposed to policy making at any level of government and across both the public and private sectors. The explosion of health services research following the enactment of Medicare and Medicaid in 1965 has surely advanced by multiples the analytic capacity inside and outside government targeted at policy questions in health and health care, but it remains reasonable to conclude that the “know-do” gap—the gulf between what we understand about the dynamics of health and wellness and the policy instruments we actually employ—is likely as large as ever (Brown 1991; Moat, Lavis, and Abelson 2013).

Stated most broadly, policy making in any governmental setting, and even in corporate domains of the sort that now dominate much of US health care, involves a combination of ideas and power (Heclo 1974; Hall 1993; Kingdon 1995; Patashnik and Peck 2017; Peterson 1997; Wildavsky 1979). The power component—involving both raw and refined politics—manifests in the realm of clashing values, contending ideological and partisan stances, and competing interests, all played out in different institutional settings whose structural features shape the access of various players and the viability of the ideas and information they wield (e.g., Baker, Ginsburg, and Langley 2004; Blendon and Steelfisher 2009; Gold 2009; Jewell and Bero 2008; Lomas 2000a; Peterson 1997; Weiss 1999). Paraphrasing Lasswell's (1936) famous dictum about politics, Parsons (2002: 57) argues that even the very question of what works is not a matter of objective inquiry but, rather, poses the challenge of “whose evidence gets what influence, when and how” (see also Fischer 2003; Stone 2012). Others have identified the divergent cultures that separate the research community, which generates information and gives it meaning, from the policy-making or practitioner domain, which puts that information to its own uses, if it applies it at all (e.g., Davies et al. 2002; Greenlick et al. 2005; Jacobson, Butterill, and Goering 2004; Schur et al. 2009). Even within the confines of the research community, what we conventionally characterize as objective empirical analysis almost necessarily bears the footprint, however subtle, of choices weighted by values, perspectives, and experiences associated more with politics than with science and reflecting which constituencies have full voice in both analytical communities and policy making (Fischer 2003; Stone 2012).

Despite these various inhibitions on the direct, instrumental use of research evidence, which I explore conceptually in some detail below, it is still appropriate for funders committed to improving health and health care, policy specialists in health services research and related fields, constituencies affected by policy action, and citizens more generally, all taking what Pawson (2006) would term a “realist perspective,” to call for deliberations and choices in policy making that are influenced by what social scientists would consider to be valid and reliable evidence. Evidence even by this standard can be appropriately contestable and open to varied interpretations, but whether in the form of knowledge accumulated, information obtained, data collected, or analysis performed, especially using established research methodologies, it is rooted in something observable or experienced. Others have described this as “empirical evidence derived from systematic research methods and analyses” (DuMont 2015: 23). Weiss (1999: 483) puts it nicely: “Even if we realize that evaluation is not the star in the policy drama, we have a responsibility to communicate the best information and analysis available to the principal players.”

Certainly that is a fundamental responsibility for policies that affect the lives and wherewithal of people as significantly and directly as those that define access to, and the quality of, delivered health care services, as well as the social, environmental, and behavioral factors that shape population health. And there is good news: much of what goes on in government (and in comparable settings in the private sector) can be well informed by research. Consider, for example, how the technically complex methods for paying providers in the Medicare program—the Prospective Payment System for hospitals and the Resource-Based Relative Value Scale–driven Medicare Fee Schedule for physicians—were crafted in the legislative and implementation processes of the 1980s and early 1990s. They were hardly free of politics, and the resulting esoteric mechanisms could not fulfill the “technocratic wish” to solve the inherent tensions of values and interests, but they were deeply analytically informed (Belkin 1997; Laugesen 2016; Smith 1992).

Moreover, the scope of such analytical policy making has greatly expanded over the years. At the time that Medicare and Medicaid were being designed and enacted in the 1960s, there were relatively few people in government whom we would today recognize as experts in health policy, health services research, or medical research (Brown 1991; Peterson 1997). It would not be much of an exaggeration to say that government expertise was encapsulated by then undersecretary of Health, Education, and Welfare and long-time social policy technician Wilbur Cohen, armed with his pen and yellow legal pad, plus a few others (Peterson 1990). Today the number of agencies and individuals with health policy expertise is vast (Gold 2009; Joyce 2011; Malbin 1980; Peterson 1995, 1997; Oliver 1993; Smith 1992). Important examples include the Office of the Assistant Secretary for Planning and Evaluation (ASPE; origins in 1966), the Centers for Medicare and Medicaid Services (CMS; origins as the Health Care Financing Administration in 1977), and the Agency for Healthcare Research and Quality (origins in 1989) in the US Department of Health and Human Services, and the Government Accountability Office (GAO; with enhanced policy analysis capacity in the 1970s), Congressional Budget Office (CBO; origins in 1974), Medicare Payment Advisory Commission (MedPAC; origins in two previous commissions established in 1986), and even the professional committee staffs in Congress. The article by Steven Sheingold et al. in this issue, on value-based pricing, offers a close-in view of just how analytically sophisticated policy option evaluations can be in today's federal health agencies.

The recognition of the complex and muddied ways in which research evidence has limited but nonetheless potentially multiple kinds of beneficial impact on policy decision making leads to the major themes motivating this article. While the literature on the use of research evidence in policy making is rich in perspective and ideas about what is likely to advance or hinder the impact of analytically grounded evidence, and some studies have systematically acquired empirical information to support their conclusions (e.g., Coburn 1998; Esterling 2004; Hird 2005; Jewell and Bero 2008; Lavis 2004, 2006; Oliver 1993; Patashnik and Peck 2017; Smith 1992), the state of understanding in the field remains remarkably impressionistic or setting specific. Here I offer a layered, conceptual approach to these issues, informed largely by work in political science, including some of my own.

Evidence-Based Policy Making in the Shadow of Politics

To understand the mechanics and impact of evidence in the larger policy-making process, one can think of the overall analytical schematic as a three-layer cake. The first layer is the general process of social learning. Individuals, organizations, and even governments learn by doing and, if functioning well, do by learning. Under the best of circumstances, the policies selected and implemented are appropriately informed by lessons whose content is derived from careful assessments of the results of current practices, indigenous experiences in the past, comparable experiences of others in distant but suitably similar places, and perhaps logic models and simulations involving approaches for which there are not yet any empirical precursors. But as I have written elsewhere (Peterson 1997), there are substantive policy lessons—what works—largely in the purview of experts, and there are political policy lessons—what is doable in the political-institutional context—in which politicians, following their constituents and their analytical limitations, dominate. Especially when the policies under consideration are of major scope, the political can easily overwhelm the substantive (see also Bardach 2011; Nichols 2017; Patashnik and Peck 2017).

The second layer pertains to the individual components of the policy chain. It includes the potential ways in which evidence can inform the series of decisions about which issues are on the agenda, the selection of the options to be considered, the evaluations of those options, the choice of the policy to pursue, the method of its implementation, and then potentially its evaluation. Delving into each of these features of the chain individually reveals that both evidence and politics can be at play (Peterson, Leibowitz, and King 2017).

The third layer is the subject of this article: the more specific mechanics of how evidence itself moves, or fails to move, from the realm of research to influence on policy making at whatever node in the chain it takes place. I present a system-level process-model framework for unpacking the process—from production to use—through which research evidence must travel to have a role in policy making. Using this framework, we can identify the normatively optimal role of research evidence (from the standpoint of the analytical community), the more prevalent role played in the real world by politically tinged “evidence,” and the place of the social-economic-political-institutional context in shaping what kinds of information ultimately work their way through the system. This approach helps identify the appropriate targets of opportunity for enhancing the significance of evidence from health services and policy research. Throughout I offer a few illustrative examples associated with national debates over health care reform. It is a domain both rich with vast international and domestic experience and an enormous body of relevant health services and social scientific research evidence, and yet also rife with the overwhelming force of interest group, constituency, and ideologically driven electoral politics.

A Production-to-Use Framework for Research Evidence in Policy Making

Put in the simplest terms, the generation of potentially policy-relevant research and the process of decision making about policy directions respectively occur in what are commonly perceived to be two independent and “culturally” distinctive, often antagonistic, realms. They each are populated by individuals and institutions operating under distinctive norms, incentives, and expectations that are relatively consistent within each sphere but divergent across the two, so much so that they are thought to be largely reverse images of one another—veritable foreign lands with distinctive languages (Jacobson, Butterill, and Goering 2004; Schur et al. 2009).

The “production function” of research evidence is driven for the most part by the standard dictates of the academy: research agendas are the product of individual investigator curiosity, sometimes influenced by the enticements of funding entities wishing to motivate research into certain areas of interest. Investigations proceed in accordance with disciplinary standards for theoretical contributions, data collection, and analysis. The duration and completion of projects are often slowed by competing demands of other projects and obligations (such as teaching and service in university settings). Dissemination is concentrated in peer-reviewed venues, which are viewed as the gold standard for both reporting credible results and securing promotion and career marketability but typically delay substantially the availability of results, often beyond their relevance in the policy arena.

The “consumption function” for the use of information in policy making commonly has the opposite attributes: solutions are needed for policy issues propelled by the political agenda, a mix of electoral pressures, budget and authorization cycles, and crisis response. Decisions about which course of action to take have to be made with the data and evidence already in hand, whether or not real evidence is available and whatever its quality, and without regard to any theory building. Credible information is that which assuages political concerns and comports with prior beliefs rather than abstract principles of objectivity. Policy choices have to be made now, in committee markups, legislative floor votes, and executive sign-or-veto decisions, frequently not long after the issue hits the agenda in the first place. What links the “production function” and “consumption function,” if they are linked at all, is some kind of process of communication by which versions of research results accessible to laymen are “pushed” into the policy-making process through various forms of media or sought and “pulled” out of the research enterprise by policy makers, or their advisers, in search of edification and support.

Along lines somewhat similar to those of Lavis (2006: table 1), and as shown in figure 1, I dissect this simple picture of production, communication, and consumption of research evidence into a more elaborate process-tracing model with specific stages or attributes clustered under the rubrics of production, communication, acquisition, and use (see George and McKeown 1985; Ford et al. 1988). Most stages in the process incorporate necessary but not sufficient activities for research evidence to have an effect on policy making; indeed, such impact depends on the fulfillment of a sequence of necessary steps. For communication, acquisition, and use of research evidence, the process model recognizes two core contexts whose attributes shape and drive the process at any given time for any given issue: the socioeconomic-political context and the institutional (governing) context.

Production

For research evidence to have any kind of bearing on policy deliberation and decision making, it first has to be created. The participants in the research community involved in production can range fairly widely, from what might be termed puzzlers—those who tackle research questions because of the pure intrigue of the question rather than its practical applications—to engagers, researchers who are far more interested in the utility of results and willing to speak directly to policy makers (fig. 1), facilitating translation and dissemination. The research enterprise, however steered by investigator curiosity, research funding streams, and the current intellectual and career currency of particular topics, has to produce and publish, in some form, what we might call scientific findings, with at least some potential to inform how choices could or should be made on issues that plausibly involve state action. Peer-reviewed journal articles or books from mainstream academic or quasi-academic presses, as well as working papers intended for subsequent publication of that sort, are the primary resources.

I also include here gray literature, reports written by the same (or same sorts of) disciplinarily trained researchers who also publish in conventional peer-review settings and that may be subject to a form of peer assessment. Often commissioned through grants and contracts by governmental agencies and other funders as clients who have specific issues to address, these reports or other kinds of written presentations of analytically robust investigations are by their nature likely to be more overtly policy oriented than much of the scholarship that appears in the formal peer-reviewed literature and thus more “available” to the policy-making community. They include analyses produced by firms, think tanks, and agencies like Mathematica Policy Research (MPR), the RAND Corporation, the Kaiser Family Foundation, the American Enterprise Institute, the Commonwealth Fund, the Urban Institute, the Cato Institute, and the Government Accountability Office (GAO). Like conventional scholarship, gray literature contributes to the credentialed knowledge base that research evidence can bring to policy deliberation and decision making.

The production stage also includes presentations of research that do not comport with standard academic conventions about study design, causal modeling, data collection, analysis, and interpretation. Such ideologically oriented advocacy research is released by entities—including what Weaver (1989) calls “advocacy tanks,” much on the rise in recent decades—with clear a priori policy objectives seeking to evoke the patina of objective research evidence to support their preordained recommendations (see also Medvetz 2012; Rich 2005). Some of the best-known examples have emerged on the right funded by the Koch brothers. Their Americans for Prosperity Foundation, part of a larger network of conservative organizations, supports policy organizations at the national and state level committed to promoting unfettered free-market principles, far smaller government, and extensive deregulation, including on health care issues (Jones 2017; Mayer 2010). The American Legislative Exchange Council (ALEC), part of this network, has enjoyed considerable success in state legislatures with the model bills it developed and encouraged that support business interests (Hertel-Fernandez 2014). Given the ideological basis, partisan orientation, and organizational scope of Americans for Prosperity, a national correspondent suggested that it “may be America's third-biggest political party” (Bump 2014). Because presentations from these types of organizations may nonetheless reference and use the extant scholarly literature and analytic policy reports, and they feed into the policy process under the claim of evidence, I incorporate them into the production section of figure 1 but with dashed lines to distinguish them from sources that use more academic conventions for what they mean by the term research evidence.

Using the typology employed by political scientist and health policy specialist Lawrence Brown (1991; see also Weaver 2000), the production process ultimately generates three kinds of information representing increasing levels of analytic sophistication and direction to policy decision making: documentation, describing the current state of affairs, what is happening; analysis, providing a rigorous explanation for why those things are happening; and prescription, systematic evaluations of what should be done to address problems that have been described and explained, growing directly out of the research community's causal understanding of the policy problem's origins. I posit that the funders and generators of research evidence have as their normative ideal a system in which the evidence has the full credibility of peer-reviewed scholarship and is sufficiently advanced to fulfill the requirements and advantages of prescriptive direction to policy decision making. Instead, most of us in the field witness more frequently a kaleidoscope of evidence, often proffered by ideologically oriented organizations, offering clear documentation, at best, of what is currently happening but with limited causal explanation and even less analytically founded prescriptive direction (see Hird 2005; Medvetz 2012; Weaver 1989, 2000).

The available evidence produced about almost every pertinent dimension of health care reform is so voluminous that it is nearly impossible to quantify. First, just about every general approach to health system design has a concrete manifestation in the world, including among Organisation for Economic Co-operation and Development nations and even within the United States (Peterson 2017: chap. 4). For example, public health care systems of financing and delivery are found in the UK and the US veterans health system, single-payer financing of private delivery is the signature feature of Medicare in both Canada and the United States, coverage rooted in required employment-based insurance has a history in Germany and Hawaii, individual mandates with public subsidies for the purchase of competing private health insurance plans are used in the Netherlands and Massachusetts, and health savings accounts play a role in Singapore and the health coverage sponsored by a substantial number of American businesses. These various systems and their components have been subject to extensive research and analysis. Health Affairs alone, for example, has published over 3,000 articles on health care reform, 1,276 on insurance coverage, 339 on insurance markets, and 905 on Medicare. Numerous institutions from the World Health Organization to the Commonwealth Fund have added their own research in comparative health systems, and others, including the Urban Institute, the Cato Institute, and untold university research centers, have contributed to both the published and gray literature. This barely scratches the surface of peer-reviewed scholarship and equivalent forms of reported research. Much extends well into explanation for system performance and prescription while also reflecting considerable range and dissension in interpretation and conclusions.
To the extent that proposed reform initiatives and deliberations about them are ill-informed, it is not because of a dearth of evidence and analysis.

Communication

Even the highest-quality work that could substantially elevate the level of policy discourse—whether a matter of documentation, analysis, or prescription—will almost certainly remain isolated from the domain of policy making unless it has been actively translated into relatively brief presentations in accessible, jargon-free language, perhaps tweaked into a number of versions that speak directly to multiple and divergent audiences (e.g., policy makers themselves, advisers to policy makers, and the public constituents of elected officials) (Gold 2009, 2015; Davies et al. 2002). In any policy arena, such translations create the operational “marketplace of ideas” (Drezner 2017). These easily understood presentations can range from short, conceptually unburdened, analytically uncomplicated articles in Health Affairs (a journal that explicitly strives to bridge the research and policy-making communities) to summary compilations in specialized media like Medical Benefits and Medicine and Health. They may include customized issue briefs, seminars or webinars, briefings for legislative staff or written testimony submitted for committee hearings, and letters addressed to policy makers. They can be found in stories in media outlets that are tapped into by both those engaged in policy making and the public, such as a New York Times article reporting recent findings from a study just released in JAMA.

By its own description, the Commonwealth Fund, like other organizations of its kind, functions as a “research broker” for the policy-making community (Collins 2017).1 Health care reform has been the subject of reports or articles circulated by Commonwealth, RAND, the Kaiser Family Foundation, and MPR, among numerous other respected policy-oriented research organizations with easily searchable websites. The GAO, a major provider of analytical studies in direct response to requests from members of Congress (Peterson 1995), has performed some of the most extensive translation of research into legislatively useful information. In the case of reform, for example, there have been a total of 649 GAO reports and testimonies on health care cost control, well over 2,000 on Medicare and Medicaid, and 434 on health program evaluations, among other relevant topics.2 The Congressional Research Service (CRS) also has been an active source of evidence for policy makers. CRS documents are exemplars of synthesizing and translating extant research evidence about an issue into lay language expressly to be readily accessible to members of Congress and their staffs. Much of this information is obtained for and reported only to specific members of Congress who request it, but well over 800 CRS reports are generally available on health care reform.3 In recent years digital communications of various kinds, from blogs to social media, have expanded the pathways of communication to and from policy makers (Greenberg 2012; see also Obar, Zube, and Lampe 2012).

Some forms of communication deliver both translation and synthesis—the packaging together and integration of results from numerous studies, which can be among the most effective ways to draw the attention of policy makers and communicate to them the policy-relevant elements of research evidence (Gold 2009; Lomas 2005). A consensus within the research community on an issue substantially elevates the likely influence of expertise on the ultimate policy choices. One institutionalized example of systematically collating evidence in the health policy field is the Synthesis Project that was supported for a number of years by the Robert Wood Johnson Foundation. It used a panel of experts with close ties to the policy-making community to identify topics for which both the extant research evidence was robust and the timing was commensurate with the needs of policy makers. The Project's reports were “structured around policy questions, rather than research issues, [and] they distill[ed] and weigh[ed] the strength of research evidence in a rigorous and objective manner” (White and Dudley-Brown 2011: 142). Topics relevant to health care reform included the sources of growth in health care costs (Ginsburg 2008) and the impact on costs and health outcomes of consumer-directed health plans (Bundorf 2012). In all forms, these synthetic approaches are likely to gain the greatest resonance with policy makers when they are sensitive to, and inclusive of, the political environment and set of constraints in which elected officials, in particular, have to make their policy choices (Blendon and Steelfisher 2009). Overall, the lead is taken by policy promoters within the policy community.
Some are “policy entrepreneurs,” who aggressively strive to move research-based ideas into the policy-making realm, while others become a leading “policy champion,” perhaps even an elected official, who has the institutional position and political resources to ensure the idea has an audience among those with policy-making authority (Coburn 1998; Kingdon 1995; Lomas 2000a; Oliver and Paul-Shaheen 1997; Roberts and King 1996; Walter, Nutley, and Davies 2005).

Communication as translation can take four forms (see fig. 1). In one version, an accessible but usually decentralized archive of some sort is created, without an explicit effort to move policy making in one direction or another (similar to what Gold [2015] refers to as a research “reservoir”). One could view it as a library of policy ideas and analytical results, available for anyone who is willing to roam the stacks and look for resources that may be of interest. There may be biases in what is chosen for inclusion, but the objective is to bring to the fore, in relatively plain language, collections of research findings as they become available (e.g., Medicine and Health; PolicyArchive, a searchable online archive of reports not found in the published literature; and the various online databases of published research). This form of translation is widely available and of considerable utility—analysts at CBO, for example, make extensive use of the available literature to score proposed legislation. Many participants in the policy-making process, though, may or may not see what is in these compendiums, and even if they do, unguided they may not recognize the value of the information or know how to search for it effectively. Thus, the impact on policy making is likely to be more attenuated than normatively desired.

A second form of communication involves an ongoing exchange that is more interactive, is less dependent on when research evidence is formally released, and employs both written and personal communication. It is not driven by specific moments of policy decision making but promotes the development of trust by policy makers in the sources of policy information and empowers people with policy-making authority to acquire, understand, and incorporate research results (Coburn 1998; Gold 2009; Huberman 1989; Lavis 2004, 2006; Lomas 2000a, 2000b, 2005; Meyer, Alteras, and Adams 2006; Walter, Nutley, and Davies 2005). In this exchange the producers and/or the conveyers of policy-relevant research and consuming policy makers or their advisers engage one another repeatedly in a “collaborative” or “co-production process” (Lomas 2005). The literature suggests that where exchanges are institutionalized, or at least normalized, research evidence is likely to have its greatest impact, because exchange both creates the potential for researchers to learn more about what policy makers need from the research community in terms of relevance, timing, and analytical findings and offers the opportunity for policy makers, in turn, to learn from researchers how to understand and interpret research results, analysis, and the methodologies required to produce credible evidence (e.g., Lomas 2005; Haynes et al. 2011). When I was evaluating grant proposals submitted to the Robert Wood Johnson Foundation Investigator Awards in Health Policy Research program, the biography of one principal investigator stood out as an exemplar of such a relationship. This principal investigator had a close working relationship with a US senator, testified before a Senate subcommittee, as a contractor frequently briefed CMS staff and served on a number of its technical expert panels, and presented to groups that included officials from the ASPE, CMS, and MedPAC, among other activities.
Where true exchange exists, it is also less likely that policy makers will grant the findings of poorly executed, or ideologically motivated, research the same standing and influence as those generated by studies that adhere to traditional standards of analysis. In short, this manner of translation and communication comes closest to the normative ideal of a policy-making process well informed and guided by research evidence well oriented to the needs of policy makers, but it is dependent on contextual factors (such as a local research infrastructure) and incentives that are likely to be present only rarely.

A third form of translation also meets the normative standard for research evidence. It actively seeks to push highly credible analytic findings and prescriptions into the policy-making process by disseminating to the appropriate decision makers and their advisers short, crisp, and clear translations of analytically reliable and highly policy-relevant research evidence. One example would be the efforts by the Robert Wood Johnson Foundation's Changes in Health Care Financing and Organization (HCFO) program. It existed for nearly thirty years through 2016, across two major periods of health care reform debates. HCFO projects yielded results still germane for assessing whether and how to repeal, replace, or repair the Affordable Care Act (ACA). The program used “Study Snapshots,” policy briefings, and small convenings of researchers with policy makers to convey to officials the practical applications of HCFO-funded studies. Other examples include the Health Policy Briefs produced by Health Affairs starting in 2009 and supported by the Robert Wood Johnson Foundation.5 According to former Health Affairs editor Susan Dentzer, these are prepared for the “imaginary new congressional staffer, Ashley,” who has a doctorate in art history and no prior health policy background (MPR and APPAM 2012). A number of organizations also sponsor webinars (for an extensive discussion of tools of dissemination and translation, see Gold 2015).

The fourth version of translation, probably the most prevalent, also falls in the domain of push activities but is designed to recognize and exploit for particular outcome objectives the fact that the process of translation subsumes making choices about what to include, what to exclude, what to highlight, and whom to target with what language. This “framing” is intended not just to translate but to advance a particular policy advocate's preferred lessons to be drawn. Its aim is to push selected and purposefully interpreted research evidence (of varying methodological integrity) into the policy-making process (Lavis 2006). It involves deciding among different kinds of messages and choosing which audiences to address and thus, taken together, how the translation is to be framed to trigger the ideologically or partisanly desired policy response (Entman 1993; Gilliam and Bales 2003, 2004; Stone 2012).

Framing is a strategic activity designed to use metaphors, symbols, or other devices, along with supportive analytic results, to get the members of the target audience to activate, from among their many preexisting internal models of how the world works, the one that comports most closely with what those doing the framing would like to have embedded in policy (Gilliam and Bales 2003, 2004; Lakoff and Johnson 1980; Stone 2012). It also employs messaging that combines the rational thinking process implicit in the call to evidence with the kind of emotional appeals that are highly influential in all forms of individual decision making (Lakoff and Johnson 1980; Westen 2007). The terms deciding, strategic activity, and doing the framing make it clear that in this instance we are discussing individuals and institutions, including actively engaged researchers themselves, seeking to make their work comprehensible and practical to policy makers, interest groups, advocacy organizations, and corporate affairs offices. The intent is to use their role as the conveyers of research results in ways that steer the information scanning of policy makers toward particular findings, shape their interpretive lenses for assessing those findings, and advantage specific policy options. Effective strategic framing can, on the one hand, ensure that valid research findings are seen, understood, and used by policy makers who otherwise would have difficulty recognizing and benefiting from the esoteric products of the research community. On the other hand, framing can also be exploited by stakeholders and ideologically motivated advocates to give methodologically deficient study results greater entrée into the policy-making process. It matters greatly who is doing the framing and for what purpose. Anyone who has been active in health care reform policy and politics would recognize this form of push activity as the primary means by which some forms of evidence enter the legislative arena.

Because this overall strategic form of communication is likely to be the dominant method of pushing research evidence that one witnesses in just about any part of the policy-making process, I give it particular prominence in figure 1. It is the major point in the governing system where power and structure interact with substantive analysis in the shaping of policy outcomes (Parsons 2002). Although framing is an important means for communicators to get policy makers to see and understand the policy significance of even the highest-quality research evidence, its effects may be especially pronounced when the evidentiary base is rife with conflicting findings, or when policy makers are unable to (or unwilling to) distinguish between findings rooted in the credible knowledge base of peer-reviewed research and those emanating from “studies” driven by ideological considerations and devoid of the conventions of analytical rigor. Both are common features of health care reform discourse.

Fueled by the rise of ideologically oriented institutes and think tanks, and given the complexities of most health-related issues and policies, this framing contributes to the frequent exploitation of “evidence” as “ammunition” by policy makers in support of their already held positions. In the form of ammunition, “analysis much more easily kills policy ideas than promotes them,” according to Judy Feder, who has been at the intersection of health policy research and policy making for several decades (MPR and APPAM 2012). Because framing is strategic, intended to elevate specific ideas and interests over others when no consensus exists, the presence, form, and themes of the frames used in a particular policy arena reflect the tensions over policy preferences that derive from the socioeconomic-political context, including demographic characteristics (dominant populations), the nature of group mobilization (e.g., commercial interests such as insurance and pharmaceutical companies are “privileged” groups that can most easily overcome the collective action problem; Olson 1965), and the extant research infrastructure (from the development of enhanced analytical capacity in government, or its demise through budget cuts and political appointments, to the spawning of “advocacy tanks”).

Acquisition

The production and communication of research evidence, even when pushed, do not ensure that it actually penetrates the policy-making community. For policy-relevant research findings to have an impact on what policy makers do, another necessary but insufficient condition is that they be acquired in some form and achieve a reasonable degree of circulation within the policy-making arena. That can come about in two general ways (see fig. 1). One casts policy makers and their advisers in a reactive stance, being open and receptive to the push of research evidence directed to them, either objectively translated into analytically supported policy solutions or purposively framed by policy entrepreneurs in ways that validate the policy makers' existing predispositions. Given policy makers' political incentives, short time horizons, and competing demands, it is challenging for them to be more than relatively passive recipients of information (see, e.g., Arnold 1990). This is likely to be the main avenue of research evidence into the policy-making process. But it depends on there being a receptive audience, which in part hinges on the capacity of researchers or others communicating their results to arrive at a time of informational need and to highlight the evidence most pertinent to the explicit policy choices that are live on the agenda. One must emphasize, too, that some of those with policy-making authority will be interested in analytically well-grounded evidence, and others will be amenable only to the kind of evidence they perceive as compatible with their ideological stances. The latter has become a proportionally larger group as partisanship and ideological polarization rise (Patashnik and Peck 2017). One version satisfies the normative standards regarding the role of research evidence in policy making and the other does not.

The other general means of acquisition has participants in the policy-making community being more proactive, energetically reaching out and striving to pull research evidence into the policy-making process (Lavis 2006). The pull mechanism may simply involve the willingness and proficiency of people in the policy-making community to search out ideas and supporting evidence that have been made available and accessible to them, perhaps “browsing” in the “libraries” I mentioned earlier. When I was a legislative assistant for health policy in the office of Senator Tom Daschle and we were drafting a comprehensive health care reform plan, a long-term care consumer protection act, and other health care legislation, we searched widely in both the academic and applied research domains for evidence-based ideas. We both hunted for published research and engaged in direct conversations with specialists. To be sure, we tended to give greater credibility to “translators” with compatible policy priors, but we sought to learn from the knowledge and expertise of people across multiple industries and also rejected the advice and admonitions of political allies who were resistant to credible research evidence, such as trial lawyers on medical malpractice reform.

This search and acquisition process may be made more routine and effective by elevating the internal analytical capacity of policy-making institutions, bringing it closer to the normative standard for the role of research evidence. That could entail establishing additional infrastructure to support policy makers in the search and acquisition task and the associated analysis of the evidence. Congress nurtured an explosion in such internal resources. The Congressional Budget and Impoundment Control Act of 1974 established the CBO, introducing the in-house ability to do complex scoring of the financial and other consequences of proposed bills and amendments (Joyce 2011). In the 1970s Capitol Hill also strengthened the analytical bona fides of existing agencies like the GAO and the CRS. The House and Senate further provided legislators and committees with additional resources to hire professional, analytically trained staff (Oliver 1993; Peterson 1995; Smith 1992; Whiteman 1995). Before these capabilities existed, Congress lacked the internal mechanisms needed to challenge low-ball program cost estimates like the ones President Johnson promoted for Medicare as the legislation was being crafted (Blumenthal and Morone 2009: 192). Later CBO, with its expansive and dependable assessments of budget effects and impacts on insurance coverage, would become the bane of both Democratic presidents advocating proposals for health care reform and subsequent Republican efforts to “repeal and replace” Obamacare (Brill 2015; Jacobs and Skocpol 2010; Skocpol 1997; Matthews 2017). Personal office staffs in Congress had changed as well. The Daschle health policy staff that I joined in the early 1990s, for example, included individuals with MPP, PhD, and MD graduate degrees, well versed in the substantive details of the health care system and the methods of policy assessment.
The increase since the 1960s in Capitol Hill's institutional analytical competence, and growth and professionalization of committee and personal office staffs, greatly enhanced the opportunity for Congress and its members to assertively pull research evidence into policy decision making (Bimber 1996; Joyce 2011; Peterson 1995). It is one reason that “the United States Congress is the most important national legislature in the world. . . . The only one that still plays a powerful, independent role in public policymaking” (Quirk and Binder 2005: xix).

The robustness of institutional analytical capacity varies enormously across American policy making. Functionality even greater than what Congress now exhibits emerged much earlier in the federal executive branch, manifested in the “policy shops” populating federal agencies as part of the “emergence of the evaluation research industry” (Frumkin and Francis 2015; Lynn 1989). Although “we know almost nothing about policy analysis conducted at the state level” (Hird 2005: 196), the literature that does exist suggests that few if any state-level executives can match the federal experience. It is also reasonable to presume that governors are typically better equipped than their legislatures, most of which do not have the resources either to receive pushed messages or to pull analytically rich information, although there is considerable variation, and many legislators respect nonpartisan analysis (Hird 2005; Jones 2017; Rosenthal 2009, 2013; Squire 2012; Weissert and Weissert 2000). The National Conference of State Legislatures lists only three states with legislatures whose members are “full-time” and “well paid” and supported by “large staffs”: California, New York, and Pennsylvania (NCSL 2014; Rosenthal 2009; Squire 2012; Sabatier and Whiteman 1985). Even less professionalized states, though, can have nonpartisan research enterprises valued by their legislators (Hird 2005). Still, California, with the most professionalized legislature among the states, also boasts the widely respected Legislative Analyst's Office. Established in 1941—like GAO, originally more about accounting than policy analysis—it has a staff of over forty analysts and provides nonpartisan assessments of budgetary and other issues (Legislative Analyst's Office, n.d.; Hill 2003).
The legislatures of sixteen states are at the opposite end of the professionalism spectrum and often have little in the way of analytical support agencies or staff; a few also have no policy analysts at all in the overall legislative arena (Hird 2005; NCSL 2014; see also Jones 2017; Rosenthal 2009; Squire 2012). Such states are especially prone to the influence of outside policy advocates. Legislative professionalism, for example, is negatively correlated with the passage of the pro-business model bills promoted by the conservative group ALEC (Hertel-Fernandez 2014).

Taking capacity building to the next level, a legislature may institute programs that enhance the policy research literacy of the individual policy makers and their advisers—what Jewell and Bero (2008) refer to as “training in evidence-based skills” (see also Coburn 1998; Majumdar and Soumerai 2009)—so that they have the personal ability to select credible analyses and interpret research evidence appropriately and effectively. Such individual-level capacity strengthening, albeit fairly rare in Congress and even more so at the state level (Curry 2015; Hird 2005), also reduces among policy makers the inclination to accept without skepticism the questionable findings of substandard research, especially important in policy areas like health where the issues can be highly complex.

The process by which research evidence is acquired in the policy-making community is thus substantially a product of the governing institutional context. For legislators to engage in the active search for relevant and high-quality research evidence, for example, they need analytical support services, such as professional personal staffs and access to the work of analytic legislative agencies, as well as the incentives of specialization created by the possibility of having legislative careers and long service on standing committees of jurisdiction. Such incentives have been promoted in the modern Congress but are inhibited in state legislatures, especially those with term limits (Deering and Smith 1997; Kousser 2005). The role of the executive branch in formulating and informing policy options, the degree of the independence of the legislature, the extent of legislator professionalism, the availability and skills of staff, the role of party leaders and partisanship in the legislature, the accessibility of the legislature to outside interests, and so forth, are all going to affect the desire and capacity of legislators to be more than passive receivers of policy-relevant evidence brought to them by strategic actors outside the legislature (Arnold 1990; Curry 2015; Hird 2005; Patashnik and Peck 2017; Peterson 1990, 1997; Rosenthal 2009; Squire 2012; Weaver and Rockman 1992; Weissert and Weissert 2000, 2012).

The striking rise in partisan and ideological polarization in Congress illustrates one of the most significant features of the institutional context, with pronounced implications for the receptivity of legislators to conventional research evidence.6 Figures 2a–d show the ideological distribution of Democratic and Republican representatives and senators in two Congresses: the 93rd (1973–74), when Richard Nixon was president and promoting his Comprehensive Health Insurance Plan (CHIP), and the 111th (2009–10), with Barack Obama in the White House and the passage of the ACA. (The x-axes are based on the DW-NOMINATE [dynamic weighted nominal three-step estimation] scores [a left-right continuum associated with positions on government intervention] calculated by Royce Carroll, Jeff Lewis, James Lo, Nolan McCarty, Keith Poole, and Howard Rosenthal using all recorded roll-call votes.) Between the 93rd and 111th Congresses, conservative Democrats and liberal Republicans largely disappeared from both chambers, leaving no ideological overlap between the parties. In addition, while the median Democrat shifted slightly to the left, the median Republican moved dramatically to the right, especially in the House.

Between the time of CHIP and the ACA, electoral politics led to a significant retreat in the GOP's confidence in the research enterprise. It began in 1995 after an increasingly conservative Republican party, led by Newt Gingrich, in the 104th Congress took control of both the House and Senate in the 1994 elections for the first time since the early 1950s. Among the acts of that Congress was to shut down the Office of Technology Assessment (OTA), a congressional agency that, in the words of Bruce Bartlett, who served in both the Reagan and first Bush administrations, “brought high-level scientific expertise to bear on legislative issues” (Bartlett 2011; see also Bimber 1991). Although there were budget-cutting and duplication-reducing rationales for closing the OTA, long-time OTA director John Gibbons claimed that “the demise of the agency after it had proved its effectiveness reflected an anti-intellectual and antiscience mentality among some members of Congress who were not interested in looking at issues factually” (quoted in Leary 1995). After nearly three decades of Medicare being given bipartisan congressional oversight largely guided by advisory panels of experts, Republican legislative plans to transform Medicare into a capped program for beneficiaries to purchase private insurance immediately gained force because of the GOP's new majorities and the entrée they granted to different kinds of policy voices, not as the result of a new peer-reviewed evidence base or its translation (Oberlander 2003). By 2010 what had been bipartisan issues well informed by research evidence were transformed into deeply ideological concerns that split the parties. End-of-life planning, favored by a consensus in the community of health services, medical, and ethics research and with considerable bipartisan endorsement in the past, became the basis for the claims that the ACA would create “death panels” (Tinetti 2012).
Whereas previously both Democrats and Republicans wishing to promote high-quality and more financially stable health care delivery favored comparative-effectiveness research, by the 111th Congress Republicans rejected it as a cover for rationing (Avorn 2009).

A further sign of the ideological swing in the realm of research and policy making is illustrated by the change that occurred at the Heritage Foundation, an influential producer of policy “evidence” (ranked seventh among “top think tanks in the United States”; McGann 2016: 56). Since its founding in 1973 it has been dedicated to conservative, market-oriented principles, but it has had a number of well-informed senior staff and a commitment to the think-tank mantle of policy-oriented research. Its founding president, Edwin Feulner, earned a PhD from the University of Edinburgh. For thirty-five years Stuart Butler—with undergraduate degrees in math and physics, an MA in economics, and a PhD in American economic history—served as director of its Center for Policy Innovation and earned a central place in the national health care specialist community. Feulner was succeeded in 2013 by Jim DeMint, an ideologue and politician who had been the fourth most conservative Republican in the US Senate during his last term in Congress (based on Voteview.com/data) and remained a leader in the hard-right Tea Party movement (Skocpol and Williamson 2013). His most advanced degree is an MBA, and he began his adult career in advertising. He also created Heritage Action for America, with explicit political and electoral objectives. Butler, in the meantime, jumped ship for the more establishment Brookings Institution. One was not likely to find DeMint, before his ouster in 2017, burdened by or even knowledgeable of the dictates and norms normally associated with the research enterprise.

By the time of the unsuccessful efforts of the Republican majorities in Congress and the new administration of Donald Trump to repeal and replace the ACA, congressional Republicans had moved even further to the right than shown in figure 2. The broader public and congressional debates were waged by the president and others largely on ideological grounds or along rhetorical rifts, with policy-relevant claims that were not connected to or supported by policy analysis. I was unable to identify any body of policy-oriented research presented in direct support of the specific provisions in either the House or the Senate repeal-and-replace plans that were debated in 2017, or in support of President Trump's publicly stated positions. Indeed, by avoiding the regular order of the committee process, including hearings and testimony, and drafting the proposed bills largely in secret in small groups, the Republican leadership in each chamber made it difficult for research evidence to gain conventional entrée to the process (Carlsen and Park 2017). Such evidence did prove to be instrumental to the outcome, however, because the reports from CBO and other organizations gave a few Republican senators, such as Susan Collins from Maine and Lisa Murkowski from Alaska, sufficient pause to stand against their party's line and deny their plans a majority (Levey and Mascaro 2017).

Use

As I discuss at the beginning of this article, at times outcomes of the policy-making process have far more to do with politics and power than with the sway of any form of analytic information, especially research evidence as defined by the specialist community. But when political forces are not so thoroughly dominant, when “substantive learning” actually takes place in the policy-making process (Hall 1993; Peterson 1997), policy information may be used in a number of ways (see fig. 1). First one must distinguish between direct or instrumental uses, in which evidence is employed to inform and influence decisions about specific policy issues, such as legislative votes on particular provisions of bills, and indirect effects, best recognized in what Weiss (1979, 1999) terms enlightenment, when the accretion of policy research over time changes the way policy makers understand an issue, its underlying dynamics, and thus the viability and appropriateness of policy options (see also Hird 2005, 2017). One need only consider the case of smoking and tobacco control policy in the United States over the past forty years—nationally and in individual states and communities—to appreciate the ultimate power of research evidence, hewing to the precepts of the scientific method, to contribute eventually, slow and diffuse as it might be, to policy change by altering the entire basis on which an issue is considered.

Borrowing from Weaver (2000; similar to Weiss 1989; see Haynes et al. 2011), instrumental uses of policy research in policy making can be organized into four categories. Technocratic uses fit the normatively preferred image of research evidence—at its best, prescriptive—being harnessed to yield optimal public policy based on valid and reliable empirical analysis. It is as though credentialed researchers and policy makers have labored in concert to arrive at analytically well-supported, appropriate policy actions, given social goals. The second category of use, as a “fire alarm,” may well reflect the impact of equally rigorous evidence but for more limited purposes. Solid evidence of Lawrence Brown's “documentation” sort described earlier in the article, for example, would make clear the presence or aggravation of a problem warranting the attention of government officials, but sufficient knowledge may not be available to inform and guide the design of an effective response. A third use of policy information has it functioning as a “circuit breaker.” Here, too, the information could meet the normative requirements for research evidence, but again, its impact is narrowly delimited, in this instance to warn that a course of action under consideration would not succeed. Despite their limitations, the fire alarm and circuit-breaker roles for research evidence are vital to informed policy making. That was well illustrated, albeit briefly, when Speaker Paul Ryan, in the face of near-certain defeat, pulled from a floor vote the American Health Care Act (AHCA), the March 2017 bill the Republican leadership and the Trump administration presented to repeal and replace the ACA.
In clear circuit-breaker fashion, what at the time apparently thwarted the forming of a majority Republican coalition in the House, inclusive of moderate Republicans, and helped secure universal Democratic opposition was the most instrumental form of policy analysis: the scoring of the proposed AHCA by CBO. The CBO's report, widely covered in the print and broadcast media, projected that the AHCA would reduce overall net spending by $317 billion, but the impact of that figure was drowned out by the estimated loss of health insurance coverage for 24 million people over ten years, including 14 million in the first year (CBO 2017). CBO scoring had a similar impact on the Republican efforts to gain support within their ranks in the Senate for their version of repeal and replace, the Better Care Reconciliation Act (Goldstein and Snell 2017).

The final way in which policy research is used is much less beneficial and likely the most widespread, at least in legislative settings like Congress and on issues as ideologically divisive as health care reform (see Patashnik and Peck 2017). Policy makers selectively pick and choose “evidence” to use as ammunition in support of their own ideological predispositions. It is possible that in some cases the ammunition derives from research evidence, but it is more likely to come from advocacy research, and in either case, its intent is to lend an air of credibility to preconceived notions, not to advance knowledge-based policy making. Another strategy is simply to attack research-based conclusions as “just a theory,” a common refrain, for example, in the climate change debates (see Ghose 2013; Jacoby 2008). Given what often transpires in the production, communication, and acquisition stages, there is typically little opportunity for technocratic use of research evidence, only intermittent occasions for fire alarms or circuit breakers to be tripped, and far more routine openings for some kind of evidence to be exploited as ammunition. That is especially true in the realm of major efforts to restructure the health care system. All the translation and dissemination activities one can muster, and all the effective media training one can provide to even enthusiastic and willing researchers, will have relatively little bearing. Neither Democratic Senator Bernie Sanders, a vocal proponent of single-payer financing through “Medicare for All,” nor Republican House Speaker Paul Ryan, an avid promoter of limited government, unfettered markets, and transforming Medicare into a system of capped vouchers to purchase private health insurance and Medicaid into block grants or per capita caps to the states—close to the polar extremes of health policy ideology—will ever persuade the other no matter how well armed with hefty research reports.

There is one important complexity that prior research has highlighted. If the production process has resulted in a cacophony of seemingly credible studies reporting contradictory findings—if even the idealized research community fails to send a unified message—it is possible, in one sense, for research evidence to become extraneous to policy making. Such circumstances fuel the inclination of policy makers to mine the available information for purposes of rhetorical flourish and defense of ideological positions already held—as ammunition in their debates—merely reinforcing existing political forces rather than exhibiting an independent effect (Weaver 2000; Weiss 1979, 1989).

Conclusion: The Contexts and Contributions of Research Evidence

This assessment of the processes by which research evidence can enter into policy making underscores the profound effects—both direct and veiled—that keep actual political practice far from the normative standards generally desired by the health services research analytical community and the funders of the studies it conducts. As a general proposition, the parameters of a democratic society, electoral incentives, the mobilization of interests, and structural features of a highly layered and fragmented political system will ensure that when there is a conflict between the policy directions derived from research evidence and those aligned with influential political forces, the latter will prevail. This is especially true when society, and its representatives in government, is intensely polarized into combative camps with divergent values, experiences, and aspirations, further colored by mutual distrust. No amount of sophisticated research, followed by astute translation and broad dissemination of the policy-relevant and analytically credible research findings, can overcome this general equilibrium.

Nonetheless, research evidence has had significant and beneficial impacts on the policy-making process under the right conditions. One of the sources for this positive effect is that, from the 1960s onward, experts with advanced analytical training have been embedded in government itself. As a thought experiment, imagine what the state of health policy making at the national level would be absent the creation of units like the CMS, the ASPE, the Agency for Healthcare Research and Quality, CBO, and MedPAC (joined since the ACA by the Center for Medicare and Medicaid Innovation and the Patient Centered Outcomes Research Institute); the shift toward policy-analytic capacity at the CRS, GAO, and Office of Management and Budget; and the massive expansion of professional staff with health policy portfolios, often with graduate-level training, at the congressional committees, at the personal offices of senators and representatives, and at innumerable nonprofit institutions and policy institutes. All of these changes occurred following the enactment of Medicare and Medicaid (matched, to a large degree, by the advancement of policy shops in major interest groups, such as AARP's Public Policy Institute, and the establishment of firms with sophisticated analytical capacity on health issues, such as the Lewin Group). Experts in government share the norms and methodologies of the broader research community—they are sometimes producers of evidence, as at the GAO, but are certainly frequent consumers of what is generated by the health services and policy research community and similar researchers. The value of this connection was revealed when the demise of the OTA left a significant hole in the link between the research community and those who perform the acquisition function in government.
The participants and their relationships may not constitute the most advanced version of the exchange and capacity-building activities that join communication with acquisition, but this network is the most developed and the easiest to nurture, and thus vital to protect.

With due regard for these practical limits, the reach and influence of research evidence could be enhanced in the future. To ensure greater accessibility to policy makers, more can be done to expand the scope of consumable morsels of evidence, to house them in accessible archives of various sorts, to make sure that those with policy-making authority know where to look and how to access this information as they need it, and to experiment with varied methods and technologies to enhance the push-translation form of communication (without nurturing any delusional expectations of creating a new world of evidence-driven policy making). As a reminder to researchers themselves, who often want to remain unsoiled by the machinations of politics (Keller 2009): in the words of Judy Feder, those “who think they can stay outside of the political fray, but their research is on policy-relevant issues,” need to understand that “if you don't bring it to the debate, others will use it, twist it, for their own purposes” (MPR and APPAM 2012).

Replication of research results and the emergence of a consensus position among credible analysts make it more likely that the research-evidence message will be heard and seriously weighed by policy makers, and less likely that it can be countered by the false equivalence of ideologically driven reports. Funders would thus do well to sponsor studies designed to test whether effects hold consistently across multiple settings and varied methodologies. This is, of course, a necessary but not sufficient condition. One need only look at the issue of climate change to see that actual and demonstrable consensus among scientists with respect to both documentation and analysis, in Brown's terms, need not translate into acceptance and receptivity by either the public or its elected officials (Oreskes 2004).

For issues of public salience—major, nontechnical dimensions of policy making and reform—one of the most significant threats to the influence of research evidence arises when policy approaches derived from it fly in the face of what satisfies the “ordinary knowledge” of the public and of elected officials sensitive to their constituents' reactions (Schick 1991). Myriad features of President Clinton's complicated Health Security Act, for example, were deeply informed by specialists in or conversant with the research community. The plan arguably worked on paper, but it never made sense to the public or even to activists in numerous reform-oriented advocacy groups (Peterson 2017: chap. 10). A similar claim could be made about President Obama's ACA (Peterson 2017: chaps. 11, 12). That is not the fault, per se, of the research community, but it does suggest that policy ideas, to be politically resilient, have to be attuned to ordinary knowledge, and that more should be done to educate not only policy makers but also their constituents (Peterson 1997).

Perhaps the gravest threat today to research evidence having a policy impact among elected officials in the public sector is the current state of ideological and partisan polarization in American politics, at the national level, in many of the states, and among party affiliates in the public. In such a divisive environment, where mythmaking has become a growth industry, evidence, however derived, risks having no utility other than as ammunition. Expertise-bound institutions like CBO, even with its long history of bipartisan credibility and legitimacy, find themselves under assault (House 2017). Even studies that enjoy nominal bipartisan support are not protected from dismissal, because the relative moderates of either party lack credibility within their own partisan ranks. There is no obvious solution for this problem, and certainly not one that derives from the research community itself.

Mark A. Peterson is professor of public policy, political science, and law at the UCLA Luskin School of Public Affairs. A specialist on the American policy process, an elected member of the National Academy of Social Insurance, and former editor of JHPPL, he has been a legislative assistant for health policy in the US Senate, chaired the National Advisory Committee for the Robert Wood Johnson Foundation's Changes in Health Care Financing and Organization program, served on the National Academy of Social Insurance's Study Panel on Medicare and Markets, and is now on the Internal Advisory Board for the UCLA Clinical and Translational Science Institute.

markap@ucla.edu

Acknowledgments

I owe considerable thanks to Michael Gluck, codirector of the AcademyHealth Translation and Dissemination Institute, and the participants in Academy Health's Workshop on Improving the Translation and Dissemination of Health Services Research. This article benefited enormously from comments by Sherry Glied, guest editor of this issue; an anonymous external reviewer; and Jonathan Engel and other participants at the JHPPL special issue conference held at the NYU Wagner Graduate School of Public Service.

This article originated in a paper commissioned for the 2014 AcademyHealth Workshop on Improving the Translation and Dissemination of Health Services Research, supported by the Robert Wood Johnson Foundation and Kaiser Permanente.

Notes

1. Sarah Collins, the Commonwealth Fund, at the Journal of Health Politics, Policy and Law/NYU Robert F. Wagner Graduate School of Public Service conference on Policy Analysis and the Politics of Health Policy: Scholarship, Knowledge Translation, and Policymaking, New York, May 2, 2017.

2. www.gao.gov/browse/topic/Health_Care (accessed April 12, 2017).

6. Among state legislatures there is considerable variation in polarization, with some net increase overall, which can lead to more policy-maker resistance to nonpartisan research (Shor and McCarty 2011: 546–47; Hird 2005: 202).

References

Arnold, R. Douglas. 1990. The Logic of Congressional Action. New Haven, CT: Yale University Press.

Avorn, Jerry. 2009. “Debate about Funding Comparative-Effectiveness Research.” New England Journal of Medicine 360, no. 19: 1927–29.

Baker, G. Ross, Liane Ginsburg, and Ann Langley. 2004. “An Organizational Science Perspective on Evidence-Based Decision-Making.” In Using Knowledge and Evidence in Health Care: Multidisciplinary Perspectives, edited by Louise Lemieux-Charles and François Champagne, 86–114. Toronto: University of Toronto Press.

Bardach, Eugene. 2011. A Practical Guide for Policy Analysis: The Eightfold Path to More Effective Problem Solving. 4th ed. Washington, DC: CQ.

Barlett, Bruce. 2011. “Gingrich and the Destruction of Congressional Expertise.” New York Times, November 29. economix.blogs.nytimes.com/2011/11/29/gingrich-and-the-destruction-of-congressional-expertise/?_php=true&_type=blogs&_r=0.

Belkin, Gary S. 1997. “The Technocratic Wish: Making Sense and Finding Power in the ‘Managed’ Medical Market Place.” Journal of Health Politics, Policy and Law 22, no. 2: 509–32.

Bimber, Bruce. 1996. The Politics of Expertise in Congress: The Rise and Fall of the Office of Technology Assessment. Albany: State University of New York Press.

Blendon, Robert J., and Gillian K. SteelFisher. 2009. “Commentary: Understanding the Underlying Politics of Health Care Policy Decision Making.” Health Services Research 44, no. 4: 1137–43.

Blumenthal, David, and James A. Morone. 2009. The Heart of Power: Health and Politics in the Oval Office. Berkeley: University of California Press.

Brill, Steven. 2015. America's Bitter Pill: Money, Politics, Backroom Deals, and the Fight to Fix Our Broken Healthcare System. New York: Random House.

Brown, Lawrence D. 1991. “Knowledge and Power: Health Services Research as a Political Resource.” In Health Services Research: Key to Health Policy, edited by Eli Ginzberg, 20–45. Cambridge, MA: Harvard University Press.

Bump, Philip. 2014. “Americans for Prosperity May Be America's Third-Biggest Political Party.” Washington Post, June 19.

Bundorf, M. Kate. 2012. Consumer-Directed Health Plans: Do They Deliver? Research Synthesis Report No. 24, October. Princeton, NJ: Robert Wood Johnson Foundation.

Carlsen, Audrey, and Haeyoun Park. 2017. “Which Party Was More Secretive in Working on Its Health Care Plan?” New York Times, July 10. www.nytimes.com/interactive/2017/07/10/us/republican-health-care-process.html?_r=0.

CBO (Congressional Budget Office). 2017. “Cost Estimate, American Health Care Act.” March 13. Washington, DC: CBO.

Coburn, Andrew F. 1998. “The Role of Health Services Research in Developing State Health Policy.” Health Affairs 17, no. 1: 139–51.

Cox, Gary W., and Matthew D. McCubbins. 2007. Legislative Leviathan: Party Government in the House. New York: Cambridge University Press.

Curry, James M. 2015. Legislating in the Dark: Information and Power in the House of Representatives. Chicago: University of Chicago Press.

Davies, Huw T. O., Sandra M. Nutley, Isabel Walter, and J. Wilkinson. 2002. Making It Happen: Developing Understanding of Research Utilisation and EBP Implementation. St. Andrews, UK: Research Unit for Research Utilisation, University of St. Andrews.

Deering, Christopher J., and Steven S. Smith. 1997. Committees in Congress. Washington, DC: CQ.

Drezner, Daniel W. 2017. The Ideas Industry: How Pessimists, Partisans, and Plutocrats Are Transforming the Marketplace of Ideas. New York: Oxford University Press.

DuMont, Kim. 2015. “Leveraging Knowledge: Taking Stock of the William T. Grant Foundation's Use of Research Evidence Grants Portfolio.” William T. Grant Foundation, November 23. wtgrantfoundation.org/resource/leveraging-knowledge-taking-stock-of-the-william-t-grant-foundations-use-of-research-evidence-grants-portfolio.

Entman, Robert M. 1993. “Framing: Toward Clarification of a Fractured Paradigm.” Journal of Communication 43, no. 4: 51–58.

Esterling, Kevin M. 2004. The Political Economy of Expertise: Information and Efficiency in American National Politics. Ann Arbor: University of Michigan Press.

Fischer, Frank. 2003. Reframing Public Policy: Discursive Politics and Deliberative Practices. New York: Oxford University Press.

Ford, Kevin J., Neal Schmitt, Susan L. Schechtman, Brian M. Hults, and Mary L. Doherty. 1988. “Process Tracing Methods: Contributions, Problems, and Neglected Research Questions.” Organizational Behavior and Human Decision Processes 43, no. 1: 75–117.

Frumkin, Peter, and Kimberly Francis. 2015. “Constructing Effectiveness: The Emergence of the Evaluation Research Industry.” In LBJ's Neglected Legacy: How Lyndon Johnson Reshaped Domestic Policy and Government, edited by Robert H. Wilson, Norman J. Glickman, and Laurence E. Lynn Jr., 397–425. Austin: University of Texas Press.

George, Alexander L., and Timothy J. McKeown. 1985. “Case Studies and Theories of Organizational Decision Making.” Advances in Information Processing in Organizations 2: 21–58.

Ghose, Tia. 2013. “‘Just a Theory’: Seven Misused Science Words.” Scientific American, April 2. www.scientificamerican.com/article/just-a-theory-7-misused-science-words/.

Gilliam, Franklin D., Jr., and Susan Nall Bales. 2003. “Strategic Frame Analysis and Youth Development: How Communications Research Engages the Public.” In Handbook of Applied Developmental Science: Promoting Positive Child, Adolescent, and Family Development through Research, Policies, and Programs, edited by Richard M. Lerner, Francine Jacobs, and Donald Wertlieb, 421–36. Thousand Oaks, CA: Sage.

Gilliam, Franklin D., Jr., and Susan Nall Bales. 2004. “Framing Early Childhood Development: Strategic Communications and Public Preferences.” Building State Early Childhood Comprehensive Systems Series, No. 7. https://eric.ed.gov/?id=ED496843 (accessed January 2, 2018).

Ginsburg, Paul B. 2008. High and Rising Health Care Costs: Demystifying US Health Care Spending. Research Synthesis Report No. 16, October. Princeton, NJ: Robert Wood Johnson Foundation.

Gold, Marsha. 2009. “Pathways to the Use of Health Services Research in Policy.” Health Services Research 44, no. 4: 1111–36.

Gold, Marsha. 2015. “Translation and Dissemination of Health Services Research for Health Policy: A Review of Available Infrastructure and Evolving Tools.” AcademyHealth, April. www.academyhealth.org/files/publications/files/FileDownloads/LessonsProjectHSRPolicy.pdf (accessed January 2, 2018).

Goldstein, Amy, and Kelsey Snell. 2017. “Senate GOP Health-Care Bill Appears in Deeper Trouble following New CBO Report.” Washington Post, June 26.

Greenberg, Sherri R. 2012. “Congress and Social Media.” October 22. Austin: Lyndon B. Johnson School of Public Affairs, University of Texas at Austin.

Greenlick, Merwyn R., Bruce Goldberg, Phil Lopes, and James Tallon. 2005. “Health Policy Roundtable—View from the State Legislature: Translating Research into Policy.” Health Services Research 40, no. 2: 337–46.

Hall, Peter. 1993. “Policy Paradigms, Social Learning, and the State: The Case of Economic Policymaking in Britain.” Comparative Politics 25, no. 3: 275–96.

Haynes, Abby S., James A. Gillespie, Gemma E. Derrick, Wayne D. Hall, Sally Redman, Simon Chapman, and Heidi Sturk. 2011. “Galvanizers, Guides, Champions, and Shields: The Many Ways that Policymakers Use Public Health Researchers.” Milbank Quarterly 89, no. 4: 564–98.

Heclo, Hugh. 1974. Modern Social Politics in Britain and Sweden. New Haven, CT: Yale University Press.

Hertel-Fernandez, Alexander. 2014. “Who Passes Business's ‘Model Bills’? Policy Capacity and Corporate Influence in US State Politics.” Perspectives on Politics 12, no. 3: 582–602.

Hill, Elizabeth G. 2003. “Non-partisan Analysis in a Partisan World.” Journal of Policy Analysis and Management 22, no. 2: 307–10.

Hird, John A. 2005. Power, Knowledge, and Politics: Policy Analysis in the States. Washington, DC: Georgetown University Press.

Hird, John A. 2017. “How Effective Is Policy Analysis?” In Does Policy Analysis Matter?, edited by Lee S. Friedman, 44–84. Oakland, CA: University of California Press.

House, Billy. 2017. “Republicans Picked the CBO Chief. Now They're Attacking the Office over Obamacare.” Bloomberg Politics, March 14. www.bloomberg.com/news/articles/2017-03-14/republicans-focus-fire-on-cbo-over-dire-health-care-estimate (accessed January 2, 2018).

Huberman, Michael. 1989. “Predicting Conceptual Effects in Research Utilisation: Looking with Both Eyes.” Knowledge in Society 2, no. 3: 6–24.

Jacobs, Lawrence R., and Theda Skocpol. 2010. Health Care Reform and American Politics: What Everyone Needs to Know. New York: Oxford University Press.

Jacobson, Nora, Dale Butterill, and Paula Goering. 2004. “Organizational Factors That Influence University-Based Researchers' Engagement in Knowledge Transfer Activities.” Science Communication 25, no. 3: 246–59.

Jacoby, Susan. 2008. The Age of American Unreason. New York: Pantheon.

Jewell, Christopher J., and Lisa A. Bero. 2008. “‘Developing Good Taste in Evidence’: Facilitators of and Hindrances to Evidence-Informed Health Policymaking in State Government.” Milbank Quarterly 86, no. 2: 177–208.

Jones, David K. 2017. Exchange Politics: Opposing Obamacare in Battleground States. New York: Oxford University Press.

Joyce, Philip G. 2011. The Congressional Budget Office: Honest Numbers, Power, and Policymaking. Washington, DC: Georgetown University Press.

Keller, Anne C. 2009. Science in Environmental Policy: The Politics of Objective Advice. Cambridge, MA: MIT Press.

Kingdon, John W. 1995. Agendas, Alternatives, and Public Policies. New York: HarperCollins.

Kousser, Thad. 2005. Term Limits and the Dismantling of State Legislative Professionalism. New York: Cambridge University Press.

Lakoff, George, and Mark Johnson. 1980. Metaphors We Live By. Chicago: University of Chicago Press.

Lasswell, Harold D. 1936. Politics: Who Gets What, When, and How. New York: Whittlesey House.

Laugesen, Miriam J. 2016. Fixing Medical Prices: How Physicians Are Paid. Cambridge, MA: Harvard University Press.

Lavis, John N. 2004. “A Political Science Perspective on Evidence-Based Decision Making.” In Using Knowledge and Evidence in Health Care: Multidisciplinary Perspectives, edited by Louise Lemieux-Charles and François Champagne, 70–85. Toronto: University of Toronto Press.

Lavis, John N. 2006. “Research, Public Policymaking, and Knowledge-Translation Processes: Canadian Efforts to Build Bridges.” Journal of Continuing Education in the Health Professions 26, no. 1: 37–45.

Leary, Warren E. 1995. “Congress's Science Agency Prepares to Close Its Doors.” New York Times, September 24.

Legislative Analyst's Office. n.d. “About Our Office.” www.lao.ca.gov/About (accessed October 1, 2017).

Levey, Noam N., and Lisa Mascaro. 2017. “Latest GOP Obamacare Repeal Effort on Verge of Collapse as Third Republican Comes Out against Bill.” Los Angeles Times, September 25.

Lomas, Jonathan. 2000a. “Connecting Research and Policy.” ISUMA Canadian Journal of Policy Research 1, no. 1: 140–44.

Lomas, Jonathan. 2000b. “Using ‘Linkage and Exchange’ to Move Research into Policy at a Canadian Foundation.” Health Affairs 19, no. 3: 236–40.

Lomas, Jonathan. 2005. “Using Research to Inform Healthcare Managers' and Policy Makers' Questions: From Summative to Interpretive Synthesis.” Health Policy 1, no. 1: 55–71.

Lynn, Laurence E., Jr. 1989. “Policy Analysis in the Bureaucracy: How New? How Effective?” Journal of Policy Analysis and Management 8, no. 3: 373–77.

Majumdar, Sumit R., and Stephen B. Soumerai. 2009. “The Unhealthy State of Health Policy Research.” Health Affairs 29, no. 5: w900–w908.

Malbin, Michael J. 1980. Unelected Representatives: Congressional Staff and the Future of Representative Government. New York: Basic Books.

Matthews, Dylan. 2017. “The Republican War on the CBO, Explained.” Vox, July 19. www.vox.com/policy-and-politics/2017/7/19/15967224/congressional-budget-office-cbo-war-explained.

Mayer, Jane. 2010. “Covert Operations.” New Yorker, August 30.

McGann, James G. 2016. 2016 Global Go To Think Tank Index Report. Think Tanks and Civil Societies Program, Lauder Institute, University of Pennsylvania. repository.upenn.edu/cgi/viewcontent.cgi?article=1011&context=think_tanks.

Medvetz, Thomas. 2012. Think Tanks in America. Chicago: University of Chicago Press.

Meyer, Jack A., Tonya T. Alteras, and Karen Bentz Adams. 2006. Toward a More Effective Use of Research in State Policymaking. New York: Commonwealth Fund.

Moat, Kaelan A., John N. Lavis, and Julia Abelson. 2013. “How Contexts and Issues Influence the Use of Policy-Relevant Research Syntheses: A Critical Interpretive Synthesis.” Milbank Quarterly 91, no. 3: 604–48.

MPR (Mathematica Policy Research) and APPAM (Association for Public Policy Analysis and Management). 2012. “Allies, Adversaries, or Strange Bedfellows? The Relationship between Research, Politics, and Policy.” Webinar, Washington, DC, July 18.

NCSL (National Conference of State Legislatures). 2014. “Full- and Part-Time Legislatures.” June 1. www.ncsl.org/research/about-state-legislatures/full-and-part-time-legislatures.aspx.

Nichols, Tom. 2017. The Death of Expertise: The Campaign against Established Knowledge and Why It Matters. New York: Oxford University Press.

Obar, Jonathan A., Paul Zube, and Clifford Lampe. 2012. “Advocacy 2.0: An Analysis of How Advocacy Groups in the United States Perceive and Use Social Media as Tools for Facilitating Civic Engagement and Collective Action.” Journal of Information Policy 2: 1–25.

Oberlander, Jonathan. 2003. The Political Life of Medicare. Chicago: University of Chicago Press.

Oliver, Thomas R. 1993. “Analysis, Advice, and Congressional Leadership: The Physician Payment Review Commission and the Politics of Medicare.” Journal of Health Politics, Policy and Law 18, no. 1: 113–73.

Oliver, Thomas R., and Pamela A. Paul-Shaheen. 1997. “Translating Ideas into Actions: Entrepreneurial Leadership in State Health Care Reforms.” Journal of Health Politics, Policy and Law 22, no. 3: 721–89.

Olson, Mancur. 1965. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press.

Oreskes, Naomi. 2004. “The Scientific Consensus on Climate Change.” Science 306: 1686.

Parsons, Wayne. 2002. “From Muddling Through to Muddling Up—Evidence Based Policy-Making and the Modernisation of British Government.” Public Policy and Administration 17, no. 3: 43–60.

Patashnik, Eric M., and Justin Peck. 2017. “Can Congress Do Policy Analysis?” In Does Policy Analysis Matter?, edited by Lee S. Friedman, 85–154. Oakland: University of California Press.

Pawson, Ray. 2006. Evidence-Based Policy: A Realist Perspective. Thousand Oaks, CA: Sage.

Peterson, Mark A. 1990. Legislating Together: The White House and Capitol Hill from Eisenhower to Reagan. Cambridge, MA: Harvard University Press.

Peterson, Mark A. 1995. “How Health Policy Information Is Used in Congress.” In Intensive Care: How Congress Shapes Health Policy, edited by Thomas E. Mann and Norman J. Ornstein, 79–125. Washington, DC: American Enterprise Institute and Brookings Institution.

Peterson, Mark A. 1997. “The Limits of Social Learning: Translating Analysis into Action.” Journal of Health Politics, Policy and Law 22, no. 4: 1077–1114.

Peterson, Mark A. 2017. Hardball Politics, Hobbled Policy: Contexts, Choices, and Consequences in U.S. Health Reform. Manuscript in preparation.

Peterson, Mark A., Arleen A. Leibowitz, and A. J. King. 2017. “The HIV Policy Chain: Ensuring the Development of Evidence-Based HIV Policies.” Unpublished manuscript.

Quirk, Paul J., and Sarah A. Binder. 2005. “Introduction: Congress and American Democracy: Institutions and Performance.” In Institutions of American Democracy: The Legislative Branch, edited by Paul J. Quirk and Sarah A. Binder, xix–xxix. New York: Oxford University Press.

Rich, Andrew. 2005. Think Tanks, Public Policy, and the Politics of Expertise. New York: Cambridge University Press.

Roberts, Nancy C., and Paula J. King. 1996. Transforming Public Policy: Dynamics of Policy Entrepreneurship and Innovation. San Francisco: Jossey-Bass.

Rosenthal, Alan. 2009. Engines of Democracy: Politics and Policy Making in State Legislatures. Washington, DC: CQ.

Rosenthal, Alan. 2013. The Best Job in Politics: Exploring How Governors Succeed as Policy Leaders. Washington, DC: CQ.

Sabatier, Paul, and David Whiteman. 1985. “Legislative Decision Making and Substantive Information: Models of Information Flow.” Legislative Studies Quarterly 10, no. 3: 395–421.

Schick, Allen. 1991. “Informed Legislation: Policy Research versus Ordinary Knowledge.” In Knowledge, Power and the Congress, edited by William H. Robinson and Clay H. Wellborn, 99–119. Washington, DC: Congressional Quarterly.

Schur, Claudia L., Marc L. Berk, Lauren E. Silver, Jill M. Yegian, and Michael J. O'Grady. 2009. “Connect the Ivory Tower to Main Street: Setting Research Priorities for Real-World Impact.” Health Affairs 28, no. 5: w886–w899.

Shor, Boris, and Nolan McCarty. 2011. “The Ideological Mapping of American Legislatures.” American Political Science Review 105, no. 3: 530–51.

Skocpol, Theda. 1997. Boomerang: Health Care Reform and the Turn against Government. New York: Norton.

Skocpol, Theda, and Vanessa Williamson. 2013. The Tea Party and the Remaking of Republican Conservatism. New York: Oxford University Press.

Smith, David G. 1992. Paying for Medicare: The Politics of Reform. New York: Aldine de Gruyter.

Squire, Peverill. 2012. The Evolution of American Legislatures: Colonies, Territories, and States, 1619–2009. Ann Arbor: University of Michigan Press.

Stone, Deborah A. 2012. Policy Paradox: The Art of Political Decision Making. 3rd ed. New York: Norton.

Tinetti, Mary E. 2012. “The Retreat from Advanced Care Planning.” Journal of the American Medical Association 307, no. 9: 915–16.

Walter, Isabel, Sandra M. Nutley, and Huw O. T. Davies. 2005. “What Works to Promote Evidence-Based Practice? A Cross-Sector Review.” Evidence and Policy 1, no. 3: 355–64.

Weaver, R. Kent. 1989. “The Changing World of Think Tanks.” PS: Political Science and Politics 22: 563–78.

Weaver, R. Kent. 2000. Ending Welfare as We Know It. Washington, DC: Brookings Institution.

Weaver, R. Kent, and Bert A. Rockman, eds. 1992. Do Institutions Matter? Government Capabilities in the United States and Abroad. Washington, DC: Brookings Institution.

Weiss, Carol H. 1979. “The Many Meanings of Research Utilization.” Public Administration Review 39, no. 5: 426–31.

Weiss, Carol H. 1989. “Congressional Committees as Users of Analysis.” Journal of Policy Analysis and Management 8, no. 3: 411–41.

Weiss, Carol Hirschon. 1999. “The Interface between Evaluation and Public Policy.” Evaluation 5, no. 4: 468–86.

Weissert, Carol S., and William G. Weissert. 2000. “State Legislative Staff Influence in Health Policy Making.” Journal of Health Politics, Policy and Law 25, no. 6: 1121–48.

Weissert, William G., and Carol S. Weissert. 2012. Governing Health: The Politics of Health Policy. 4th ed. Baltimore: Johns Hopkins University Press.

Westen, Drew. 2007. The Political Brain: The Role of Emotion in Deciding the Fate of the Nation. Cambridge, MA: Public Affairs.

White, Kathleen M., and Sharon Dudley-Brown. 2011. Translation of Evidence into Nursing and Health Care Practice. New York: Springer Publishing.

Whiteman, David. 1995. Communication in Congress: Members, Staff, and the Search for Information. Lawrence: University Press of Kansas.

Wildavsky, Aaron. 1979. Speaking Truth to Power. Boston: Little, Brown.