Abstract
If the words “our product is doubt” characterize the production of ignorance, the production of nonproblems could in turn be encapsulated by “our product is silence.” This article looks behind these words and this concept of nonproblems, drawing attention to public policy mechanisms whose effect (whether explicitly intended or not) is to reduce the attention paid to a given problem, resulting in public inaction, a failure to take charge of the problem. It highlights the role played in these dynamics by two factors: the scientific instruments that attempt to quantify environmental and occupational health issues, and the scientific expertise involved in the regulation of chemicals. Downstream of these regulatory processes, the use of science-based regulatory instruments implicitly steers regulatory policies in a direction that results in tolerance of certain risks (rendering them acceptable) and can lead to public inaction.
The phrase “Our product is doubt” has become emblematic of industrial strategies aimed at generating ignorance to avoid the production of scientific knowledge about the dangers of toxic products like tobacco (Proctor 2011). It echoes the title of a book that has played an important role in the public denunciation of such strategies (Michaels 2008), setting the standard for much thinking about ignorance not only in science studies but also in the social sciences more broadly (Oreskes and Conway 2010). As work on ignorance continues to develop (Gross and McGoey 2023), this article aims to draw attention to logics that, while similar, diverge from the logics of doubt production and ignorance, which are restricted to the production (or nonproduction) of knowledge. It proposes looking at mechanisms operating at other stages of public policy, which are increasingly important in strategies designed by industry or public decision-makers to avoid certain issues (Henry et al. 2021).
This article focuses on what we might term the production of nonproblems, which could in turn be encapsulated by the words “Our product is silence” (Henry 2021). It looks behind these words and this concept, drawing attention to public policy mechanisms whose effect (whether explicitly intended or not) is to reduce the attention paid to a given problem, resulting in a form of public inaction, a failure to take charge of the problem. Within research on the construction of public problems and public policy, this work is interested not in the most frequently analyzed mechanisms, those that lead to the emergence of public problems and their agenda setting by political institutions (Spector and Kitsuse 1977; Cobb and Elder 1972), but rather in their less visible flip side. This study extends work on nondecision, which has stressed the importance of paying as much attention to decisions that are not made, and policies that are not implemented, as we do to those that are (Bachrach and Baratz 1962). Beyond this opposition between decision and nondecision, this article seeks to understand how some forms of public intervention and some instruments used to implement public policies lead to forms of inaction—and must be analyzed as such.
The construction of nonproblems draws on a variety of logics, but in this article I would like to focus on factors related to scientific knowledge and the forms of ignorance linked to it. During mobilizations and processes of constructing public problems, the technical and scientific construction of an issue can become an obstacle for social movements. This is also the case when a lack of knowledge about an issue makes it difficult to establish the connections between a problem and its causes that are often the first step in defining a situation as a public problem (Felstiner, Abel, and Sarat 1980). In the analysis of public inaction, the scientific and technical dimensions are therefore often incorporated into the policy instruments used to implement public policies. These instruments embed a definition of the issues that prioritizes certain dimensions while obscuring others (Lascoumes and Le Galès 2007). This article aims to understand how the scientific tools used to define some issues also contribute to defining their low level of priority.
The first part of this article focuses on forms of knowledge produced by scientists themselves to help provide fresh information to policymakers. In order to quantify certain health issues that are difficult to measure, scientists attempt to build instruments designed to overcome those difficulties. In some cases, however, these instruments serve only to extend an ignorance that scientists seem incapable of overcoming, conferring scientific credibility on the perpetuation of that very ignorance. The second part looks at how the regulation of chemicals constitutes a space in which transfer and dialogue between science and policy are at their most regular—and this has specific effects on scientific knowledge production and public policies. Both the scientific knowledge that industry produces and the forms of ignorance it induces play a major role in the expertise phase of regulation; in the process, the industrial origins of this knowledge are whitewashed and the knowledge thus acquires fresh legitimacy. Downstream of these regulatory processes, the use of science-based regulatory instruments (such as limit values) implicitly steers regulatory policies in a direction that results in tolerance of certain risks (rendering them acceptable), and this can lead to forms of public inaction.
Beyond the Ignorance-Knowledge Boundaries: Population-Attributable Fractions in Epidemiology
Some studies focusing on the production of ignorance have sought to highlight how regulatory processes may be based not just on the absence of knowledge (Rayner 2012; McGoey 2012) but also on specific forms of knowledge (Kleinman and Suryanarayanan 2013). Understood in this broader sense, ignorance refers not simply to an absence of knowledge but to specific forms of knowledge. Suryanarayanan and Kleinman showed that the epistemic forms of toxicology promoted by pesticide producers focused research on some risk types (often the most immediate) while maintaining forms of uncertainty on others—particularly risks that are either longer-term or at lower doses (which are always more difficult to measure) (Suryanarayanan and Kleinman 2017). This work is quite similar to that of Scott Frickel, who showed the institutionalized ignorance resulting from post-Katrina pollution risk assessments in New Orleans (Frickel and Edwards 2014).
Attempts to measure occupational health outcomes shed new light on these issues.1 Though epidemiology seems the discipline most likely to quantify the burden of occupational disease, it actually meets this challenge only partially, as I will show using the example of occupational (and indirectly environmental) cancers. Their quantification is the subject of enduring controversy, both in France and internationally. Historically, workers exposed to high doses of toxic substances have been especially subject to observation by epidemiologists. William Hueper, a pioneer of the discipline, played an important role in the development of occupational epidemiology in the United States from the 1930s onward and was the first director of the Environmental Cancer Section at the National Cancer Institute, from 1948 to 1964 (Sellers 1997a). In fact, “With the notable exception of tobacco smoking, most of the other carcinogens that were recognized during the 19th to mid-20th centuries were discovered through [studies of workers]” (Loomis et al. 2018: 593). However, since the 1950s, the development of risk factor epidemiology has led to the expansion of new kinds of investigation (notably cohort studies) that are better suited to the measurement of risks affecting very large populations (such as tobacco-, alcohol-, or diet-related diseases) than to occupational risks, which are confined to specific groups. Epidemiologists have referred to this shift as the beginning of “modern epidemiology” (Rothman, Greenland, and Lash 2012), which promoted the idea that the main role of epidemiology was to identify the risk factors that affected the health of a large proportion of a population (Aronowitz 1998; Berlivet 2005; Giroux 2013).
Thus, despite the criticism leveled at it, the main method for quantifying occupational cancers gradually became one in which scientists calculated the fractions of cancers attributable to known occupational carcinogens in a population. Rather than developing studies based on the places where workers are actually exposed, this approach relies on quantitative evaluation drawing indirectly on knowledge produced by other epidemiological studies. To calculate this population-attributable fraction (PAF), two elements must be known: the relative risk associated with exposure to a product (i.e., how much more frequently a disease occurs among people exposed to the product than among a comparable unexposed population) and the prevalence of exposure within a population (i.e., the proportion of people in the population who are exposed to this product) (Counil and Henry 2019). In the 1970s, when this method was becoming institutionalized and environmental cancers were central public concerns, some actors tried to draw attention to the burden of environment- and work-related cancers. These debates happened in both scientific and political contexts and led to major controversies. For example, one controversy in the United States began with a report (issued by several scientists from different federal health agencies) which stated that at least 20 percent of all cancers were work related (Bridbord et al. 1978). This report, referred to as the OSHA Paper, led unions and environmental activists to mobilize, prompting debates about the levels of occupational cancers and fueling the scientific controversy (Proctor 1995; Jasanoff 1990: 29–32). To put an end to the dispute, Congress asked two preeminent epidemiologists, Richard Doll and Richard Peto, to write a report on the causes of cancers. Their report set the figure for the proportion of cancers attributable to work at 4 percent, a statistic that became a permanent fixture in the epidemiological community (Doll and Peto 1981).
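To make the logic of this instrument explicit, the simplest version of the calculation, Levin's classic formula for a single binary exposure, combines the two quantities mentioned above as follows (published studies typically rely on more elaborate variants, but the underlying reasoning is the same):

\[
\mathrm{PAF} = \frac{p\,(\mathrm{RR} - 1)}{1 + p\,(\mathrm{RR} - 1)}
\]

where p is the prevalence of exposure in the population and RR the relative risk. With purely illustrative figures, if 10 percent of the population is exposed (p = 0.1) and the relative risk is 2, the formula attributes roughly 9 percent of cases to that exposure. What the formula cannot say, however, is anything about exposures for which p or RR has never been measured, which is precisely the issue discussed below.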
Though the debate is far less heated in France, disparities persist in how the role played by occupational and environmental factors in the occurrence of cancers is evaluated. Some scientific studies minimize the role of occupational and environmental factors by stressing other factors (such as alcohol, tobacco, or diet)—which do indeed cause a greater number of cancers. One report, published by the International Agency for Research on Cancer (IARC) and disseminated in French in a short version by the Académie des sciences in 2007, stressed the role of tobacco and alcohol while significantly downplaying occupational and environmental factors by limiting the study to classified carcinogens and making the most restrictive assumptions (World Health Organization and International Agency for Research on Cancer 2007). French specialists in occupational and environmental risks criticized this report, arguing that “the report and subsequent public communication suggest that environmental and occupational risks are negligible contributors to cancer” (Salines et al. 2007: 424; see also Goldberg and Imbernon 2008).
In 2015, the French National Cancer Institute asked the IARC to carry out a new study (ultimately published in 2018). Its goal was to forestall criticism by developing a more in-depth methodological approach that would prove more difficult to challenge. As before, this was a general study evaluating the fraction of cancers attributable to avoidable causes (Soerjomataram et al. 2018). This time, however, every cause of cancer was addressed by a specific group of experts and reported in a dedicated scientific article. In the study devoted to the evaluation of occupational cancers, the results were quite similar to those of the previous study, with 2.3 percent of all cancers attributed to occupational exposure (3.9 percent in men and 0.4 percent in women) (Marant Micallef et al. 2019). For several reasons, these figures provoked neither scientific nor public debate; nevertheless, it is important to understand how they were produced.
Even though this was one of the most comprehensive studies to date, its results reveal how the knowledge produced on these issues can be analyzed as a kind of ignorance. To calculate an attributable fraction, there must be a causal relationship between exposure to a product and the occurrence of cancer. A scientifically established convention is to limit such calculations to products whose carcinogenicity has been classified by the IARC (in most cases as a confirmed carcinogen, Group 1, though some other kinds of publication also include probable carcinogens, Group 2A). While the agents classified by the IARC are relatively few compared to the number of products used in industry (118 in Group 1 and 78 in Group 2A at the time of publication of the 2018 study), only twenty-three of the Group 1 carcinogens were suitable for use by the scientists in their study. Forty-eight Group 1 agents were excluded because they are not present in the workplace; all of the remaining agents (forty-seven) were excluded for lack of data that would allow an attributable fraction to be calculated (Marant Micallef et al. 2018). The twenty-three agents that were deemed suitable are agents that have been known to be carcinogenic for several decades—such as asbestos, benzene, silica, and other products used in the chemical industry since the early twentieth century (Marant Micallef et al. 2019). Thus, what the IARC study presented as the number of cancers linked to occupational exposures, estimated at 2.3 percent of all cancers, is in fact only the number of cancers linked to the twenty-three products that are definitely carcinogenic to humans and for which the exposure data and relative risks are well documented scientifically: a drop in the ocean among the tens of thousands of chemicals used daily in industry.
Beyond Nondecisions: Regulation as a Laundering Area for Industrial Ignorance and a Way of Legitimizing Public Inaction
The study of the regulatory apparatus for occupational and environmental hazards shows the extent to which both ignorance and industrial knowledge are part and parcel of the functioning of these regulatory systems (Boullier and Henry 2022). The entire regulatory system is based on knowledge that incorporates the ignorance that has become necessary to the implementation of public policy. Ignorance about the properties of products can no longer be considered exceptional; on the contrary, it has become the norm for the governance and administration of dangerous substances. Contrary to common perceptions, the regulation of chemicals is not aimed exclusively at protecting consumers or workers; rather, it seeks to strike a balance between protecting public health and protecting economic interests (Vogel 2013). During the final months of negotiations on the main European regulation of chemicals, the REACH regulation (Registration, Evaluation, Authorisation, and Restriction of Chemicals), eventually adopted in 2006, this could be seen in the intense maneuvering by industry around the European Commission and the French, British, and German governments (Corporate Europe Observatory 2005). Industrial influence is therefore reflected in the regulations adopted, both in how health data are obtained (which ultimately makes it possible to avoid controlling the use of the toxic molecules that are at the heart of economic activities) and in the various forms of constraint imposed on the experts responsible for carrying out risk assessments—in particular their dependence on studies produced and financed by industry.
The main innovation introduced by the REACH regulation was to oblige manufacturers to submit data concerning the toxicity of the products they want to bring to market: “No data, no market.” However, attempts made by regulatory agencies to centralize industrial data reveal limitations so significant that the ability of this regulation to achieve its objectives must be called into question. For example, it is not mandatory for manufacturers to submit all the data from the studies they have produced. They are only required to submit their summaries; this makes it difficult (if not impossible) for regulatory agencies to scientifically evaluate the quality of these studies and their results. A recent study by a German federal institute in the field of risk assessment has shown that a third of the dossiers submitted for the chemicals most widely used in Europe are not compliant with the REACH regulation, due to “missing data,” “undocumented scientific validity of the model,” or “lack of justification for the tests” (Oertel et al. 2018). Ultimately, these large collections of data arising out of industrial studies also serve to drown out relevant information in a continuous stream of information of all kinds, which contributes to producing a specific kind of ignorance linked to this profusion of knowledge.
Furthermore, numerous investigations have shown that industry directly influences the work of regulatory expertise, insofar as it is responsible for producing and/or funding the studies used to conduct risk assessments. This situation, which is widely acknowledged as biasing the results of clinical studies in the pharmaceutical sector (Sismondo 2018), is equally problematic (though less familiar) in the chemical sector. Ethnographic observation of the European expert committee responsible for assessing the risks associated with these products and setting occupational exposure limits has made it possible to see the impact of industrial knowledge on the expertise process.2 This was particularly true of formaldehyde—a product widely used across various industrial sectors. While several thousand scientific articles have established the various dangers linked to this product, discussions among the experts responsible for setting exposure limit values for this toxic substance during the 2000s essentially focused on two studies—both directly financed by industry—which exposed first twenty-one and then forty-one human volunteers in exposure chambers to a range of formaldehyde doses. These studies, developed precisely to influence the work of EU experts, ultimately played a role that far exceeded their significance in the scientific literature. However, if one reads the final report of this expert committee, one sees more than two hundred scientific studies cited in support of the proposed new limit value (European Commission et al. 2017). It is impossible to differentiate the studies according to their mode of financing or their more or less central role in the work of the expert group. Only ethnographic observation reveals the pivotal role given to industry-funded ad hoc experimental studies (which are expensive and often impossible to contradict) as a way of preventing the adoption of protective regulations that industry considers too costly to implement. The European expert committee's reliance on data from industry-funded work gives industry science a central place in the regulatory process. Yet once these data are published in the final expert report, their industrial origin all but disappears: it is mentioned only in the acknowledgments of the original scientific article (Mueller, Bruckner, and Triebig 2013). What we are witnessing, then, is a change in the status of these data; in becoming an integral part of the regulatory process, they are also granted the same legitimacy as all the other scientific studies used in the assessment.
The narrowness of chemical regulation can also be seen in the types of instruments used to implement occupational exposure regulation. Indeed, the occupational exposure limit values set by these expert committees are emblematic of a type of public intervention whose effects are long-standing but poorly understood and underinvestigated by social scientists. Their use in the field of occupational risks became widespread in the second half of the twentieth century as a means of controlling chemical exposures in the workplace—either via state regulations or through guideline values used by employers. Recourse to this type of instrument is possible only because a specific discipline set itself the goal of measuring the effects of chemical pollutants on human health. At the start of the twentieth century, toxicology was mainly concerned with occupational exposures; because these exposures were at fairly high levels at the time, their effects on exposed workers could easily be observed (Sellers 1997b). One of the most widely used lists of such limit values today is that of the American Conference of Governmental Industrial Hygienists (ACGIH), a professional association of industrial hygienists in the United States.
Properly applied, limit values are relatively effective in protecting against immediate reactions (intoxication, immediate skin or respiratory reactions, and so on), but this is not the case for toxins with longer-term effects and/or effects at lower doses. In the case of carcinogens, for which it is impossible to determine a threshold below which there would be no risk, limit values can only decrease, not eradicate, the number of cancers that will occur in an exposed population. For that population, risk persists at exposures below the existing regulatory values. The scientific data used to establish these values theoretically anticipate the number of cancers expected at a given exposure level. Such values tolerate workers’ exposure to known carcinogens rather than protect workers from it. Because the accepted risk is calculated as a number of cancers per one thousand, ten thousand, or one hundred thousand workers, the establishment of certain limit values may maintain high levels of risk: thus, the lowering of the occupational exposure limit value for hexavalent chromium in France in 2012 (from 50 to 1 μg/m3) maintains a cancer risk of 1 percent among exposed workers. This is an extremely high risk: assuming exposure over an entire career, one worker in one hundred exposed at this level will develop lung cancer (Agence nationale de sécurité sanitaire de l'alimentation, de l'environnement et du travail 2011).
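To give a sense of what such an “acceptable” level of risk implies in practice, the figure can be translated directly into expected case counts; the cohort size below is purely hypothetical, while the 1-in-100 excess lifetime risk is the one cited above:

\[
\text{expected excess cancers} = \text{excess lifetime risk} \times \text{number of workers exposed} = 0.01 \times 10{,}000 = 100
\]

In other words, among every ten thousand workers exposed at the regulatory limit over a full career, on the order of one hundred additional lung cancers are to be expected, even though these exposures fall entirely within the rules.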
The long-term effect of the use of limit values is to legitimize exposures which (because they occur below these regulatory values) appear to be covered by the regulations. Because these exposures appear within a regulatory framework that grants them a certain normality, it is more difficult for workers and their representatives to criticize them. Furthermore, the technical nature of the subject (and the scientific knowledge required to debate the level at which these values are set) is beyond the reach of union representatives or other activists. Unless they can mobilize people capable of engaging with scientific arguments, they are forced to rely on scientific experts whose knowledge is shaped by industry. While the choice of limit values must be informed by scientific data, those values remain social and political compromises between different interests. More broadly, even though the use of thresholds is increasingly being called into question, it has become an ever more common modality in toxic risk management. Some dangers have no threshold, as we have seen in the case of carcinogens. The effects of some products, such as endocrine disruptors, can even be greater at low doses in certain situations or at certain biological moments (childhood, pregnancy, and so on). Nonetheless, these thresholds and limit values continue to be used in a wide variety of areas because they allow certain products that are essential to industry to avoid being banned.
• • • • •
Research on ignorance does not exhaust the questions raised by the social sciences regarding the relationship between science, the state, industry, and public policy, and new avenues of research must be opened in order to renew these questions from the point of view of public policy research. The mechanisms highlighted in this article illustrate how the logics of knowledge and ignorance production have extensions in the construction of public problems and public policies. I have first shown how certain forms of knowledge, based on specific methods and limited data, reduce the dimensions of a problem and contribute to its nonemergence. I have then shown how the use of science-based policy instruments that rely on industrial data leads to forms of public intervention that reduce the scope of a problem. Bringing together these two research perspectives, on ignorance and on nondecision, leads us to revisit the question of the weight of economic actors and lobbying in public policymaking. Focusing on the structural and long-lasting consequences of inequalities enables us to look beyond issues of individual deviance or corruption and to understand why these mechanisms for constructing nonproblems are so remarkably effective. Seeking to produce social silence on an issue over the long term is often far more effective than openly opposing the proponents of a reform during controversies.
Notes
1. These reflections on attributable fractions in epidemiology are based on an ongoing research project carried out jointly with Emilie Counil, an epidemiologist at the French Institute for Demographic Studies.
2. I was invited as an observer by the chairman of the European Scientific Committee on Occupational Exposure Limits during the years 2014–15 and was able to attend and observe its meetings during this period.