Abstract
With the release of large language models such as GPT-4, the push for regulation of artificial intelligence has accelerated the world over. Proponents of different regulatory strategies argue that AI systems should be regulated like nuclear weapons posing catastrophic risk (especially at the frontiers of technical capability); like consumer products posing a range of risks for the user; like pharmaceuticals requiring a robust prerelease regulatory apparatus; and/or like environmental pollution to which the law responds with a variety of tools. This thinkpiece outlines the shape and limitations of particular analogies proposed for use in the regulation of AI, suggesting that AI law and policy will undoubtedly have to borrow from many precedents without committing to any single one.
With the release of large language models (LLMs) such as GPT-4—systems that are sometimes labeled as “foundation models” or “frontier systems” and/or marketed as “generative AI”—the push for regulation of AI has accelerated the world over. To be sure, the law already has its grip on AI to some degree. Existing public law applies to entities that break the law using an AI tool (FTC 2023). Private contract law allocates responsibility for AI risks through the supply chain. Copyright law governs both the use of copyrighted material as training data and the copyrightability of generative AI outputs (Zirpoli 2023). But how the law grows from these applicable foundations to new legal affordances will depend on something else—quite possibly on which analogies to other regulatory regimes are most persuasive. As I write in October 2023, there is a global competition of ideas for the best regulatory analogy to apply to AI systems. Proponents of different regulatory strategies argue that AI systems should be regulated like nuclear weapons posing catastrophic risk (especially at the frontiers of technical capability); like consumer products posing a range of risks for the user; like pharmaceuticals requiring a robust prerelease regulatory apparatus; and/or like environmental pollution to which the law responds with a variety of tools.
As this thinkpiece will demonstrate, none of these analogies is exactly apt because of the particular character and usages of the technologies in question. Yet each can do significant work in setting the direction of AI regulation. Whatever analogy a regulator adopts will shape how these systems function and who bears the costs of their direct and indirect harms. The application of regulatory analogies depends on what harms and risks are in scope. In what follows, this thinkpiece outlines the shape and limitations of particular analogies proposed for use in the regulation of AI, suggesting that AI law and policy will undoubtedly have to borrow from many precedents without committing to any single one.
Perceived Existential Risk: Analogies to Nuclear Weapons
While harms to dignity, equality, autonomy, and democracy have been central to policy examination of AI for some time, influential technologists such as Stuart Russell and Peter Norvig (2016) and, more recently, the signatories of a much-publicized letter urging a six-month pause on larger AI models (“Future of Life” 2023) have, for a number of reasons, focused attention on the specter of existential risk to human civilization (see also Lauren M. E. Goodlad and Matthew Stone's introduction to this special issue). This is the risk that an AI system will escape from the control of human beings and begin to achieve its “own” objectives (whether because such objectives were inadvertently incentivized by design or because they emerged from the complex and opaque web of AI system connections) (Russell and Norvig 2016). Freed from human control, such rogue technology might then optimize for particular goals even if that meant large-scale destruction of human lives and/or habitats. Purveyors of the existential risk theory of AI have analogized the risk of AI to that of nuclear weapons, using the comparison to recommend international coordination around AI containment and licensing (Marcus and Reuel 2023; Widder, West, and Whittaker 2023).
Scholars have criticized this notion of extreme risk for distracting attention from already existing harms and for its associations with neo-eugenic ideas (e.g., Gebru et al. 2023; Torres 2023; Hanna and Bender 2023). Even if AI posed civilization-ending threats commensurate with those of nuclear weapons, data-intensive computational systems diverge from nuclear weapons in their material and organizational characteristics. AI technology is much more diffuse and decentralized than is nuclear technology. Although the training of LLMs and other data-intensive technologies requires massive data sets and computation (also known as “compute”), these resources are much more plentiful than enriched uranium. Thus, whereas few people have the capacity to build nuclear weapons, many more people—and potentially automated systems themselves—can build (or replicate) large models.
Another difference between AI and nuclear technology is that, in the current state of the art, private entities and not governments are the principal actors producing the best-known AI models. Many, like OpenAI's GPT-4, are then integrated into a wide range of products that adapt the underlying models for specialized purposes. Some versions of these models are to various degrees open-source, meaning that they have been developed in a collaborative process reflecting the input of many technologists and under the control of none. Hence, even if concerns over rogue AI are justified, a slow-moving international regulatory effort is unlikely to successfully control the development and release of foundation models, much less their downstream derivatives. Among the most important early efforts to regulate the design and development of large-scale and highly capable AI models is the Biden administration's executive order on “safe, secure, and trustworthy development and use of artificial intelligence,” which requires certain foundation model developers to provide the government with information prior to model release.1
Existing Harms and Risks: Analogies to Products, Pharmaceuticals, and Environment
The immediate harms of large models are not potential human extinction but rather actually existing harms that degrade human and planetary flourishing.2 Present risks inspire different kinds of regulatory analogies. Importantly, actually existing risks can be further divided into those that are the result of improper system design—for example, unintended discrimination against groups of people—and those that are the predicted negative externalities of best-in-class AI system design—for example, excessive use of water and energy resources, as well as job displacement.
One of the most influential analogies in the field of AI regulation so far is to product safety regulation. Here, the focus is on the risks of inappropriate or defective system design and implementation. The EU AI Act treats AI systems as products and follows a risk-based approach, imposing requirements on AI systems according to the risks they are presumed to pose (Kaminski 2021). Systems that are considered high-risk (based on assumptions about their deployment contexts) must be tested and examined before being released to lessen the risk of harm. These would include systems developed for use in educational, transportation, financial, or health care settings that are likely to impact individual life opportunities. Other systems—for example, a song recognition system—are subject to requirements for making certain characteristics transparent and open to outside inspection (European Parliament 2024). The EU plans to pair these regulatory requirements with a liability scheme that will assign responsibility for harms caused by AI systems to various actors in the value chain (Hacker 2022). In effect, this regulatory regime treats AI systems as products whose safety can be ensured by regulators ex ante, with residual risk handled through private law after harms occur.
The challenge to this analogy is that, unlike a toaster or toy, an AI system and its component parts (e.g., data sets, software architectures, learning algorithms, user prompts) are not tractable “things.” They are not stable: they may adapt in response to new inputs, new rounds of training, “fine-tuning,” or human reinforcement. Standards of acceptable performance (including vulnerability to malicious use) are difficult to come by. That is especially true for LLMs, which are referred to as “foundation models” in part because their downstream uses and risks are not fully knowable (Bommasani et al. 2022). EU regulators have addressed LLMs in the EU AI Act in light of the reality that developers cannot predict all of their deployment contexts (European Parliament 2024).
To be sure, product safety objectives are relevant to AI systems, especially those with narrowly defined uses such as decision-making systems intended to identify benefits fraud or assess credit risk. Such systems can in theory be improved with developer and deployer due diligence, but progress so far has been slow (e.g., Raji et al. 2022). Most promisingly, required processes around transparency and explainability could increase the tractability of these systems. However, given the comparative plasticity of LLMs, a regulatory apparatus designed to prescreen for harms in special-purpose algorithmic products through a mandated regime of audits and assessments is unlikely to achieve the same level of harm reduction that product safety screenings have.
Another preclearance regulatory analogy comes from the domain of pharmaceutical products. Commentators such as Andrew Tutt (2016) have urged that technology developers should submit their AI models and data sets for rigorous and compulsory evaluation. Just as drugs undergo extensive clinical trials to assess their safety and efficacy, so AI systems should be subject to thorough testing and validation. By analogy, the deployers of these systems are like the prescribing physicians. Transparency, accountability, and independent oversight by some sort of government agency should give deployers the confidence that the systems in their hands will not cause harm (so long as they are using the systems “on label”). Aspects of this paradigm shape proposals that AI developers should submit to a licensing procedure (Microsoft 2023; OpenAI 2023).
The analogy captures the delicate risk assessments required for new automated technologies, as for new drugs, with public safety possibly weighing on either side of the balance when it comes to granting permission for an innovation. A regulator that moves too slowly could stall the introduction of life-saving technology. But a regulator that moves too fast could harm people and lose trust. Another similarity between AI and pharmaceutical regulation is that it could take a catastrophe before the regulatory system takes shape. The impetus for the creation of the Food and Drug Administration in the United States was a series of public health crises in the late nineteenth and early twentieth centuries, including the widespread sale of adulterated and mislabeled products (see, e.g., Carpenter 2010). These calamities reduced public tolerance for “permissionless innovation.”
While the pharmaceutical and AI industries may share certain characteristics, including the possibility of causing widespread harm, the nature of AI development remains significantly different from that of modern pharmaceutical development. The barriers to entry for many systems are lower and the means of production more diffuse. There may be, as of this writing, only a handful of proprietary large language models, but there are countless applications built on top of these “foundations” and hundreds of open-source models of various kinds. All may at some point create harms or risks of some unforeseen kind. It is hard to imagine a regulator being able to control the release of systems by any but the most compliant developers.
Perhaps the most apt regulatory analogy for AI systems comes from the complex field of environmental regulation. Indeed, analogies to “poison” and “collapse” are entering the discourse around advanced AI systems (see also Sylvie Delacroix's thinkpiece as well as Goodlad and Stone's introduction, both in this special issue). Poisoning an LLM may be an act of intentional sabotage, such as an adversarial attack on the training data or the machine learning process (Oprea, Singhal, and Vassilev 2022). More closely related to the environmental analogy would be the case of an entity inadvertently introducing poison into an LLM by training a new AI model on the “synthetic” data produced by earlier models, data that might be inaccurate or biased (Shumailov et al. 2023). One model training on the outputs of another is the equivalent of monoculture or, as one commentator colorfully put it, cannibalism (Paul 2023). AI systems training other systems create an AI ecosystem with intricate webs of cause and effect. Like a chemical that is useful for a specific purpose but causes environmental damage, an AI system may cause systemic harms even though it is performing its function as designed and as claimed. These harms would be a “negative externality” of a system that could otherwise be beneficial and fit for purpose. Many concerns about the systemic disruptions that AI systems may cause, and the associated pain for individuals and communities, connect to such negative externalities. For example, there are concerns that automated technologies, even when deployed as expected, will end employment for huge numbers of people faster than new employment patterns can be established, erode human autonomy and equality, and vest new forms of cognitive and economic power in the very few while rendering the many dependent and vulnerable (Russell 2019).
Viewed broadly, the regulation of environmental risks combines ex ante prohibitions (such as bans on toxic substances), a flexible definition of “unreasonable” harms (which emerges from ongoing case histories involving liability for damages), required performance standards to reduce expected harms (e.g., from car emissions), and socio-technical efforts to end harmful technological path dependencies (such as reliance on fossil fuels). This mix of regulatory modalities may be just what is in store for AI systems. As with environmental harms, AI risks pose challenges that cut across regulatory systems. AI policy, like environmental policy, may also implicate subsidies for alternatives, incentives for institutional change, and global cooperation.
Notes
1. Exec. Order No. 14110, 88 Fed. Reg. 75191 (November 1, 2023).
2. Researchers in this interdisciplinary arena are too numerous to name but include, for example, Cathy O'Neil (2016), Joy Buolamwini and Timnit Gebru (2019), Emily M. Bender et al. (2021), Abeba Birhane et al. (2023), and Mél Hogan (in this special issue).