Abstract

With the release of large language models such as GPT-4, the push for regulation of artificial intelligence has accelerated the world over. Proponents of different regulatory strategies argue that AI systems should be regulated like nuclear weapons posing catastrophic risk (especially at the frontiers of technical capability); like consumer products posing a range of risks for the user; like pharmaceuticals requiring a robust prerelease regulatory apparatus; and/or like environmental pollution to which the law responds with a variety of tools. This thinkpiece outlines the shape and limitations of particular analogies proposed for use in the regulation of AI, suggesting that AI law and policy will undoubtedly have to borrow from many precedents without committing to any single one.

With the release of large language models (LLMs) such as GPT-4—systems that are sometimes labeled as “foundation models” or “frontier systems” and/or marketed as “generative AI”—the push for regulation of AI has accelerated the world over. To be sure, the law already has its grip on AI to some degree. Existing public law applies to entities that break the law using an AI tool (FTC 2023). Private contract law allocates responsibility for AI risks through the supply chain. Copyright law applies to regulate the use of copyrighted material as training data and the copyrightability of generative AI outputs (Zirpoli 2023). But how the law grows from these applicable foundations to new legal affordances will depend on something else—quite possibly on which analogies to other regulatory regimes are most persuasive. As I write in October 2023, there is a global competition of ideas for the best regulatory analogy to apply to AI systems. Proponents of different regulatory strategies argue that AI systems should be regulated like nuclear weapons posing catastrophic risk (especially at the frontiers of technical capability); like consumer products posing a range of risks for the user; like pharmaceuticals requiring a robust prerelease regulatory apparatus; and/or like environmental pollution to which the law responds with a variety of tools.

As this thinkpiece will demonstrate, none of these analogies is exactly apt because of the particular character and usages of the technologies in question. Yet each can do significant work in setting the direction of AI regulation. Whatever analogy a regulator adopts will shape how these systems function and who bears the costs of their direct and indirect harms. The application of regulatory analogies depends on what harms and risks are in scope. In what follows, this thinkpiece outlines the shape and limitations of particular analogies proposed for use in the regulation of AI, suggesting that AI law and policy will undoubtedly have to borrow from many precedents without committing to any single one.

Perceived Existential Risk: Analogies to Nuclear Weapons

While harms to dignity, equality, autonomy, and democracy have been central to the policy examination of AI for some time, influential technologists such as Stuart Russell and Peter Norvig (2016) and, more recently, the signatories of a much-publicized letter urging a six-month pause on larger AI models (“Future of Life” 2023) have, for a number of reasons, focused attention on the specter of existential risk to human civilization (see also Lauren M. E. Goodlad and Matthew Stone's introduction to this special issue). This is the risk that an AI system will escape from the control of human beings and begin to pursue its “own” objectives (whether because such objectives were inadvertently incentivized by design or because they emerged from the complex and opaque web of AI system connections) (Russell and Norvig 2016). Freed from human control, such rogue technology might then optimize for particular goals even if that meant large-scale destruction of human lives and/or habitats. Purveyors of the existential risk theory of AI have analogized the risk of AI to the risk of nuclear weapons, using this analogy to recommend international coordination around AI containment and licensing (Marcus and Reuel 2023; Widder, West, and Whittaker 2023).

Scholars have criticized this notion of extreme risk for distracting from already existing harms and for its associations with neo-eugenic ideas (e.g., Gebru et al. 2023; Torres 2023; Hanna and Bender 2023). Even if AI posed civilization-ending threats commensurate with those of nuclear weapons, data-intensive computational systems diverge from nuclear weapons in their material and organizational characteristics. AI technology is much more diffuse and decentralized than is nuclear technology. Although the training of LLMs and other data-intensive technologies requires massive data sets and computation (also known as “compute”), these resources are much more plentiful than enriched uranium. Thus, whereas few people have the capacity to build nuclear weapons, many more people—and potentially automated systems themselves—can build (or replicate) large models.

Another difference between AI and nuclear technology is that, in the current state of the art, private entities and not governments are the principal actors producing the best-known AI models. Many, like OpenAI's GPT-4, are then integrated into a wide range of products that adapt the underlying models for specialized purposes. Some versions of these models are to various degrees open-source, meaning that they have been developed in a collaborative process reflecting the input of many technologists and under the control of none. Hence, even if concerns over rogue AI are justified, a slow-moving international regulatory effort is unlikely to successfully control the development and release of foundation models, much less their downstream derivatives. Among the most important early efforts to regulate the design and development of large-scale and highly capable AI models is the Biden administration's executive order on “safe, secure, and trustworthy development and use of artificial intelligence,” which requires certain foundation model developers to provide the government with information prior to model release.1

Existing Harms and Risks: Analogies to Products, Pharmaceuticals, and Environment

The immediate harms of large models are not potential human extinction but rather actually existing harms that degrade human and planetary flourishing.2 Present risks inspire different kinds of regulatory analogies. Importantly, actually existing risks can be further divided into those that are the result of improper system design—for example, unintended discrimination against groups of people—and those that are the predicted negative externalities of best-in-class AI system design—for example, excessive use of water and energy resources, as well as job displacement.

One of the most influential analogies in the field of AI regulation so far is to product safety regulation. Here, the focus is on the risks of inappropriate or defective system design and implementation. The EU AI Act treats AI systems as products and follows a risk-based approach, imposing requirements on AI systems according to the risks they are presumed to pose (Kaminski 2023). Systems that are considered high-risk (based on assumptions about their deployment contexts) must be tested and examined before being released to lessen the risk of harm. These would include systems developed for use in educational, transportation, financial, or health care settings that are likely to impact individual life opportunities. Other systems—for example, a song recognition system—are subject to requirements for making certain characteristics transparent and subject to outside inspection (European Parliament 2024). The EU plans to pair these regulatory requirements with a liability scheme that will assign responsibility for harms caused by AI systems to various actors in the value chain (Hacker 2022). In effect, this regulatory regime treats AI systems as products whose safety can be ensured ex ante by regulators, with residual risk being handled through private law after harms occur.

The challenge to this analogy is that, unlike a toaster or toy, an AI system and its component parts (e.g., data sets, software architectures, learning algorithms, user prompts) are not tractable “things.” They are not stable in that they may adapt to new inputs, new rounds of training, “fine-tuning,” or human reinforcement. Standards of acceptable performance (including vulnerability to malicious use) are difficult to come by. That is especially true for LLMs—which are referred to as “foundation models” in part because their downstream uses and risks are not fully knowable (Bommasani et al. 2022). EU regulators have addressed LLMs in the EU AI Act in light of the reality that developers cannot predict all of their deployment contexts (European Parliament 2024).

To be sure, product safety objectives are relevant to AI systems, especially those with narrowly defined uses such as decision-making systems intended to identify benefits fraud or assess credit risk. Such systems can in theory be improved with developer and deployer due diligence, but progress so far has been slow (e.g., Raji et al. 2022). Most promisingly, required processes around transparency and explainability could increase the tractability of these systems. However, given the comparative plasticity of LLMs, a regulatory apparatus designed to prescreen for harms in special-purpose algorithmic products through a mandated regime of audits and assessments is unlikely to achieve the same level of harm reduction as product safety screenings have.

Another preclearance regulatory analogy comes from the domain of pharmaceutical products. Commentators such as Andrew Tutt (2017) have urged that technology developers should submit their AI models and data sets for rigorous and compulsory evaluation. Just as drugs undergo extensive clinical trials to assess their safety and efficacy, so AI systems should be subject to thorough testing and validation. By analogy, the deployers of these systems are like the prescribing physicians. Transparency, accountability, and independent oversight by some sort of government agency should give deployers the confidence that the systems in their hands will not cause harm (so long as they are using the systems “on label”). Aspects of this paradigm shape proposals that AI developers should submit to a licensing procedure (Microsoft 2023; OpenAI 2023).

The analogy captures the delicate risk assessments required for new automated technologies, as for new drugs, with public safety possibly weighing on either side of the balance when it comes to granting permission for an innovation. A regulator that moves too slowly could stall the introduction of life-saving technology. But a regulator that moves too fast could harm people and lose trust. Another similarity between AI and pharmaceutical regulation is that it could take a catastrophe before the regulatory system takes shape. The impetus for the creation of the Food and Drug Administration in the United States was a series of public health crises in the late nineteenth and early twentieth centuries, including the widespread sale of adulterated and mislabeled products (see, e.g., Carpenter 2010). These calamities reduced public tolerance for “permissionless innovation.”

While the pharmaceutical and AI industries may share certain characteristics, including the possibility of causing widespread harm, the nature of AI development remains significantly different from the nature of modern pharmaceutical development. The barriers to entry for many systems are lower and the means of production more diffuse. There may be as of this writing only a handful of proprietary large language models, but there are countless applications built on top of these “foundations” and hundreds of open-source models of various kinds. All may at some point create harms or risks of some unforeseen kind. It is hard to imagine a regulator being able to control the release of systems by any but the most compliant developers.

Perhaps the most apt regulatory analogy for AI systems comes from the complex field of environmental regulation. Indeed, analogies to “poison” and “collapse” are entering the discourse around advanced AI systems (see also Sylvie Delacroix's thinkpiece as well as Goodlad and Stone's introduction, both in this special issue). Poisoning an LLM may be an act of intentional sabotage, such as through an adversarial attack on the training data or machine learning process (Oprea, Singhal, and Vassilev 2022). More closely related to the environmental analogy would be the case of an entity inadvertently introducing poison into an LLM by training a new AI model on the “synthetic” data produced by earlier models, which data might be inaccurate or biased (Shumailov et al. 2023). One model training on the outputs of another is the equivalent of monoculture or, as one commentator colorfully put it, cannibalism (Paul 2023). AI systems training other systems create an AI ecosystem with intricate webs of cause and effect. Like a chemical that is useful for a specific purpose but causes environmental damage, an AI system may cause systemic harms even though it is performing its function as designed and as claimed. These harms would be a “negative externality” of a system that could otherwise be beneficial and fit for purpose. Many concerns about the systemic disruptions that AI systems may cause, and the associated pain for individuals and communities, connect to negative externalities. For example, there are concerns that automated technologies that are deployed as expected will end employment for huge numbers of people faster than new employment patterns can be established, erode human autonomy and equality, and vest new forms of cognitive and economic power in the very few while rendering the many dependent and vulnerable (Russell 2019).
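
To make the recursive-training dynamic concrete, the toy sketch below (hypothetical and illustrative only; the Gaussian “model,” sample sizes, and random seed are my own assumptions rather than anything drawn from the cited studies) fits a simple statistical model to data generated by its predecessor, generation after generation, in the spirit of the degradation described by Shumailov et al. (2023).

import random
import statistics

# Toy illustration (hypothetical, not from the cited studies): each "model"
# is simply a Gaussian fitted to its training data. Generation 0 trains on
# "human" data; every later generation trains only on samples drawn from the
# previous generation's model, so estimation error compounds over time.

random.seed(0)

human_data = [random.gauss(0.0, 1.0) for _ in range(1000)]
mu, sigma = statistics.fmean(human_data), statistics.stdev(human_data)
print(f"generation 0: mean={mu:.3f}, std={sigma:.3f}")

for generation in range(1, 6):
    # The next model sees only synthetic output from its predecessor.
    synthetic = [random.gauss(mu, sigma) for _ in range(200)]
    mu, sigma = statistics.fmean(synthetic), statistics.stdev(synthetic)
    print(f"generation {generation}: mean={mu:.3f}, std={sigma:.3f}")

Because each generation is estimated from a finite synthetic sample, the fitted distribution tends to drift and narrow across generations, a rudimentary analogue of a model “forgetting” the tails of the original human-generated data.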

Viewed broadly, the regulation of environmental risks combines ex ante prohibitions (such as bans on toxic substances), a flexible definition of “unreasonable” harms (which emerges from ongoing case histories involving liability for damages), required performance standards to reduce expected harms (e.g., from car emissions), and socio-technical efforts to end harmful technological path dependencies (such as reliance on fossil fuels). This mix of regulatory modalities may be just what is in store for AI systems. As with environmental harms, AI risks can pose challenges across regulatory systems. AI policy, like environmental policy, may also impact systems of subsidy for alternatives, incentives for institutional change, and global cooperation.

Notes

1. Exec. Order No. 14110, 88 Fed. Reg. 75191 (November 1, 2023).

2. Researchers in this interdisciplinary arena are too numerous to name but include, for example, Cathy O'Neil (2016), Joy Buolamwini and Timnit Gebru (2018), Emily M. Bender et al. (2021), Abeba Birhane et al. (2023), and Mél Hogan (in this special issue).

Works Cited

Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. New York: Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922.
Birhane, Abeba, Atoosa Kasirzadeh, David Leslie, and Sandra Wachter. 2023. “Science in the Age of Large Language Models.” Nature Reviews Physics 5: 277–80. https://doi.org/10.1038/s42254-023-00581-4.
Bommasani, Rishi, et al. 2022. “On the Opportunities and Risks of Foundation Models.” Preprint, last revised July 12. https://doi.org/10.48550/arXiv.2108.07258.
Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81: 1–15. https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.
Carpenter, Daniel. 2010. Reputation and Power: Organizational Image and Pharmaceutical Regulation at the FDA. Princeton, NJ: Princeton University Press.
European Parliament. 2024. “Proposal for a Regulation of the European Parliament and of the Council on Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts” (COM(2021)0206–C9-0146/2021–2021/0106(COD)). March 13.
Future of Life. 2023. “Pause Giant AI Experiments: An Open Letter.” March 22. https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
FTC (Federal Trade Commission). 2023. “FTC Chair Khan and Officials from DOJ, CFPB, and EEOC Release Joint Statement on AI.” April 25. https://www.ftc.gov/news-events/news/press-releases/2023/04/ftc-chair-khan-officials-doj-cfpb-eeoc-release-joint-statement-ai.
Gebru, Timnit, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell. 2023. “Statement from the Listed Authors of Stochastic Parrots on the ‘AI Pause’ Letter.” DAIR Institute, March 31. https://www.dair-institute.org/blog/letter-statement-March2023/.
Hacker, Philipp. 2022. “The European AI Liability Directives: Critique of a Half-Hearted Approach and Lessons for the Future.” Preprint, last revised November 25. http://dx.doi.org/10.2139/ssrn.4279796.
Hanna, Alex, and Emily M. Bender. 2023. “AI Causes Real Harm. Let's Focus on That Over the End of Humanity Hype.” Scientific American, August 12. https://www.scientificamerican.com/article/we-need-to-focus-on-ais-real-harms-not-imaginary-existential-risks/.
Kaminski, Margot. 2023. “Regulating the Risks of AI.” Boston University Law Review 103: 1347–411. https://www.bu.edu/bulawreview/files/2023/11/KAMINSKI.pdf.
Marcus, Gary, and Anka Reuel. 2023. “The World Needs an International Agency for Artificial Intelligence, Say Two AI Experts.” Economist, April 18. https://www.economist.com/by-invitation/2023/04/18/the-world-needs-an-international-agency-for-artificial-intelligence-say-two-ai-experts.
Microsoft. 2023. “Governing AI: A Blueprint for the Future.” May 25. https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW14Gtw.
O'Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Lane.
OpenAI. 2023. “Moving AI Governance Forward.” June 23. https://openai.com/index/moving-ai-governance-forward/.
Oprea, Alina, Anoop Singhal, and Apostol Vassilev. 2022. “Poisoning Attacks against Machine Learning: Can Machine Learning Be Trustworthy?” Computer (IEEE), online. https://doi.org/10.1109/MC.2022.3190787.
Paul, Kari. 2023. “Robot Takeover? Not Quite. Here's What AI Doomsday Would Look Like.” Guardian, June 3. https://www.theguardian.com/technology/2023/jun/03/ai-danger-doomsday-chatgpt-robots-fears.
Raji, Inioluwa Deborah, Elizabeth Kumar, Aaron Horowitz, and Andrew Selbst. 2022. “The Fallacy of AI Functionality.” In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22), 959–72. https://doi.org/10.1145/3531146.3533158.
Russell, Stuart. 2019. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Penguin Random House.
Shumailov, Ilia, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, and Ross Anderson. 2023. “The Curse of Recursion: Training on Generated Data Makes Models Forget.” Preprint, last revised May 31. https://doi.org/10.48550/arXiv.2305.17493.
Torres, Phil. 2023. “Existential Risks: A Philosophical Analysis.” Inquiry 66, no. 4: 614–39.
Tutt, Andrew. 2017. “An FDA for Algorithms.” Administrative Law Review 69: 83–123. https://dx.doi.org/10.2139/ssrn.2747994.
Widder, David Gray, Sarah West, and Meredith Whittaker. 2023. “Open (for Business): Big Tech, Concentrated Power, and the Political Economy of Open AI.” Preprint, submitted August 17. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4543807.
Zirpoli, Christopher T. 2023. “Generative Artificial Intelligence and Copyright Law.” Congressional Research Service Legal Sidebar LSB10922. Updated September 29. https://crsreports.congress.gov/product/pdf/LSB/LSB10922.