Abstract

With the release of large language models such as GPT-4, the push to regulate artificial intelligence has accelerated around the world. Proponents of different regulatory strategies argue that AI systems should be regulated like nuclear weapons, which pose catastrophic risk (especially at the frontier of technical capability); like consumer products, which pose a range of risks to the user; like pharmaceuticals, which require a robust prerelease regulatory apparatus; and/or like environmental pollution, to which the law responds with a variety of tools. This thinkpiece outlines the shape and limitations of the particular analogies proposed for regulating AI, suggesting that AI law and policy will undoubtedly have to borrow from many precedents without committing to any single one.