Beyond Section 230: Principles for AI Governance

[H]istory repeats itself, and that’s just how it goes.
— J. Cole1
It is often said that “a lie can travel halfway around the world while the truth is still putting on its shoes.”2 Some attribute the phrase to Mark Twain.3 Others, to Jonathan Swift.4 Still others, to Winston Churchill.5 This confusion reminds us that we sometimes cannot pin down how, why, or by whom a statement was made. Yet that neither stops falsehoods from spreading nor prevents people from blindly accepting them. The truth is: once a falsehood enters the public consciousness, no matter how small, it takes on a life of its own. And just as the butterfly flaps its wings, a falsehood’s effects can be felt around the world.
The advent of commercial generative artificial intelligence (GAI) has transformed the way humans interact with technology, and vice versa. In the internet era, platforms like Myspace, Facebook, and Twitter were able to flourish, in part, because of § 230 of the Communications Decency Act.6 Congress and the courts foreclosed publisher and distributor liability for internet platforms, insulating a nascent industry from the defamation lawsuits that the explosive rise of unverified user-generated content might otherwise have invited.7 However, while the statute has provided many benefits, some have argued that limiting platforms’ liability has had unintended, deleterious effects on individuals and on society more broadly through the spread of harmful falsehoods.8
In some ways, it seems regulators have learned from that lesson. GAI’s near ubiquity has spawned a bevy of regulatory efforts seeking to rein in the technology — everything from executive orders promoting responsible innovation9 to states mandating inclusion of GAI content provenance data.10 But, if § 230’s unintended effects have taught us anything in terms of regulating online speech platforms, it is that we need regulations that focus on promoting accountability, transparency, and democracy. Based on existing literature, this Chapter is meant to be a holistic, yet non-exhaustive, introduction to such an approach.
This Chapter proceeds in four sections. Section A examines § 230 as a cautionary tale, using Facebook as the paradigmatic example of how emerging technologies can create harm without adequate government oversight. Section B discusses commercially available GAI systems and the risks they pose to users and the public writ large. Section C reviews various legal, legislative, and ethical approaches to GAI governance, drawing on the lessons of Section A. And finally, Section D discusses the First Amendment considerations courts and policymakers should think through when evaluating GAI regulations.