Beyond Section 230: Principles for AI Governance
[H]istory repeats itself, and that’s just how it goes.
— J. Cole
It is often said that "a lie can travel halfway around the world while the truth is still putting on its shoes." Some say the phrase belongs to Mark Twain. Or, perhaps, Jonathan Swift. Others say Winston Churchill. This confusion reminds us that sometimes we cannot pin down how, why, or by whom a statement was made. Yet that neither stops falsehoods from spreading nor prevents people from blindly accepting them. The truth is: Once a falsehood, no matter how small, enters the public consciousness, it takes on a life of its own. And just as the butterfly flaps its wings, those falsehoods' effects can be felt around the world.
The advent of commercial generative artificial intelligence (GAI) has transformed the way humans interact with technology, and vice versa. In the internet era, platforms like Myspace, Facebook, and Twitter were able to flourish, in part, because of § 230 of the Communications Decency Act. Congress and the courts foreclosed publisher and distributor liability for internet platforms, insulating a nascent industry from defamation lawsuits spawned by the explosive rise of unverified user-generated content. However, while the statute has provided many benefits, some have argued that limiting platforms' liability has had unintended, deleterious effects on people and society more broadly through the spread of harmful falsehoods.
In some ways, it seems regulators have learned from that lesson. GAI’s near ubiquity has spawned a bevy of regulatory efforts seeking to rein in the technology — everything from executive orders promoting responsible innovation to states mandating inclusion of GAI content provenance data. But, if § 230’s unintended effects have taught us anything in terms of regulating online speech platforms, it is that we need regulations that focus on promoting accountability, transparency, and democracy. Based on existing literature, this Chapter is meant to be a holistic, yet non-exhaustive, introduction to such an approach.
This Chapter proceeds in four sections. Section A examines § 230 as a cautionary tale, using Facebook as the paradigmatic example of how emerging technologies can create harm without adequate government oversight. Section B discusses commercially available GAI systems and the risks they pose to users and the public writ large. Section C reviews various legal, legislative, and ethical approaches to GAI governance using the lessons learned from section A. And finally, section D discusses the First Amendment considerations courts and policymakers should think through for GAI regulations.
A. Section 230: A Cautionary Tale
Section 230 needs no introduction. This section is not a recounting of the statute’s history, or even its direct application to GAI. Instead, it frames § 230 as a cautionary tale of the externalities that can arise from courts’ and regulators’ failure to fully appreciate the risks of emerging technologies. The resulting lessons should inform regulatory approaches to GAI.
1. Section 230: From Prodigy to Gonzalez. — At the dawn of the internet age, Congress enacted § 230 to protect a nascent industry from ruinous litigation arising from platform companies' moderation of third-party content hosted on their sites. Congress was responding to Stratton Oakmont, Inc. v. Prodigy Services Co., where a New York court found that an internet company was liable for hosting defamatory content uploaded by its users because it "exercised sufficient editorial control over" that content. Recognizing the chilling effects Prodigy would have on speech and commerce, Congress drafted § 230 to foreclose lawsuits that held platforms liable as the "publisher[s] or speaker[s]" of third-party content. Courts later cemented Congress's intent. In Zeran v. America Online, Inc., the Fourth Circuit held that § 230 immunized AOL from a defamation suit that sought to impose liability for third-party content, despite the company knowing the content was defamatory. Zeran's holding ultimately charted the course for the statute's interpretation for the better part of the next three decades.
Zeran no doubt laid the path for the modern internet, but some also see it as § 230's "original sin." The holding has been criticized for "ignor[ing] [§ 230's] text and history" and for lacking critical engagement with Congress, resulting in an era of, frankly, absurd rulings shielding platform companies when they knowingly facilitate harm. In 2023, the Supreme Court had the chance to review the statute for the first time in Gonzalez v. Google LLC and determine whether platforms could be held liable for harms linked to algorithmically promoted terrorist content, consistent with § 230's immunity. Yet, the Court was reluctant to disrupt Zeran's decades of precedent, ultimately leaving the law undisturbed.
By not considering how § 230’s immunity shield, augmented by Zeran’s broad interpretation, may have outgrown its original purpose, Congress and the courts left users without sufficient pathways for accountability when platforms negligently or knowingly caused harm. Facebook is a clear example of how some platforms have avoided liability for hosting, amplifying, and curating harmful content due, at least in part, to § 230’s protections.
2. The Unintended Consequences of Section 230. — This section uses Facebook as a case study to examine how § 230's protections have shaped the platform's growth while exposing users, society, and democracy to its darker impacts. For nearly a decade, the platform has been accused of everything from amplifying harmful content to undermining trust in democracy and serves as a prime example of how loosely regulated, emerging technologies like GAI could enable intolerable societal risks.
(a) Inadequate Accountability Measures. — Take Force v. Facebook, Inc., a case involving “terrorist attacks by Hamas against five Americans in Israel.” The plaintiffs alleged that Facebook failed to remove Hamas’s “pages and associated content” after the terrorist group used the platform to encourage murder and other acts of terrorism in Israel. Moreover, the plaintiffs alleged that Facebook “directed [this] content” to the would-be perpetrators’ “personalized newsfeeds.”
Facebook successfully invoked a § 230 defense. The court explained that the company’s alleged conduct is precisely what § 230 immunizes. According to the court, the platform’s (automatic) use of its algorithms to direct terrorist content to users is simply the “arranging and distributing [of] third-party information” — that is, publishing — and holding otherwise would upend protections for internet companies performing similar functions.
In a partial dissent, Chief Judge Katzmann appeared incredulous that § 230 shields platforms from harms linked to algorithmic curation. He argued that § 230’s protections are only triggered when a claim alleges that the platform is the “publisher of specific third-party content.” But by suggesting friends, groups, and events, the company sends its own messages to users, not simply another user’s content. Chief Judge Katzmann explained that this sort of publishing is outside the “editorial functions that [§ 230] immunizes.”
Force highlights a significant consequence of courts' inattention to Zeran's breadth: The decision's broad interpretation of § 230 has left individuals without recourse for harms caused by algorithmic curation in novel situations.
(b) Lack of Transparency. — Section 230 allows platform defendants to dismiss lawsuits before discovery. Typically, this means that when a platform knowingly amplifies falsehoods or fails to adequately protect users from harmful content, victims of a platform's malfeasance are denied the opportunity to litigate their claims on the merits. Platforms' opaque operations, protected in part by § 230, have thus fueled scandals like those revealed in the Facebook Files.
Facebook is a good example of how a lack of transparency can lead to harm. Internal documents revealed that the company changed its algorithmic ranking system in 2018 to promote “meaningful social interactions” (MSIs). But the change also rewarded “[m]isinformation, toxicity, and other violent content.” And when company researchers brought potential solutions to Facebook’s senior leaders, the company did not pursue them, choosing instead to prioritize the company’s growth initiatives.
Facebook’s lack of transparency, enabled by courts’ interpretations of § 230, prevents victims of platforms’ misinformation and toxicity from holding the company accountable. Absent discovery, the platform’s moral and legal culpability for prioritizing profit over safety remains obscured.
(c) Destabilizing Democracy. — The lead-up to the January 6th insurrection represents yet another failure to mitigate harm, in part due to Facebook’s reliance on § 230’s protections. President Donald Trump and his coconspirators hatched the lie that the 2020 Presidential Election was stolen, and it spread like wildfire on platforms like Facebook. Before the election, Facebook had deployed “break glass” measures like labeling misinformation by political figures and cracking down on harmful Facebook groups to mitigate the effects of misinformation. However, after the election, the company rolled back several of these measures just as the “Stop the Steal” movement was proliferating on Facebook, again “prioritizing platform growth over safety.”
After the election, a “startling[]” percentage of political content viewed by users contained forms of election denialism, with many comments displaying “combustible . . . misinformation.” Internal documents confirm Facebook knew insurrectionists were coordinating on the platform, but its enforcement was inadequate. A report by the January 6th Select Committee later revealed that Facebook’s reluctance to enforce its policies was driven by its user growth goals and fear of reprisal from the political right.
Ultimately, Facebook’s prioritizing of its own self-interest, despite overtures about its duty to protect democracy, led to a moment where the United States almost lost control of its most fundamental institutions. Without adequate government oversight, the company allowed falsehoods to spread unchecked, eroding public trust and fueling political instability.
3. Takeaways from Section 230’s Cautionary Tale. — Judicial deference, coupled with Congress’s failure to seriously engage with the risks posed by emerging technologies, is the cautionary tale of § 230. Leaving the courts to unthinkingly broaden the statute, absent legislative or regulatory oversight, has arguably contributed to consequences that extended beyond Congress’s original intent. This section serves as a reminder of the danger of failing to keep a pulse on the societal impacts of rapidly evolving technologies and points toward three principles for governing GAI.
First, accountability should be central to any regulatory scheme. Force demonstrates how well-intentioned laws for emerging technologies can leave users without access to justice when they lack adequate oversight and accountability. Second, transparency is important because technology companies, like social media and GAI platforms, are increasingly ubiquitous, yet the current regulatory environment — perhaps due to lawmakers’ lack of understanding — allows these platforms to self-regulate in ways that obscure their potential legal and moral culpability. Lastly, given the power of large technology companies, we should be mindful of how they affect our democracy. January 6th illustrates the dire consequences of unchecked lies and platform companies’ failure to act against misinformation.
The lessons from § 230's unintended consequences show why prioritizing accountability, transparency, and democracy is table stakes for emerging technology regulation. But GAI presents unique challenges that go beyond § 230. Understanding these challenges and how they can erode trust and exacerbate real-world harms is an important precursor to vindicating the lessons set out above. The next section explores GAI's risks and challenges, setting the stage for an approach that promotes accountability, transparency, and democratic values.
B. GAI and the Propagation of Harmful Falsehoods
Like social media, GAI promises many benefits, ranging from drug discovery to content creation, but it also presents unique risks. GAI disrupts traditional lines of accountability among platforms, developers, and users, threatening existing notions of liability against those actors for false or misleading speech. Its propensity to "hallucinate" false information and its use to create deepfakes have the potential to cause harmful falsehoods to spread at an unprecedented scale. And while lawmakers have taken notice, the lack of comprehensive legislation is worrying. Based on existing literature, section B explores these issues in greater depth.
As an overview, GAI refers to technology that generates content (output) based on user queries (inputs). GAI tools owned by companies like OpenAI and Google generate output by predicting patterns from vast datasets scraped from the internet. These tools produce content that seems human-like but, in fact, is entirely shaped by their training data and statistical predictions of the words most likely to follow in sequence. They neither (automatically) verify the truth of the statements they generate nor possess the capacity to reflect on their inability to do so. This enables the anthropomorphized interactions we have with GAI yet introduces significant risks.
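To make that point concrete, the toy sketch below illustrates the statistical character of this process. It is not how any commercial GAI system is implemented; real models rely on neural networks trained on vast corpora. But even this miniature, assumption-laden version shows the core dynamic: the program simply continues whatever patterns appear in its training text, and nothing in the loop checks whether the result is true.

```python
# Toy next-word generator (illustrative only; real GAI systems are far more
# sophisticated). The point: output is sampled from patterns in the training
# text, and no step verifies whether the generated claim is accurate.
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Record which words follow which in the training text."""
    words = corpus.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def generate(model: dict, start: str, length: int = 8) -> str:
    """Emit a statistically plausible continuation, accurate or not."""
    word, output = start, [start]
    for _ in range(length):
        candidates = model.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample from observed patterns only
        output.append(word)
    return " ".join(output)

# If the training text contains a falsehood, the model will repeat it fluently.
model = train_bigram_model(
    "the senator was cleared of fraud the senator was accused of fraud"
)
print(generate(model, "the"))
```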
One unique risk is GAI’s ability to generate false or misleading content without accountability. Hallucinations — the generation of false output — are a major issue. They are not necessarily the result of intentional deception by the GAI tool or its developers, but rather reflect the quality and (in)completeness of training data, incorrect learning patterns, and biases in training. This means they may inadvertently propagate very real-sounding (harmful) falsehoods, which is particularly problematic when these falsehoods, for example, defame individuals. What is more, typically, no one person is responsible. While plaintiffs can allege intent to generate defamatory content or rely on negligence torts, it may still be difficult to hold anyone (a person, chatbot, or company) accountable.
Deepfakes pose another unique risk. They are highly realistic forms of synthetic media that replace one person's likeness with another's, usually depicting the person in an image or video doing or saying things they never did. There are popular use cases of this technology, from Kendrick Lamar's morphing into Kanye West to Jordan Peele's terrifyingly accurate portrayal of former President Obama. While deepfakes have gained notoriety for their use in popular media, they have also been a challenge to our sense of reality in public life — from political disinformation to some of the most heinous forms of nonconsensual intimate imagery. Deepfakes portraying individuals engaging in illegal or immoral behavior could cause irreparable harm to that individual's reputation, even if the content is quickly debunked.
The harm hallucinations and deepfakes can cause is exacerbated by how quickly GAI-produced content can flood social media platforms. GAI can produce high volumes of seemingly real content almost instantly and already poses a challenge for social platforms' moderation teams. Machine-generated falsehoods paired with the viral nature of social media may leave little opportunity for victims to effectively mitigate resulting harms. Ultimately, the increasing accessibility and availability of these tools is a powerful threat to truth and the integrity of public discourse.
GAI's unique risks pose problems for transparency and democracy more broadly due in part to the lack of a comprehensive regulatory regime to provide accountability. While lawmakers have convened to address many of the issues GAI raises, many proposals have stalled at the state or federal level, are more concepts than actual plans, or rely on voluntary, rather than mandatory, compliance. Since GAI is poised to drastically transform the information ecosystem, this piecemeal approach is concerning and risks leaving GAI companies to regulate themselves, echoing § 230's issues.
These parallels highlight the urgent need to tackle GAI’s distinct challenges based on the principles laid out in section A. Section C explores how applying these principles might inform regulations addressing GAI’s risks.
C. Applying Lessons from Section 230 to Future GAI Regulation
As GAI becomes ubiquitous, legal frameworks must evolve to address its harms. A comprehensive national approach is ideal, but partisan divides over AI regulation could make near-term action unlikely. Section C examines existing legal doctrines and principles based on the lessons from § 230's unintended consequences, highlighting legal remedies for GAI harms, transparency efforts, and ways to align GAI with democratic values.
1. Potential Legal Remedies. — Accountability should be the foundation of any regulatory scheme for GAI. As section A argues, § 230 ushered in an era of “move fast and break things,” leaving individuals without adequate recourse for harm they experienced. While courts will likely find that § 230’s immunity does not extend to GAI, without a comprehensive regulatory framework, individuals may still struggle in the search for accountability. Below is a review of how traditional tort doctrines — such as defamation, products liability, and public nuisance — might evolve as remedies for GAI-related harms, drawing on existing literature.
(a) Defamation. — Defamation offers a logical starting point for plaintiffs seeking accountability for harmful falsehoods generated by GAI because the tort provides remedies for reputational harm caused by negligent or reckless publication of false speech. However, attributing intent to GAI is difficult because these tools do not think like humans. They are thoughtless people “pleaser[s]” and cannot evaluate their output. This raises questions about whether GAI can truly act with malice in defaming a public figure or negligently harm a private person by failing to verify its output as a “reasonable person” would. Developers may point to this conundrum and disclaimers noting GAI’s unreliability as defenses to defamation claims.
Plaintiffs may bypass those defenses, at least at the motion to dismiss stage. In Walters v. OpenAI, L.L.C., Mark Walters, a radio host, filed a defamation suit against OpenAI in a Georgia state court. He alleged that ChatGPT defamed him when a journalist prompted it to describe a lawsuit the journalist was reporting on. In response, ChatGPT claimed that Walters was the subject of the suit, "accused of defrauding and embezzling funds from [a nonprofit foundation]." OpenAI moved to dismiss, arguing that the journalist could not have understood ChatGPT's statements as defamatory because he knew they were false and the chatbot's disclosures indicated its outputs required human verification. OpenAI also argued Walters's claim failed because, as a public figure, he did not adequately allege actual malice. Ultimately, the court denied OpenAI's motion.
The order, though short, suggests these defenses were unavailing. OpenAI's claim that the journalist knew or should have known ChatGPT's statements were false does not address the key issue: The question is not whether the recipient knew the statement was false, but whether ChatGPT's output could reasonably be understood as a factual assertion. Despite its disclaimers, OpenAI actively promotes ChatGPT's reliability, inducing users to treat its outputs as fact. Moreover, as to the intent standard, while Walters conceded that he is a public figure and must plead actual malice, the court appeared to accept his claim that, given Walters's notoriety, OpenAI should have known ChatGPT's statements were false, even though its developers did not produce the statement.
Time will tell how this case will play out, especially given the Supreme Court’s opinions in New York Times Co. v. Sullivan and Gertz v. Robert Welch, Inc.; but as it stands, plaintiffs may find courts willing to entertain defamation claims against GAI companies.
(b) Products Liability. — Products liability is another way plaintiffs might seek accountability for GAI-related harms. Scholars suggest that plaintiffs may have success styling their defamation claims as products liability lawsuits since these claims are focused on the failure to adopt reasonable measures in GAI training or deployment to mitigate foreseeable harms. For example, a plaintiff may allege the defendant did not exercise reasonable care in designing a GAI tool or in warning users about its risks, resulting in foreseeable harm. Or, the same plaintiff may allege that, despite the defendant’s exercise of reasonable care in a GAI tool’s design, the manufacturing or warnings were still defective. These claims are particularly relevant in cases where GAI hallucinations cause reputational harm.
The success of products liability claims in recent cases against online speech platforms suggests courts are amenable to this theory. In Lemmon v. Snap, Inc., the parents of teens killed in a car crash filed a wrongful death suit against Snap, alleging the company’s speedometer filter was negligently designed because it encouraged users to drive at excessive speeds and rewarded the behavior. The Ninth Circuit ruled that the lawsuit did not seek to hold Snap liable as the “speaker” of third-party content but because it “violat[ed] its distinct duty to design a reasonably safe product.” And, in Anderson v. TikTok, Inc., the Third Circuit held that TikTok’s algorithmic promotion of the “Blackout Challenge,” which led to a minor’s death, was not protected by § 230, because the plaintiff’s defective design claims sought to hold TikTok liable for its own “expressive activity” — the curation and dissemination of harmful content — rather than third-party speech.
This approach is not without its challenges. First is the issue of whether GAI platforms are even considered products. Although there is no consensus, many courts have not considered software a product, partly due to its intangibility. Another issue is the First Amendment and the Court's opinions in Sullivan and Gertz. Plaintiffs may try to circumvent these standards by claiming inadequate disclosures or negligence in a model's design, but courts may either reject the idea that GAI developers had the requisite intent, or decline "to impose strict liability for the provision of ideas or information, even when it results in serious harm." Finally, courts typically balance a product's risk and utility when evaluating product liability claims by determining whether its "utility outweighs its inherent risk of harm." And so, while human review could mitigate GAI hallucinations, courts might find this solution prohibitively expensive and likely to impair the software's functionality. Ultimately, these claims will test courts' ability to adapt legal doctrines to address the unique challenges posed by GAI while balancing the trade-offs between accountability and innovation.
(c) Public Nuisance. — Public nuisance claims present a novel, underutilized path for plaintiffs suffering GAI-related harms. They provide a remedy where there “is an unreasonable [and significant] interference with a right common to the general public,” particularly in the realm of health and safety. Public nuisance claims have been levied against polluters, the tobacco industry, and opioid manufacturers and distributors. They are especially valuable for addressing societal harms where “standard regulatory tools have failed or been exploited.”
Public nuisance claims have been filed against social media platforms in recent years. In In re Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, hundreds of lawsuits were consolidated in the Northern District of California against platforms like Facebook and Snapchat. The litigants argued those platforms created a public nuisance by “design[ing] their [sites] to foster compulsive use and addiction in minors,” harming their mental and physical health. The platforms countered, arguing, inter alia, that the plaintiffs’ claims lacked the required nexus between the defendants’ conduct and land use and did not involve a public right. The court disagreed, finding that most states no longer limit public nuisance claims to land use and that the platforms’ interference with public health and safety mirrored harms caused by e-cigarette and opioid manufacturers. Ultimately, the court allowed the lawsuit to proceed, permitting most of the plaintiffs to seek abatement of the platforms’ actions.
In re Social Media shows how courts may adapt doctrines like public nuisance to address GAI-related harms. Framing GAI harms as unreasonable interferences with public rights could allow litigants to hold developers liable for contributing to societal injuries. While applying public nuisance law to GAI presents challenges, like convincing courts these lawsuits are within the ambit of the doctrine, it offers another framework for addressing GAI’s societal risks.
2. Transparency as Accountability. — Transparency around GAI training data, output, and provenance is essential for managing the societal risks of GAI throughout its lifecycle. Below are potential approaches to implementing transparency frameworks based in part on laws that have been proposed or enacted to mitigate the societal risks of GAI.
(a) Training Data Quality. — GAI developers should take reasonable steps to ensure GAI models are trained using well-sourced, quality data to mitigate the potential for false or misleading outputs. They should consider whether the dataset contains low-quality content and whether questionable sources can be filtered out. Are the data about specific people linked to reliable public records? The National Institute of Standards and Technology's (NIST) AI Risk Management Framework provides a path for establishing standards to reduce hallucinations and ensure accountability in GAI development. The framework promotes transparency and integrity through robust data governance practices. To strengthen it, regulators should require that developers certify compliance with NIST's framework or that they have similar programs in place that uphold rigorous data transparency and reliability standards.
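As a purely illustrative sketch of what such screening might look like in practice, the hypothetical filter below keeps or drops scraped documents based on simple provenance signals. The field names, domain lists, and heuristics are invented for the example; they are not drawn from the NIST framework or from any particular developer's pipeline.

```python
# Hypothetical pre-training filter; field names and domain lists are
# illustrative placeholders, not any real developer's configuration.
TRUSTED_DOMAINS = {"courts.example.gov", "encyclopedia.example.org"}
BLOCKED_DOMAINS = {"rumor-mill.example.net"}

def keep_document(doc: dict) -> bool:
    """Return True if a scraped document passes basic provenance checks."""
    domain = doc.get("source_domain", "")
    if domain in BLOCKED_DOMAINS:
        return False
    # Claims about identifiable people should trace to a verifiable record
    # or come from a source already judged reliable.
    if doc.get("mentions_named_person") and not doc.get("cites_public_record"):
        return domain in TRUSTED_DOMAINS
    return True

corpus = [
    {"source_domain": "encyclopedia.example.org",
     "mentions_named_person": True, "cites_public_record": True},
    {"source_domain": "rumor-mill.example.net",
     "mentions_named_person": True, "cites_public_record": False},
]
training_set = [doc for doc in corpus if keep_document(doc)]  # drops the rumor
```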
(b) Output Verification. — GAI tools should have some method to verify their outputs to limit the spread of false information. Retrieval-augmented generation (RAG) is an example that some researchers have proposed to minimize hallucinations. Through RAG, a GAI model references a specific, curated database of related documentation to verify its knowledge on a subject before it generates output, and some industry leaders have claimed that it can "reduce[] hallucinations to nearly zero." While it has its drawbacks, requiring GAI developers to implement reasonable output verification methods like RAG — and other comparable techniques — can reduce the likelihood of individuals being harmed by false information. Increasingly, laws like the EU's AI Act require AI systems to clearly document their risk mitigation methods for high-risk AI output. Implementing these measures not only ensures compliance but also helps curb the societal risks of misinformation.
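For readers unfamiliar with the technique, the sketch below shows RAG's basic shape under simplifying assumptions: retrieval here is naive keyword overlap rather than the embedding-based search real systems use, and llm_generate is a hypothetical stand-in for whatever model API a developer actually calls, not a real library function.

```python
# Minimal retrieval-augmented generation sketch. `retrieve` uses naive keyword
# overlap purely for illustration; production systems typically use vector
# embeddings. `llm_generate` is a hypothetical placeholder, not a real API.
def retrieve(query: str, documents: list, k: int = 2) -> list:
    """Rank curated documents by keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(query_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_rag(query: str, curated_corpus: list) -> str:
    """Ground the model's answer in retrieved sources instead of memory alone."""
    context = "\n".join(retrieve(query, curated_corpus))
    prompt = (
        "Answer using only the sources below. If they do not contain the "
        f"answer, say so.\n\nSources:\n{context}\n\nQuestion: {query}"
    )
    return llm_generate(prompt)

def llm_generate(prompt: str) -> str:
    """Placeholder for a real model call; swap in the API of your choice."""
    raise NotImplementedError("Connect this to an actual GAI model.")
```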
(c) Labels & Disclosures. — Clear and conspicuous labels and disclosures are essential to curbing the spread of false or misleading GAI output. Experts generally agree GAI output should have disclosures that disclaim reliability, as research shows such warnings reduce user trust in misleading information. Lawmakers have proposed measures like overlaying watermarks and appending AI-generated content labels to GAI-produced content. For example, California's AI Transparency Act will require GAI developers to enable content provenance in AI-generated media and provide users with tools to determine when such media has been developed by GAI. It will also require GAI developers to offer the option to include a disclosure on any content produced by the GAI. By requiring AI developers to embed labels in GAI output, legislation like California's could aid in reducing the spread of falsehoods and holding developers accountable for content their tools produce.
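One way a developer might approach such a labeling requirement, sketched here under illustrative assumptions, is to bundle generated text with a visible disclosure and machine-readable provenance fields. The field names below are hypothetical, and real provenance schemes (such as cryptographically signed manifests) are considerably more involved than a simple hash.

```python
# Hypothetical labeling helper. Field names are illustrative; real content
# provenance standards involve signed manifests, not just a hash.
import hashlib
import json
from datetime import datetime, timezone

DISCLOSURE = "[This content was generated by an AI system and may contain errors.]"

def label_output(text: str, model_name: str) -> dict:
    """Attach a visible disclosure and basic provenance metadata to GAI output."""
    return {
        "content": f"{text}\n\n{DISCLOSURE}",
        "provenance": {
            "generator": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            # A hash lets a later tool verify the text has not been altered.
            "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }

print(json.dumps(label_output("Example model output.", "example-model-v1"), indent=2))
```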
3. GAI for Democracy. — Up to this point, there has been very little discussion of how GAI can enable democracy. As section A demonstrates, strong democratic principles are important in the development of GAI, lest these tools be co-opted to sow dissent and propagate falsehoods for personal political gain. "AI alignment" can bring GAI systems "in line with human intentions and values," allowing developers to embed principles like democracy and civic empowerment in AI's core functioning to mitigate the risks of misinformation and foster trust in the democratic process.
Today, many organizations are harnessing AI to disrupt political misinformation, reflecting AI alignment principles in both design and deployment. Some organizations have used tools like ChatGPT for open-source investigations, mapping verified images of the war in Gaza. Others have employed methods like watermarking or fingerprinting during generation of synthetic media to identify artificial content and help individuals better identify when GAI may be false or misleading. These actions demonstrate that a healthy democracy needs a well-functioning information ecosystem, and AI alignment helps these tools prioritize authenticity and adherence to democratic values.
GAI can also be a source of civic engagement by reshaping the way we think about deliberative democracy — the use of reasoned debate to find common ground on complex political issues. Taiwan’s vTaiwan platform exemplifies this by using machine-learning software to aggregate citizens’ responses to pressing issues like telemedicine and ride-sharing, with the government acting on more than 80% of the issues discussed. Others have successfully explored using AI facilitators to host large-scale deliberations that democratize conversations and enable broader and more meaningful participation among different stakeholders. While challenges like algorithmic bias and detecting nuance remain, democratically aligned GAI offers the potential to strengthen information integrity and make civic engagement more inclusive and participatory.
D. The Marketplace and Its Discontents
No conversation about GAI would be complete without discussing the First Amendment, as the doctrine casts a large shadow over GAI regulation. Constitutional questions about speech protections are central to this debate, and while addressing every nuance is not feasible here, it is essential to cover a few key points. Though GAI tools can cause significant harm, their owners and users likely have some First Amendment protections. But the Supreme Court's "Lochnerian" interpretation of the First Amendment has made it difficult to regulate harmful speech. Some argue for caution and modest improvements to the so-called "marketplace of ideas," but this Chapter and other thinkers argue that we can walk and chew gum at the same time. GAI-produced speech should not be protected in the way human-produced speech is, given the software's lack of intent, comprehension, and autonomy. Instead, the legal and ethical approaches discussed in section C should be reviewed under intermediate scrutiny because they address GAI's risks while imposing only incidental burdens on core free speech values. To support that doctrinal approach, this section will explain why and how the theoretical underpinnings of the First Amendment do not apply to GAI and will ultimately propose a lens through which to properly scrutinize GAI accountability and transparency measures.
In Liar in a Crowded Theater: Freedom of Speech in a World of Misinformation, Professor Jeff Kosseff examines the First Amendment’s evolution in protecting falsehoods. The law protects lies about whether one received the Congressional Medal of Honor, misleading statements about a political adversary’s actions, and rap lyrics that may stretch the truth. The gravamen of Kosseff’s argument, in part informed by the marketplace theory of free speech, is that, despite the real-world harms caused by mis- and disinformation during events like the January 6th Capitol riot, “[r]egulation and liability are not terribly effective ways to address the harms of false information.” The Supreme Court largely echoes this view, also favoring variations of the marketplace theory, which assume “[t]he remedy for speech that is false is speech that is true.” While Kosseff acknowledges the marketplace’s limitations throughout the book, he advocates for strengthening Americans’ ability to debate truth through non-governmental interventions like enhanced civics and media-literacy education.
But this view is misguided when applied to GAI. As this Chapter highlights, GAI poses unique challenges to the information ecosystem, such as the spread of false information and the technology’s propensity to blur the lines between fact and fiction. Unregulated, these risks could lead to both an explosive rise of false information in the theoretical marketplace and a host of physical harms in the real world. What value is there in speech that assists a child’s suicide? How can GAI “improve our own thinking both as individuals and as a [n]ation” when there are those who would use it to undermine our political autonomy? The First Amendment should not give democracy the means to subvert itself by “convert[ing] the [law] into a suicide pact.” A more thoughtful approach could address the risks posed by this technology by considering the purpose of the First Amendment alongside GAI’s lack of intent, capacity for meaningful engagement, and potential for rampant disinformation and harm to the democratic process.
First, courts and regulators must recognize that GAI is “[n]ot like us” and therefore should not receive the same protections as humans. The purpose of the First Amendment is to protect core forms of human expression, but it is unclear if GAI has the potential to express anything, as it does not have “morality, intelligence or ideas.” Some argue that GAI is code-based and is protected as an expression of its designers, or alternatively as a form of corporate speech. Perhaps it is true that the technology produces something resembling speech, but that “speech” does not appear to belong to anyone. GAI trains itself to “mimic billions of interrelated statistical regularities,” the outputs of which are not tied to anyone’s “thoughts, beliefs, [or] chosen messages.” Unlike the code-like expression protected in video games and software programs, GAI is not designed to execute a particular message, but to respond to every message. Even still, others argue the First Amendment might protect GAI-produced content because the law is agnostic toward who is speaking and cares more about the speech itself. This is inaccurate, of course, but more to the point, courts have never ascribed constitutional rights to inanimate objects. Attributing human-like protections to GAI would be akin to giving rights to iPhones, game consoles, or any entity lacking the ability to engage in the meaningful exchange of ideas that the First Amendment is designed to safeguard. At best, GAI represents a shadow of human speech, but not its form.
Further, some scholars suggest that speech rights for AI output are better understood through the rights of users and receivers of its services, but that leaves questions unanswered. Certainly, courts have protected the rights of users, and so it is plausible that GAI outputs could be protected speech, but that does not happen automatically. A GAI chatbot's response to a prompt becomes the user's speech only once the user adopts it. This is because when GAI produces content, it may "not convey ideas that the user had previously considered or would endorse." Unlike other human-directed, speech-enabling tools such as video cameras or microphones, GAI chatbots take user prompts and generate their own content. If a user asked GPT, "Who is the greatest rapper alive?," and it returned "Drake," not only would that be an obvious hallucination, but it might not be that user's speech.
In contrast, receivers of GAI output may have a First Amendment right to obtain it. The Court has explained that listeners have rights to receive prescription drug advertisements, corporate speech on matters of public concern, and even foreign propaganda. But “the interest is purely in listening,” a right that is weakly protected when the source of information lacks First Amendment protections. The right to listen finds some of its source in the marketplace theory, which is predicated on human speakers exchanging ideas. And where the First Amendment rights of speakers have been limited in the marketplace, the Court has simultaneously limited listeners’ rights to receive that speech. Simply put, listeners of GAI-produced content are not receiving messages from a thinking human with First Amendment protections. These distinctions highlight the need to thoughtfully balance users’ and listeners’ rights with the maxim that First Amendment protections are rooted in safeguarding human expression.
The “User/Listener” framework described above is helpful for understanding GAI systems for what they are: conduits for expression. That is because, much like television facilitates the spread of information, these tools facilitate expression. In this light, there may be hope for regulating GAI without regard to the content it produces, but as a tool for “transmitting speech, or a medium for symbolically expressing it.”
Content-neutral laws that seek to regulate speech-enabling technology, but not expression itself, would likely trigger intermediate scrutiny — and survive. For example, under United States v. O'Brien, laws regulating GAI can withstand constitutional scrutiny if they are within the government's power, the government's interest is substantial and unrelated to the suppression of speech, and the regulation does not burden more speech than necessary. Moreover, in Zauderer v. Office of Disciplinary Counsel, the Court explained that the government can compel speakers to provide purely factual and not unduly burdensome "warning[s] or disclaimer[s]" on commercial speech to remove any opportunities for "confusion or deception." Importantly, the Court noted that the assessment of such speech may "require resolution of exceedingly complex and technical . . . issues." And finally, the Court, in a pair of decisions both captioned Turner Broadcasting System, Inc. v. FCC, upheld content-neutral laws meant to regulate cable operators under intermediate scrutiny, reasoning that the government's regulations were unrelated to speech suppression, but instead a means of ensuring fair competition among a diversity of viewpoints in the marketplace of ideas.
If this framework holds, laws like those suggested in section C might present a path forward. For example, laws mandating provenance measures like AI watermarking or requiring prominently placed disclosures about the unreliability of GAI-produced content could prevail under intermediate scrutiny. The government has a substantial interest in maintaining a well-informed citizenry and can do that by ensuring that speech that is misleading or prone to inaccuracies is also accompanied by factual, noncontroversial information to mitigate potential deception. These content-neutral regulations would promote the operational reliability of GAI systems, remove the government from deciding “truth,” and better equip the public to choose between conflicting viewpoints.
However, regulating GAI to prevent the spread of misinformation may strike some as content based, and thus subject the government’s efforts to strict scrutiny. Leave aside the argument that GAI content is not in and of itself First Amendment–protected speech. A statute that appears content neutral on its face but that cannot be “justified without reference to the regulated speech” would likely require narrow tailoring and a sufficiently compelling governmental interest. The Court has explained on various occasions that false speech, absent the requisite knowledge and some form of cognizable harm, is still protected speech (despite “[c]alculated falsehood[s]” being both valueless and disruptive to the marketplace of ideas). And so, regulations that focus on eliciting truth may carry the warning sign of the government “favoring some ideas over others.”
Transparency-related laws that are content based may survive if they are viewpoint neutral. In City of Renton v. Playtime Theatres, Inc., the Supreme Court reviewed a zoning ordinance prohibiting adult movie theaters within 1,000 feet of a home, church, park, or school because of the alleged crime associated with the theaters and to preserve quality of life in the city. The Court recognized that while the ordinance was a time, place, and manner restriction on speech, it did not “fit neatly into . . . the ‘content-neutral’ category,” partly because of its reference to a specific category of (presumably disfavored) speakers. Nevertheless, the Court held that the law’s “‘predominate’ intent” was directed not at the theaters’ content, “but rather at the secondary effects of such theaters on the surrounding community.” Regulating the secondary effects of speech on a community, in a viewpoint-neutral manner that left alternatives for speech open, demonstrated narrow tailoring and a substantial government interest. The ordinance was reviewed under intermediate scrutiny and ultimately upheld.
In the same way, arguably content-based laws requiring GAI companies to take measures to limit the spread of misinformation should not trigger strict scrutiny. The Court in Renton explained that legislatures have significant latitude to solve “admittedly serious problems” — like lowering the quality of life in the community or facilitating crime — when it comes to the secondary effects of speech so long as any laws are viewpoint neutral. The proposals in section C related to certifying quality training data and ensuring output verifications seek to do exactly that: mitigate the secondary effects of false or misleading speech that can harm everyday people by mandating, at the very least, viewpoint-neutral risk mitigations for emerging technologies that may otherwise have none.
The spread of false information developed by GAI is indeed a very serious problem because of its ability to erode trust and undermine the marketplace of ideas. The First Amendment protects real humans engaging with one another — not shadows or hallucinations. And as this Chapter has detailed, those risks are manifesting in all aspects of public life, from schools to the news to elections. Whether it’s GAI systems generating defamatory content, spreading misinformation, or producing outputs that jeopardize the integrity of democratic processes, the societal impacts are real and potentially destabilizing. Addressing these problems is feasible by focusing on their ripple effects with minimal suppression of protected speech.
Conclusion
When it comes to GAI, everything old might be new again. GAI can give us the freedom to create and express ourselves in ways older communications technologies never could. But it brings similar perils. Section 230’s cautionary tale should be a warning to courts and regulators asleep at the wheel as GAI becomes ubiquitous. Its unique risks and potential harms echo the challenges individuals continue to face in the era of § 230’s lax regulatory regime. And while lawmakers seem more alert this go-around, the lack of comprehensive action is alarming. Truth and democracy are under assault by powerful actors hoping to bend both to their own political ends. Balancing GAI’s risks with a thoughtful approach to free speech values will be challenging, but is something we need to get right. And fast.