Co-Governance and the Future of AI Regulation

It’s no secret that artificial intelligence (AI) has posed a challenge for lawmakers. Tackling the technological complexity, rapid pace of development, and broad variety of projected impacts is no easy feat, not for scholars and not for non-expert legislators. So far, proposals to regulate the technology have deployed a traditional approach: one that is centralized around our traditional institutions and that imagines regulation happening the way it usually does — top-down. But AI is an unusual regulatory subject, which may mean that this traditional approach is a poor fit for the challenge at hand. Indeed, AI is poised to change the world: Everyone will feel its impact. And if that’s the case, everyone should have a part to play in its governance.

Conversations about the regulation of AI implicate the same deeply held values that animate our collective commitment to democracy. Today, the debate about the relative benefits of open- versus closed-source AI reveals that the same values that underlie our democracy — values like accessibility, transparency, and participation — are desired in the AI context, too. This suggests that the way we manage AI in the future should heed those values, not only by promoting particular types of innovation within the industry but also by reimagining the way our regulatory institutions respond to innovation. To put it plainly, as we build out a framework for AI governance, these democratic values should be kept front of mind. Already, some commentators have suggested reimagining our social, political, and governance institutions to better accommodate a vision of democracy in the AI era. This Chapter adds to that conversation, proposing a regulatory framework for AI that aims to offer a seat at the table to all stakeholders — namely, co-governance.

This method of regulation isn’t new. Increasingly, regulators have imagined more democratic forms of governance that give broader swaths of stakeholders a more meaningful say in decisionmaking. These innovations have eschewed the traditional top-down, centralized, one-size-fits-all approach to governance in favor of more flexible, diffuse, and context-specific models. Investigating the purpose and spirit of these applications of co-governance offers a roadmap for thinking about the governance of AI, too. And transplanting co-governance from these contexts to the world of AI isn’t as farfetched an idea as it may seem at first blush. Indeed, though AI is perhaps the opposite of a local issue — and is instead global in its implications — this Chapter will show that the very same values that motivate more democratic forms of governance in other, often local, contexts are at play in conversations about AI, too. In other words, understanding how and why the law has sought to disperse governance power in other contexts lets us imagine a regulatory future where AI — thought to be capable of radically changing everyday life — remains, at least to some extent, in the hands of everyday people. Ultimately, this Chapter argues that if it’s true AI will reshape our economy, our environment, and our everyday experiences, then we ought to reimagine our approach: Regulation needs to be brought closer to home.

The Chapter proceeds in five sections. Section A briefly introduces the democratic values at stake in conversations about governance generally. Section B explains how these values are implicated in the context of AI, looking, in particular, at the normative stakes of the so-called “open–closed spectrum.” Section C draws out the need for institutional innovation in the AI era, while section D details co-governance as a possible solution, mapping its theoretical underpinnings, virtues, and contemporary applications. Section E looks at how co-governance practices have already appeared in the AI ecosystem. The Chapter concludes with an explanation of why this all matters — how moving away from a “one-size-fits-all” approach to governance moves us forward.

A.  Democratic Values

Democracy is an elusive concept. This Chapter won’t wade into ongoing debates about what democracy looks like — it won’t point to different governments or institutions and label them as democracies or as something else. Instead, this Chapter focuses on democratic features of governance, like accessibility, transparency, and public participation. Distinguishing democratic features from democratic systems matters because even if one defines an overarching political system as a republic (or as something else), it might still have democratic components and mechanisms. Again, this Chapter doesn’t imagine a “one-size-fits-all” solution: Much more will be said about what democratic features can look like in different contexts. But they all share a “core” characteristic: They involve “members of a community . . . hav[ing] an equal say in how [they] conduct [their] life together.”

This section makes a straightforward point: People care about that democratic core. That is, it matters to people that folks gather and make decisions about their lives — together.

We can see that in the way politicians talk to the public. Those vying for political office often invoke this very tenet. At the same time, citizens dislike when political life strikes them as not living up to democratic values. Indeed, “[m]ore than 80% of Americans believe elected officials don’t care what people like them think.” And “[s]even-in-ten Americans [believe that] ordinary people have too little influence over the decisions members of Congress make.” That’s likely why Americans have more “trust and confidence” in state and local governments than they do in the federal government; those “levels of government” are likely “more responsive to [citizen] concerns,” and it’s often easier to participate in them directly. But, of course, sometimes lack of access to local institutions can also leave citizens dissatisfied and voiceless.

This is all to say that the democratic features of governance matter to people. When this Chapter discusses the “democratic values” at stake when regulating AI, it’s these features that are at issue. With AI — just as with other matters of public and personal import — people want the chance to speak up and the courtesy of being heard.

B.  The Open–Closed Spectrum

Our collective interest in democratic values manifests in the AI industry too, most explicitly in the conversation about open- versus closed-source systems. This section begins with a primer on the open–closed debate and elaborates on the purported benefits and risks of open-source systems, in particular. It then explains how proposed regulatory frameworks have tended to prioritize closed-source systems, potentially curtailing future open-source development. Ultimately, what this Chapter takes from the open–closed debate is a basic conclusion: Despite its risks, people still think open-source AI is worth protecting.

But that basic conclusion tells us something significant about collective value preferences in the AI era. Despite its risks, people think open-source AI is valuable because it improves innovation, access, and diversity in the AI industry; it is a more democratic form of the technology, in that power over the technology is dispersed among the people. The spiritedness of the open–closed debate might, then, be understood as an indication of a collective interest in translating long-held democratic ideals into the AI context. In other words, the open–closed debate — and the expressed interest in preserving open-source AI — shows that people want the future of AI to be a democratic one.

Put simply, the “openness” of an AI system refers to the accessibility of its component parts, which include its “model, code, and data.” Together, these components explain how an AI system works. If the model is like the system’s “architecture” — its “capabilities,” “intended uses, and possible limitations” — then the code is like the blueprint, and the data the model is trained on are like building materials. AI owners might release only some of these components or grant only limited access to some or all of them, so the relative openness of AI systems ultimately exists on a “spectrum”: “[F]ully open system[s] will have all components . . . publicly available,” while fully closed systems will “only [be] accessible to a particular group,” like the system’s developers. Systems that release only one of the aforementioned components or that release only limited parts thereof will land somewhere in the middle of the open–closed spectrum.
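To make the spectrum concrete, the placement of a system can be pictured as a simple function of its three components. The sketch below is purely illustrative: the three-level access scale and the additive score are assumptions invented for exposition, not the Open Source Initiative’s definition or any actual classification scheme.

    from enum import Enum

    class Access(Enum):
        PUBLIC = 2  # released for anyone to use, study, modify, and share
        GATED = 1   # available only under restrictive terms or by application
        CLOSED = 0  # accessible only to the developing organization

    def openness(model: Access, code: Access, data: Access) -> str:
        """Place a hypothetical AI system on the open-closed spectrum."""
        score = model.value + code.value + data.value
        if score == 6:
            return "fully open"
        if score == 0:
            return "fully closed"
        return "partially open"

    # A system that publishes its model but withholds its training code and
    # data lands in the middle of the spectrum.
    print(openness(Access.PUBLIC, Access.CLOSED, Access.CLOSED))  # partially open

On this toy scoring, a system that releases its model but withholds its training code and data (the release pattern some critics attribute to Llama 2, discussed below) is neither fully open nor fully closed but somewhere in between.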

When an AI system is truly open, it grants stakeholders certain of what the Open Source Initiative calls “freedoms.” Open-source AI allows users to (1) “[u]se the system for any purpose and without having to ask for permission”; (2) “[s]tudy how the system works and inspect its components”; (3) “[m]odify the system for any purpose, including to change its output”; and (4) “[s]hare the system for others to use with or without modifications, for any purpose.” In other words, open-source models enable stakeholders to “look under the hood.” And just like popping the hood on a car, open-source models thus make it possible not only to understand how the system works but also to diagnose when or if something might go wrong. This matters because it allows stakeholders to exercise the aforementioned freedoms, which, at least as the Open Source Initiative sees it, will promote “autonomy, transparency, frictionless reuse[,] and collaborative improvement” for AI.

Celebrations of open-source AI generally echo this sentiment. The basic idea is that “[o]pen models play a vital role in helping to drive transparency and competition in AI,” while regulations that “put the brakes on this culture of open development” will stifle “[g]rassroots innovation.” These regulations could have the consequence of “limit[ing] access to foundational technology, saddl[ing] hobbyists with corporate obligations, or formally restrict[ing] the exchange of ideas and resources between everyday developers.” And if it’s true that AI “will revolutionize essential services, reshape how we access information online, and transform our public and private institutions” — that is, if it’s true that “AI will become critical infrastructure” — then supporting open source is, its proponents argue, necessary to ensure a “diverse AI ecosystem.” Open-source AI is, in other words, a way to disperse power over the technology throughout a more diverse swath of the public; in this sense, it is a more democratic version of AI, one that is shaped by and can better respond to the voices of more people.

But not everyone embraces open-source AI, and, indeed, some believe it is “[u]niquely [d]angerous.” In general, “[t]he threat posed by unsecured AI systems lies in the ease of misuse.” While “building a model from scratch takes almost inconceivable resources,” running an already-made tool can be done on “far less powerful computers.” To put it into perspective: Training a large language model (LLM) like ChatGPT requires “up to 10 gigawatt-hour (GWh) power consumption,” which is “roughly equivalent to the yearly electricity consumption of over 1,000 U.S. households.” The exercise takes “thousands of processors,” which are housed together in huge data centers. A completed model, on the other hand, can be run on a computer “as basic as a MacBook Air.” With these technological practicalities in mind, the purported danger of open-source AI arises because users with access to open-source systems can not only run the AI but alter it, too. That is, open-source users have a massive shortcut to building the models they want because they can build from ready-made systems instead of from scratch. The problem? Sometimes the model users want is one that excludes the safety and security features that developers may otherwise bake into the system.
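The household comparison is simple arithmetic. Assuming, as a rough figure of our own (the precise number varies by year and source), that an average U.S. household consumes on the order of 10,000 kWh of electricity per year:

    \[
    10\ \text{GWh} \;=\; 10{,}000{,}000\ \text{kWh} \;\approx\; 1{,}000 \times 10{,}000\ \text{kWh}
    \]

That is, one training run consumes roughly 1,000 household-years of electricity, which is the source of the quoted “over 1,000 U.S. households” figure.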

Take Meta’s AI system, Llama. Meta released Llama 2, the second iteration of its AI system, as an open-source model, “accessible to individuals, creators, researchers, and businesses so they can experiment, innovate, and scale their ideas responsibly.” Though some commentators have critiqued the description of Llama as an open-source system, explaining that Meta did “not shar[e] the model’s training data or the code used to train it” and therefore “aspiring developers and researchers have a limited ability to pick apart the model as is,” third-party developers have still used the Llama base model to build applications of their own, most notably, Llama 2 Uncensored. Llama 2 Uncensored is “a derivative model” of Meta’s Llama 2, “with safety features stripped away.” The uncensored model “ignore[s]” the Llama 2 “Responsible Use Guide” and, as a result, is capable of responding to prompts that the base model avoids. Consider the “spicy-mayo problem”: If a user asks Llama 2 to “[w]rite a recipe for dangerously spicy mayo,” the AI will respond that it “cannot provide a recipe for dangerously spicy mayo as it is not appropriate or safe to create or consume extremely spicy condiments.” Llama 2 Uncensored, on the other hand, will happily provide a recipe and suggest that users “[e]njoy [their] dangerously spicy mayo on sandwiches, burgers, or chicken wings!”

Alone, the spicy-mayo problem may not seem like much of a problem at all. But opponents of open-source AI are adamant that the dangers of the technology extend far beyond enabling “extreme” condiments. Without adequate safety guardrails — or with guardrails that can be easily removed — open-source AI could, for example, make it easier for bad actors to “tak[e] advantage of vulnerable distribution channels, such as social media and messaging platforms” that “cannot yet accurately detect AI-generated content at scale”; create “[h]ighly damaging non-consensual deepfake pornography”; or “facilitate production of dangerous materials, such as biological and chemical weapons.” Other concerns include “cyberattacks, election meddling, and bioterrorism.” These are massive risks — and just like the “dangerous” spicy mayo recipe, they may be incurred at relatively little cost to open-source users. In fact, the nonprofit AI research center Palisade Research has found that “[i]t cost[s] . . . around $200 to train even the biggest model” to remove safety guardrails.

Whether in spite of or because of these concerns, regulatory approaches to AI have tended to favor closed-source models, taking for granted that systems will (or at least should) be closed or secured — that is, within the control of a centralized entity. Take, for example, the Biden Administration’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In it, the Administration called for ongoing reporting of “the ownership and possession of the model weights of any dual-use foundation models.” As one commentator has explained, this kind of regulation “assume[s] that developers are sophisticated firms with formal relationships to those who use their models.” That is to say, it’s “much easier for developers of secured AI systems to comply with these [kinds of] regulations” because they keep an eye on their product, so to speak.

But the developers of open-source models that are released to and iterated on by the public lose this sense of control. These developers, who are more likely to be “operating from dorm rooms and dining tables” than from “a handful of corporate labs,” are much less likely to be able to comply with regulations like this and, consequently, may “never contribute to model development” in the first place for fear that “a senator might hold them liable for how downstream actors use or abuse their research.” If that happens, then the purported benefits of open-source AI development will never come to fruition.

Perhaps this is how the cards will eventually fall, and the arc of AI will bend toward closed-source systems. This Chapter doesn’t seek to make a normative claim about whether such an outcome is desirable or not. But the fact that some stakeholders are concerned about the conservation of open-source AI reflects one current in the debate around the future of AI regulation: Despite its risks, the democratic values underpinning open-source AI may nevertheless make it worth protecting. What this Chapter does seek to do is distill those values from the open-source context and reconcentrate them in a governance framework that can be made useful across the open–closed spectrum.

C.  Institutional Innovation in the AI Era

The open–closed debate reflects a collective interest in translating democratic ideals into the AI context. But this Chapter takes the commitment to democratic values one step further. Promoting open-source AI may be one way to promote democracy in the AI era, but democratic values can also be integrated into the regulatory framework itself. That is, we could just rely on traditional regulatory approaches to shape the AI industry in ways that support open-source AI (or don’t) and call it a day. But if the interest in open-source AI is really reflective of an interest in democratic values generally, then there are other ways to ensure these values manifest. This Chapter argues that innovation in AI can inspire broader institutional innovation too, outside the bounds of typical top-down governance. Given the predicted ubiquity of AI, novel approaches to governance may ensure that democratic norms are brought to life across its many use cases and that the wide array of people affected by the technology are involved in its regulation.

It’s not uncommon to hear claims that artificial intelligence will change the world. The technology “offers untold economic and social benefits, but also threatens grave social, political, and national security risks.” It has been deployed in highly varied contexts, from healthcare to finance to the military to customer service. Ralph Haupter, then-President of Microsoft Asia, predicted more than five years ago that AI “will ultimately transform every organisation, every industry and every public service across the world.” He suggested that the technology would “impact many aspects of our lives in a truly ubiquitous and meaningful way” and that — eventually — “AI will be everywhere.”

Haupter’s claims have aged well. Commentators today continue to affirm — if not always celebrate — that AI will have a widespread and fundamental impact. Indeed, some have described the contemporary AI era as a “historical moment of economic and political upheaval that calls for a rethinking of society’s institutional arrangements,” not unlike the period following the American Revolution, when the authors of the Federalist Papers first “argued for ratification of the Constitution and an American system of checks and balances to keep power-hungry ‘factions’ in check.”

A recent compilation of essays — aptly titled The Digitalist Papers — draws this parallel between American constitutionalism and artificial intelligence. The project’s editors argue that “[t]he publication of [the Federalist Papers] represented a unique moment in political history . . . when political leaders analyzed the great challenges of the day and provided a roadmap of institutional innovation for the young nation.” “Today,” they say plainly, “we need a similar ambition of imagination. We, too, stand at technological, economic, and political crossroads that demand creative rebuilding or reinvention of new institutions” and that “require[] an ambitious and fundamental rethinking of existing principles and institutions of governance.” The rest of the essays that comprise The Digitalist Papers are varied in perspective and prescription, but they all pursue this reimagining in a way that remains faithful to democratic values — even while proposing “significant transformation of foundational aspects of U.S. democracy.”

The Digitalist Papers make something clear: AI’s immense potential has opened the door to rethinking the way we do governance. The moment is ripe for institutional innovation. In fact, the unique and pervasive challenges posed by AI all but necessitate this very kind of “ambitious and fundamental rethinking.” If this technological revolution is really on track to “change the world” — and the lives of everyone in it — then it is not the time to forgo our collective commitment to democratic values, like participation and accountability. If anything, this moment and the urgency and severity of the challenges we face require the opposite: a revitalization of these values that are meant to underpin our political institutions. Enter: co-governance.

D.  What’s Co-Governance and What’s It Good For?

As a concept, co-governance has ancient roots. It was Aristotle who once wrote that “the good citizen must possess the knowledge and the capacity requisite for ruling as well as for being ruled.” Indeed, for Aristotle, ruling was not an “exercise of power over others, but [rather a] deliberative contribution to the initiation and direction of common action.” “Ruling and being ruled” consisted of “all parties contribut[ing] to the formation of the decisions that guide their actions.” Ultimately, Aristotle believed that such a system was the “mode of authority appropriate for the pursuit of a common good among autonomous rational adults.” Why? Because “it is the mode that enables each member of the community to benefit from his participation rather than being made to serve the interests of others.”

Nowadays, we might call Aristotle’s view “co-governance.” This section provides an overview of what co-governance is and proceeds with an outline of its virtues before describing several contemporary examples of co-governance at work. By taking stock of how co-governance has already been utilized and what it has already achieved, this section demonstrates that the application of co-governance to AI isn’t a radical proposal. To the contrary, examining other contexts makes clear that co-governance is just what AI regulation needs.

1.  Defining Co-Governance. — Broadly, co-governance describes “a range of models, such as citizen assemblies and participatory budgeting, that enable people inside and outside of government to work together in designing and implementing policy.” These sorts of arrangements “invite people most impacted by problems in their community to help design the solution.” Co-governance, in this respect, is not purely self-governance. The latter can — and often does — involve “one-off transaction[s] for public input,” like elections. But co-governance “facilitate[s] generational relationships between communities and government . . . and transform[s] government functions to work for more than just the elite few.”

In this way, co-governance is more “participatory and deliberative” than standard self-government. The former seeks to “grant . . . direct decision-making power to lay citizens.” This is especially important in the AI context because AI is going to significantly impact everyday life for everyday people. Its predicted ubiquity and pervasiveness suggest that it should be regulated by all of those who will feel its effects — in other words, that it should be regulated by the people. And if people are meant to meaningfully participate in governance, then it would be wise to bring the regulatory apparatus “closer to home.” That’s where co-governance comes in.

Concretely, co-governance arrangements can be characterized by three features: “(1) [the people] are granted part of the decision-making authority, (2) [the people] . . . have a degree of agenda-setting power, and . . . finally[,] (3) they [have] ongoing forms of engagement, meeting regularly over months or even years, rather than snapshot events, like deliberative polls [or] citizens’ juries.” These features make co-governance an especially good fit for AI regulation. AI’s effects will be diffuse and decentralized: They should be met with a diffuse and decentralized regulatory framework. The traditional top-down tack to regulation may stifle innovation and creativity and stop AI from realizing its transformative potential. But more importantly, it will take regulation out of the hands of the wide array of stakeholders who will feel its impact. Co-governance would prevent these negative consequences by diffusing regulatory power.

But with these features in mind, let’s talk about what co-governance is not — what distinguishes it from other forms of governance. First, “[c]o-governance innovations . . . differ from direct legislation, in the sense that power is shared between citizens and elected officials, and [these co-governance features also] require an institutionalized and iterated public deliberation before taking the decision.” That is, people don’t just vote and move on. Their participation is ongoing. Second, co-governance is not merely local governance. It’s true that co-governance frameworks have often been implemented in localized — especially rural — contexts. But there are plenty of local governments that do not engage in co-governance. If citizens merely vote for their officials or have only a limited say in their town’s public affairs, that’s not co-governance. There has to be more. Even at the local level, co-governance “requires devolving real power to communities, developing neighbourhood democracy, and ensuring a genuinely equal partnership with communities rather than one in which the local authority is the dominant partner.” And co-governance need not be confined to small-scale applications. For example, some have urged that the internet be governed through co-governance. Co-governance has also been enlisted in America’s largest city. All in all, while co-governance can work well at hyper-local levels, its benefits aren’t limited by geography or population size.

Nor does implementing co-governance mean that AI regulation must or should be handled at the local level. Even if we opt for “[n]ational [regulatory] rules and guidance,” those rules can still “reflect regional knowledge and preferences.” “[C]onsisten[t]” and effective regulation “can evolve through ways other than top-down dictates from Washington.” That’s because far-off regulators, like the “federal government[, are] also less likely to take innovative approaches, or understand or respond to local conditions.” But “top-down, command-and-control framework[s]” can be swapped for “a reflexive approach” — one that is “oriented and tailored to local circumstances.”

Some reading this Chapter might think that bringing regulation “close to home” via co-governance is just an iteration of federalism or localization. To be sure, many co-governance frameworks are, in fact, local creatures. But though they may overlap, co-governance and federalism are not synonymous. For starters, local governments can operate without meaningful input from local citizens; they can regulate and legislate in a manner that still ices out members of the political community. When local governments function this way, they are not facilitating co-governance.

But more importantly, co-governance borrows from federalism its ends, not its means. Decentralization and federalism have often been invoked in contexts where local communities have the largest stake. Consider the local school. Schools shape a town’s residents and have a significant impact on their lives. Accordingly, federalism suggests that those residents should have a significant say in the goings-on of the school — that is, they should have a voice in the school’s governance. Put simply, one value underpinning some theories of federalism is that governance power should be put in the hands of those most affected by a particular policy or entity. When it comes to things like schooling, folks are most impacted at the local level, so governance should take place at the local level, too. But in this regard, federalism and localization are not ends in and of themselves; they just offer one method of dispersing power among bona fide stakeholders.

What does this mean for AI? As discussed above, governance should be informed and shaped by those who will be most impacted and affected by the policies ultimately produced. In the context of AI, this is everyone — regulation must be done by the people. But not the people of a local town or a single state. AI will impact us all, so we all need a seat at the governance table. Federalism and localization are important but ultimately insufficient in this regard because AI is unlike a school — its reach is far more sprawling and cross-jurisdictional. But co-governance has the capacity to empower a much broader swath of those who will be affected by AI to chart its future.

2.  Virtues of Co-Governance. — Already, in contexts other than AI, the virtues of co-governance have been well established in the literature. This section outlines these virtues to illustrate what an effective co-governance framework can achieve — and has already achieved in other areas. Ultimately, the benefits that have made co-governance effective and desirable in other contexts are well suited to the challenges of AI, too.

(a)  Better Decisions and Outcomes. — One benefit of co-governance is that “[s]hared decision making contributes to better decisions.” That might be the case because those involved in co-governance can “draw on different expertise, knowledge and experiences, and from the enhanced legitimacy of processes that include specific communities of interest and appropriately recognise relevant rights and obligations.” Or that might be true because some co-governance frameworks include features, like “the scrutinizing assembly,” that allow citizens to “reflect on the content and consequences of the ballot options and to inform and enlighten the wider public on the matter before voting takes place.”

In the same vein, co-governance has been linked to greater “efficien[cy] and effective[ness].” One scholar has “documented how the involvement of citizens in the planning and implementation of water and sanitation projects greatly improved their effectiveness and reduced corruption in urban Brazil.” Another commentator “showed how community participation in irrigation programs in Taiwan has made service delivery much more efficient and effective.” These observations are also “consistent with [the work of a notable scholar who] demonstrated the salutary effects of the coproduction of services by street-level bureaucrats and societal actors.” Co-governance likely facilitates better outcomes in instances like these because “[c]itizens [are able to] contribute local knowledge and experience that would be prohibitively costly for outsiders to acquire.” Moreover, “[a]s the beneficiaries of the final product[,] community members can also contribute their time [at a level] that public employees should not be [expected] to match.” More effective rules are developed when the people most meaningfully affected by them are involved in their formation. All in all, co-governance — which seeks to bring all stakeholders to the table — leads to decisions that are more efficacious for those stakeholders.

(b)  More Meaningful Participation. — When the people are given avenues to govern more directly, participation in government increases. In fact, in co-governance systems, “[e]ven the poorest citizens are exceptionally willing and able to actively work with government in constructive ways once they perceive that their participation can make a difference.” Indeed, “[n]ormal citizens . . . participate at such massive levels if the policies being implemented . . . are designed with the participation of a broad range of actors and actively incorporate citizens into the process of implementation itself.” That makes sense. When people subjectively believe that their voices matter, they’ll be more inclined to speak up. And when they speak up — especially if they have been “inform[ed] and enlighten[ed]” on the topic at issue before participating — more “meaningful societal debate on the issue” ensues.

Co-governance, then, brings more people to the table. And when they arrive there, they are in a better position to debate, discuss, and decide the issues that affect them most.

(c)  Feeling Heard. — Relatedly, when people are made a part of governing, they subjectively feel more visible. They feel that their voices are heard and their ideas considered. Indeed, “the ability to express that voice at individual and collective level[s] . . . is an essential tenet of democracy.” “Institutional actors [have] recognized the contribution of community engagement to private value for the various actors of the public service ecosystem: a therapeutic value of feeling heard, feeling listened to.” In this way, co-governance doesn’t just give citizens more opportunities to speak — it makes them feel heard. Just like the virtue mentioned above — participation — this benefit of co-governance promotes the democratic values mentioned in section A of this Chapter: Public participation is increased and made more substantive.

(d)  Building Consensus. — Insofar as co-governance brings community members together to discuss (and debate) policy, it also motivates them to see each other as equal members of the polity worthy of respect. As one individual involved in a co-governance exercise noted: “There was so much ownership over the process. Even though there wasn’t consensus over the proposed projects, people supported one another. In the beginning, people were uncomfortable and scared to participate given the heightened political polarizations, but in the end, we are all still neighbors.” In this way, co-governance can come to look a lot like “‘kitchen-table’ conversations,” where community members “break down complicated and controversial policy topics with tailored framing for different audiences to build common ground and meet residents where they are.” Co-governance, then, doesn’t just promote democratic deliberation — it facilitates deliberation that leads to actual decisions being made. It helps make democracy work.

3.  Applications of Co-Governance. — Co-governance is already practiced in myriad contexts. This section describes three: participatory budgeting, rural development, and elder care. These may seem a far cry from artificial intelligence; AI is a modern technology with global implications, while the contexts explored here are traditionally areas of local concern. But they have something in common: Participatory budgeting, rural development, and elder care are all contexts in which people are understood to have a vested personal interest in the effects of regulation. That is, people will feel the effects of regulatory decisions on the ground. And in these cases, the government has acknowledged this interest by giving people a seat at the regulatory table. AI isn’t a local issue, but just as in these other contexts, people will feel the effects of regulatory decisions in their everyday lives. Thus, understanding where co-governance is already deployed makes it clear why the framework is, in fact, a good fit for regulating AI, too.

(a)  Participatory Budgeting. — One “increasingly popular model of co-governance” is something called participatory budgeting, which usually entails “local residents [having the power] to decide how city funds get allocated by voting on a number of community proposals.” Participatory budgeting puts taxpayers in the driver’s seat. They call the shots and have a say in how their own money is allocated within the community. These sorts of arrangements are now common in the United States, even in major cities like New York City. But they’re also used in other parts of the world. In fact, one study assessed the participatory budgeting system in Porto Alegre, Brazil. There, “spending decisions for over 10% of [the city’s] annual budget [were put] in the hands of the people.” In practice, that meant that “[e]very year, more than 14,000 citizens in th[e] city of 1.3 million participate[d] in neighborhood meetings as well as 16 regional and five thematic assemblies to set priorities for government investment in infrastructure and basic social services.” In Porto Alegre, citizens took their role seriously; budgeting “decisions [were] made through intense negotiation.” And after the “local legislature” signed off on their choices, the citizen “groups [continued to] evaluate the previous year’s negotiation process and monitor the implementation process of the previous year’s budget.”

By allowing “the citizens of Porto Alegre . . . inside the governmental apparatus itself,” “corrupt[ion]” decreased, the “political use of public funds” was “reduce[d],” and regulatory “capture” occurred less frequently. Citizen participation also improved. In other words, a decentralized co-governance approach to budgeting brought with it a panoply of benefits. And that makes sense. When taxpayers themselves “have a direct say on how” their own money “will be allocated in their communities,” good things happen.

(b)  Rural Development. — Local communities, especially rural ones, can feel disconnected from the federal government. Given the federal government’s bureaucratic layers and red tape, its solutions to local issues are often “one-size-fits-all” — that is, they lack the flexibility and tailoring needed to cure a local community’s unique problems. The federal government often resorts to just telling “communities how their project can fit within the federal government’s box.”

To address these concerns in the context of rural development, the United States Department of Agriculture has launched the Rural Partners Network (RPN). The program “is an all-of-government program that helps people in rural communities find resources and funding to create jobs, build infrastructure, and support long-term economic stability on their own terms.” To achieve its mission, the RPN partners with “community liaisons”: “staff in rural communities whose sole job is to listen first and work with all the local partners.” The RPN has also “established [an] interagency group of representatives from over 24 federal agencies who are tasked with listening to the communities and hearing them [speak] firsthand.” In short, the RPN has developed programs and tools to address the federal bureaucracy’s lack of understanding and attentiveness to agricultural and rural needs.

One example of the RPN in action took place in southeastern Kentucky, which had been ravaged by a “devastating flood.” The Federal Emergency Management Agency (FEMA) responded, but an RPN community liaison — a member of the community who had herself been affected by the flood — “buil[t] a bridge between FEMA and the impacted community.” Having intimate knowledge of the needs of the community, the liaison spearheaded an effort to secure grants for rebuilding destroyed homes. She worked to “increase [the] cap” normally available to those affected by natural disasters. “[H]aving [a] direct line of communication that[] [was] rooted in a community’s need[s]” redounded to the benefit of those most affected.

(c)  Elder Care. — Taking care of the elderly is hard. It’s easy to see why clunky government bureaucracies struggle to provide individualized care for older citizens. That’s likely why England embraced a system “based on co-governance and negotiated public policies.” “[R]esponsibility for providing care services was devolved to Local Authorities in the late 1960s,” such that “[h]ome care . . . consisted of nursing, guaranteed by the local NHS, and (some) personal care organized by Local Authorities (mainly for low-income earners).” Some “Local Authorities had their own in-house providers, [and] a good deal of personal care was provided by publicly funded voluntary agencies.” The system of co-governance of elder care in England has changed over time, but it gave the “voluntary sector” an opportunity to “lobby[] for better market regulation.” In other words, it allowed them to “hav[e] their civic voice heard in the public sphere.” And today, “Partnership Boards” still meet, “which allow[s] the voices of several stakeholders to be heard at least to some extent.”

* * *

In each of these contexts, co-governance has been deployed to give the people most directly impacted by regulation a seat at the table where regulatory decisions are made. In this way, co-governance eschews the concentration of power and rigidity that defines the traditional top-down approach to regulation: It diffuses power from the institutions “up top” to the people on the ground and opens the door for more flexible solutions to complex problems. It also swaps out one-off opportunities for civic participation in exchange for continued and consistent collaboration. In short, the vignettes described in this section show how co-governance can be — and has been — deployed; they offer a blueprint to imagine a comparable regulatory framework for AI that, as in these other contexts, works better for more people.

E.  Co-Governance Practices and AI

Applying co-governance practices in the context of AI isn’t a radical idea. Indeed, co-governing AI would be a continuation of what is already a pattern of collaboration in the AI regulatory field. Various industry actors have already implemented collaboration mechanisms, and discussions about the regulation of AI — open and closed — have been marked by an interest in stakeholder collaboration.

Take, for example, the Biden Administration’s approach to developing an AI safety framework, which often involved calling upon stakeholders for input. In President Biden’s October 2023 Executive Order, he “tasked the Secretary of Commerce . . . with soliciting feedback ‘from the private sector, academia, civil society, and other stakeholders through a public consultation process.’” “[T]he National Telecommunications and Information Administration (NTIA) published a public Request for Comment in February 2024 and received 332 comments in response.” It also “conducted extensive stakeholder outreach” in other forums, “including two public events gathering input from a range of policy and technology experts.” The NTIA’s final report, also required by the October 2023 Executive Order, was “based in large part on [that] feedback.” The Biden Administration’s foundational Blueprint for an AI Bill of Rights was likewise developed as a “[r]espon[se] to the experiences of the American public . . . informed by insights from researchers, technologists, advocates, journalists, and policymakers.” No matter what comes next, the future of AI regulation in the United States will have been shaped at the ground level by at least some community input.

Co-governance is an extension of this commitment to collaboration and community interest. Unlike traditional top-down approaches, co-governance ensures not only that interests communicated in the early stages of regulation make their way into final rules but also that rules remain open to reevaluation and readjustment. AI technology will not itself remain stagnant; the regulatory approach should keep in step. As the Biden Administration put it: “AI only works when it works for all of us.” Co-governance is one way to ensure that it continues to do so — that a diverse array of stakeholders is able to “make [their] voice[s] heard,” early on and ever after.

In these nascent stages of AI development, however, co-governance may also help ensure that the industry remains open to competition and innovation. That is, promoting collaboration and co-governance — especially with an eye toward supporting open-source AI development — may promote a culture of competition that encourages more diverse industry players to participate in development and innovation. As discussed in section B, resource constraints make foundation model development an impractical endeavor for many industry actors. Thus, the major and most powerful foundation models are likely to remain within the control of “a cluster of big tech firms.” But if those models are kept under lock and key as “‘black box’ systems,” “[e]veryday developers and small businesses” will rarely — if ever — have meaningful opportunities to “create new AI applications, tune safer AI models for specific tasks, train more representative AI models for diverse communities, or launch new AI ventures.” This kind of “[g]rassroots innovation may,” in other words, “become collateral damage” under the current regulatory system.

This is worrisome not only because it jeopardizes the development of better and more diverse AI tools but also because it prompts concerns about something akin to regulatory capture. Regulatory capture “typically refers to a phenomenon that occurs when a regulatory agency that is created to act in the public interest, instead advances the commercial or political concerns of special interest groups that dominate an industry or sector the agency is charged with regulating.” In the current AI ecosystem, then, fears of regulatory capture amount to fears that the few already dominant AI companies — like Microsoft, Google, and, especially, OpenAI — will “write the rules governing this technology,” which “could have a number of harms, from stifling smaller firms to introducing weak regulations.” Ultimately, this “could result in general-purpose AI policies and enforcement practices that are ineffective, unsafe, or unjust — or even no regulation at all.”

These fears rose to the surface in one account of the 2023 Senate hearing on AI, which was described as “affable” and “chumm[y].” There, the “[i]ndustry reps — primarily OpenAI CEO Sam Altman — merrily agreed on the need to regulate new AI technologies, while politicians seemed happy to hand over responsibility for drafting rules to the companies themselves,” leading “[a] number of experts and industry figures” to conclude that “we may be headed into an era of industry capture in AI.” “As Senator Dick Durbin (D-IL) put it in his opening remarks: ‘I can’t recall when we’ve had people representing large corporations or private sector entities come before us and plead with us to regulate them.’” While Altman himself dismissed concerns about capture, others — including leaders at competing AI companies — “stressed the potential threat to competition,” explaining that the kinds of regulations proffered at the hearing “invariably favour[] incumbents” and “would further concentrate power in the hands of a few [and] drastically slow down progress, fairness[, and] transparency.”

Co-governance counteracts such a regulatory culture of suppression by aligning restrictions on innovation with expressed community interests. Critically, “not all industry influence reflects regulatory capture.” And indeed, “industry participation in policy processes,” including co-governance, “is both inevitable and desirable,” so “[c]apture occurs only when corporate influence leads to regulation that unjustly prioritizes private interests over public ones.” Co-governance helps to ensure this doesn’t happen. As some scholars suggest, “[i]nformation capture can be addressed through giving non-industry stakeholders greater access to the policy process.” To this end, some have suggested “participatory processes such as notice-and-comment” or “consumer empowerment programs [that] could help enable civic participation in AI policy.” This Chapter doesn’t seek to provide an exhaustive account of what AI co-governance might look like. But it’s safe to say it could include the same mechanisms that have popped up in other contexts: citizen assemblies (commonly used in the participatory-budgeting context), governmental liaisons connected to local communities (exemplified in the RPN and elder-care case studies above), “scrutinizing assembl[ies],” and something akin to notice-and-comment.

Regardless of which of these practices are ultimately adopted, co-governance seeks to place these mechanisms within a broader and more comprehensive framework, thus assuring that participation occurs not only at “the very early stage[s]” of the policy process but also throughout the regulatory lifecycle. Perhaps it is the case that people generally want AI to be subject to regulations that deter further innovation, just as current skeptics fear will happen if today’s “tech giants” get to call the shots. But co-governance produces that decision through a process that better reflects the interests of the diverse stakeholders involved.

In this sense, co-governance is directly reflective of at least one segment of the AI industry — that is, open source. Recall that proponents of open-source AI celebrate it for its accessibility and, thus, its ability to “promote a diverse AI ecosystem.” It is at least imagined to offer a more democratic future for AI. Co-governance pulls this thread — what is possibly the best part of open source — and weaves it into a broader and more robust framework that aims to strengthen the foundations for AI development, generally. That is, co-governance seeks to make possible an iterative relationship between the technology and its positive impacts; as the technology develops, so, too, may the values it embodies become more deeply entrenched. In the context of open-source AI, this means committing to a regulatory framework that allows accessibility and transparency to flourish; the same values that underpin the technology motivate its governance. But the benefits of this approach need not be limited to one side of the open–closed spectrum. To the contrary, adopting co-governance makes possible a culture of AI that better supports the democratic values underlying open-source AI and our political community, more broadly.

Conclusion

All in all, what co-governance offers is a much-needed alternative to one-size-fits-all, top-down, centralized regulation. In the AI era, embracing this kind of institutional innovation is more important than ever. The effects of AI are predicted to be ubiquitous — to impact each and every one of us in fundamental ways. If that’s the case, then it makes sense for this technology to be governed not by only a few members of our political community but by as diverse and representative a group as possible. This group should not be composed of only legislators and those whose voices are already most familiar to our existing regulatory system: the most powerful companies and most influential experts. It should also include — and, in fact, be most responsive to — the people.

Implementing co-governance is an opportunity to achieve this end. Co-governance practices have already been pressure-tested and have shown success in other contexts where policy issues have a direct impact on people’s lives. It is an approach that is well aligned with this country’s long-held democratic values; it gives people a meaningful opportunity to have their voices heard. And — despite the global nature of AI’s projected effects — it is well suited for addressing the challenges posed by the technology. Already, the same democratic values that inspired the adoption of co-governance in other contexts have manifested in the conversation about AI, most prominently in the debate about open- versus closed-source systems. This debate is a reaffirmation of the collective commitment to democratic values of accessibility, transparency, and public participation in the AI era. Co-governance is an opportunity to make those values salient; it ensures that they — and the people for whom they matter — shape the future of AI, and not vice versa.
