Amoral Drift in AI Corporate Governance

ChatGPT’s debut in November of 2022 set off a race in Silicon Valley to develop and monetize artificial intelligence (AI). Within a few months, Microsoft invested $10 billion in OpenAI, the company behind ChatGPT. Anthropic, a competitor of OpenAI, raised similarly impressive amounts of money from companies and investors hoping to participate in the AI revolution.

Well before ChatGPT emerged, commentators warned of the risks advanced AI might pose. Observers who predict existential threats to humanity from superintelligent AI point to the difficulty of precisely controlling it. They reason that superintelligent AI might pursue a human-directed goal without balancing that goal against general human values. For example, with access to enough tools, a superintelligent AI instructed to maximize paperclip production might end up “converting . . . large chunks of the observable universe into paperclips.” Alternatively, a superintelligent AI may develop its own unexpected goals — goals that do not necessarily account for human wellbeing. The proposed solution to these types of existential AI risks is “AI alignment”: the challenging task of ensuring that the values of an AI align with human values. Critics believe AI startups are moving far faster than AI alignment research can keep pace, putting humanity at great risk.

Even if these existential risks sound far-fetched, AI certainly does present a challenge to existing legal and social frameworks. Companies have already demonstrated that AI can learn from and reflect human racial and gender biases. Both the training inputs and the creative outputs of AI raise complicated questions of intellectual property law. The current spotlight on AI also brings into focus the question of how to protect privacy in the era of Big Data, especially as AI promises to massively boost data collection. More gravely, malicious actors might use AI for terrorism, disinformation, and oppression. AI startups must confront these legal, ethical, and security issues as they advance the technology, including deciding whether and how to implement guardrails to prevent the misuse of their products.

The risks posed by AI development have revived the question of how to deal with the negative externalities of corporations. Doubtful of the traditional profit motive, AI company founders have adopted some of the most ambitious versions of “prosocial” governance mechanisms detailed in the corporate governance literature. To counterbalance the pressure to maximize profit, OpenAI and Anthropic have granted their boards outsized discretion in a manner consistent with a stakeholderist corporate governance mandate. The stakeholder-focused view pushes the board to consider, in its decisionmaking, constituencies other than shareholders, such as consumers, employees, the surrounding community, or even the environment.

These innovations are not perfect. The drama that played out at OpenAI, where powerful investor-supplier Microsoft and an irreplaceable labor force successfully reinstated Sam Altman after the nonprofit board fired him, suggests that OpenAI may already have drifted substantially from its initial commitments despite its novel structure. Whether Anthropic can hold on to its mission of safe AI development remains to be seen, but OpenAI’s form of “drift” has long been contemplated in the corporate governance and organizational business literature. This form of mission drift — termed “amoral drift” by Professors Oliver Hart and Luigi Zingales — is the predicted slow death of prosocial corporate missions as a result of market pressures and lax legal guardrails.

By weakening shareholders, have the AI companies been solving for the wrong variable? Corporate governance innovations like stakeholderist mandates respond to shareholder-led threats like lawsuits, activist campaigns, and director elections. Such orientation is likely a natural result of orthodox corporate governance theory focusing on shareholders. These corporate governance innovations also tend to focus on public or soon-to-be-public companies. But here, surrounding constituencies in the American tech environment, not traditional shareholders, may be the biggest threat to private AI companies’ missions. Given the novelty of this dynamic, insufficient attention has been paid to the maintenance of prosocial aims in private, closely held companies. More novel still, the supercharged development of the AI industry invites the question of whether prevailing understandings of private companies and of socially oriented companies can even be applied to AI at all.

This Chapter seeks to address why corporate governance tools have failed, and will continue to fail, in preventing amoral drift in companies like OpenAI and Anthropic. Despite attempts to constrain shareholders’ ability to orient the firms toward profit, OpenAI and Anthropic will likely still experience amoral drift toward profit maximization due to the unconstrained power of stakeholders. Specifically, “superstakeholders” — stakeholders given significant stakes in the startup’s future profits — drive these AI companies to maximize profit. In other words, OpenAI and Anthropic shifted influence away from profit-focused shareholders but left their missions vulnerable to pressure from profit-focused superstakeholders. The superstakeholder problem particularly troubles AI startups, which rely on scarce talent and Big Tech resources to an uncommon degree. For AI companies to preserve their prosocial mission and prevent amoral drift, they must focus their energies on all equity-compensated actors rather than just traditional stockholders.

Section A explores “amoral drift” as a launching point, explaining how AI companies may frustrate the theory’s original assumptions. Section B provides an overview of Anthropic’s and OpenAI’s corporate governance models. Section C describes the various shareholders and stakeholders playing a role in AI corporate governance. Section D assesses specific facets of OpenAI’s and Anthropic’s governance structures to suggest that such structures may not be well equipped to ensure AI safety. The last section concludes.

A.  Amoral Drift

U.S.-focused corporate law scholars have long debated the purpose of corporations and their role in reining in the externalities they impose on the rest of society. While this debate has been longstanding, efforts to make corporations directly responsible for their own externalities through modern corporate governance emerged in the late twentieth century. In the wake of highly publicized corporate misconduct, economic crisis, and legislative dysfunction, corporate governance grew to focus on the “balance of power among shareholders, boards of directors, and managers” rather than on the permissions granted through the corporate charter. Following this shift in focus, corporate governance became an attractive avenue for pursuing social change and economic growth simultaneously, and the concept of stakeholderism — calling for corporations to consider not just their shareholders, but also other groups affected by the corporation’s actions — arose. The stakeholderism movement reached a high point in 2019 when the Business Roundtable endorsed stakeholderism in its Statement on the Purpose of the Corporation, demonstrating the receptiveness of business leaders to stakeholderism at the time. Emblematic of this strand of thought (though not alone in this view) are Professors Oliver Hart and Luigi Zingales, who have argued that corporations need not and ought not focus exclusively on maximizing profit for shareholders. “Amoral drift,” the process through which market-driven preoccupation with stock price forces corporate managers to abandon social concerns, offers a theory to justify stakeholder-centric corporate governance reforms. This section provides an overview of “amoral drift” theory, which this Chapter uses as a foundation for inquiry into private AI companies.

In their influential paper Companies Should Maximize Shareholder Welfare Not Market Value, Hart and Zingales reject Milton Friedman’s well-known argument that corporations should pursue only profit. For one thing, the government cannot perfectly use regulation to force corporations to internalize all externalities. Moreover, the solution to a negative externality is not always perfectly separable from the externality-causing activity: For example, not burning coal in the first place is a more efficient way to reduce pollution than scrubbing the pollution from the atmosphere after the fact. In light of these realities, Hart and Zingales believe companies cannot simply focus on profit; they should consider “shareholder welfare” — that is, the preferences expressed by shareholders as whole individuals rather than as purely profit-maximizing owners of the company. The authors introduce “amoral drift”: the tendency of public companies to shed prosocial commitments over time because the risk of corporate takeover and incorrect perceptions about fiduciary duties lead boards to choose profit-maximizing corporate actions. The introduction of this concept builds on a lengthy strand of social enterprise scholarship expressing concern about the increasing number of nonprofits pursuing commercial activities, as well as previous corporate law scholarship rejecting a narrow profit-maximization focus but highlighting its inevitability due to market pressure.

Hart and Zingales posit that shareholders tend to be prosocial only to the extent that they “feel[] responsible for the [dirty] action in question.” As a result, if a bidder approaches a corporation with a tender offer, claiming to boost profits by making the company “dirty,” a shareholder will weigh the social damage the “dirty” bidder would cause, the price of the tender offer, and the extent to which his vote will determine the outcome. Because a single share’s voting power is negligible in public, widely held corporations, a single shareholder may not feel responsible for the outcome, leading prosocial shareholders to tender to a “dirty” bidder even if they would prefer the company to be “clean.” Boards that wish to maintain control will adopt profit-boosting, “dirty” corporate strategies so as not to be bested by “dirty” bidders. Furthermore, boards “think . . . that they have a fiduciary duty to maximize shareholder value,” so the result holds even in the absence of credible bidders.

Hart and Zingales provide potential approaches for stemming the tide of amoral drift, assuming a founder who is looking for a way to keep her company “clean.” Her options include implementing protective measures sanctioned by Delaware law, such as “clean” charter provisions, dual-class shares with unequal voting rights, and entrenched charitable foundation–style boards. All of these options involve wresting power from the shareholders and redistributing that power to the board or a controlling shareholder.

Hart and Zingales’s conception of amoral drift represents a prevailing strand of thought in socially minded corporate governance innovation: that rational, profit-focused shareholders reacting to market pressures are the primary threat to a corporation’s social mission. Stakeholderists often lament boards’ invocation of shareholder pressure as a way of shirking social commitments. This perspective, in its traditional presentation, tends to position orthodox shareholders against stakeholders, who are stereotypically the victims of corporate externalities and the beneficiaries of corporate social goals. Scholars have explored board-protective measures to ensure consideration of all stakeholders in board decisionmaking, and OpenAI and Anthropic are some of the latest firms to experiment in this area of corporate governance.

B.  Anthropic and OpenAI

In light of these well-recognized risks, two prominent AI startups — Anthropic and OpenAI — have arranged novel corporate governance structures. As their thinking went, the profit motive alone is inadequate for policing the risks AI products might pose. Consequently, these startups eschewed the traditional setup for American corporations. In the typical corporation, a board of directors elected by shareholders oversees the company with the goal of “maximiz[ing] value” for the shareholders. Anthropic and OpenAI instead devised ways of insulating their boards from shareholder pressure, on the theory that an insulated board would make safer, more socially responsible decisions. This section examines those methods both in practice and against the backdrop of academic theory.

1.  Anthropic. — Anthropic combines a rarely used form, the public benefit corporation (PBC), with a novel structure that it calls the “Long-Term Benefit Trust.” The board of a typical corporation owes fiduciary duties that run to shareholders, who attempt to enforce these duties through shareholder votes and lawsuits. In a Delaware PBC, the board must “balance[] the pecuniary interests of the stockholders, the best interests of those materially affected by the corporation’s conduct, and the specific public benefit . . . identified in [the] certificate of incorporation.” Anthropic has identified its “public benefit purpose” as “the responsible development and maintenance of advanced AI for the long-term benefit of humanity.”

Seeking to further shield the board from potential shareholder pressure, Anthropic built a mechanism into its charter that empowers safety-focused trustees. After May 24, 2027, or eight months after obtaining a total of $6 billion in investments — whichever comes first — “Class T” shareholders alone will elect three of Anthropic’s five board directors. Preferred stockholders (usually venture capitalists and other investors) and common stockholders (typically founders and employees) will elect one director each. The trust holds all of the Class T shares, meaning it will eventually control a majority of the board. It appears Anthropic cleared the $6 billion hurdle by the first half of 2024, so the trust will gain control in 2025 at the latest.

Anthropic and its lawyers describe the Long-Term Benefit Trust as a Delaware common law purpose trust. A typical common law trust is organized by a “settlor,” who appoints a trustee to manage certain property for the benefit of ascertainable beneficiaries. Though noncharitable trusts had to have ascertainable beneficiaries at common law, most states now authorize “purpose trusts” created for a declared purpose, even without specific beneficiaries. Anthropic claims the purpose of the Long-Term Benefit Trust “is the same as that of Anthropic,” that is, responsibly developing AI for the benefit of humanity. The Anthropic board appointed five initial trustees, who are largely aligned with the effective altruism movement. Every trustee serves a one-year term, and the trustees themselves elect each other. At the time of this writing, two trustees had stepped down without being replaced.

The Long-Term Benefit Trust is intended to police the Anthropic board, but who will police the Trust? In a typical common law trust, the beneficiaries are the principal parties with standing to enforce the terms of the trust and the fiduciary duties of the trustee. Because purpose trusts have no beneficiaries, the settlor must appoint someone who can enforce the terms of the trust (or, failing that, a court will appoint one). Although the Long-Term Benefit Trust agreement is not publicly available, Anthropic’s legal advisors assert that it authorizes suits “by the company and by groups of the company’s stockholders who have held a sufficient percentage of the company’s equity for a sufficient period of time.” On its face, this enforcement mechanism is curious because the Long-Term Benefit Trust is supposed to be a check on irresponsible, profit-driven development of AI. While shareholders might sue disloyal trustees who harm the company, presumably they will not sue lax trustees who permit unsafe but profitable strategies.

2.  OpenAI. — OpenAI’s original structure focused directly on mitigating profit-seeking behavior. The company began as a tax-exempt nonprofit, operating on donations. As with a for-profit corporation, the board of directors of a nonprofit corporation owes fiduciary duties of care and loyalty. A nonprofit corporation lacks shareholders, so the board “owe[s its] duties to the purposes of the charity.” According to OpenAI’s certificate of incorporation, the company’s purpose is “to provide funding for research, development and distribution of technology related to artificial intelligence. The resulting technology will benefit the public and the corporation will seek to open source technology for the public benefit when applicable.” For nonprofit corporations like OpenAI, typically “only directors and the [state] attorney general have standing to sue” to enforce fiduciary duties.

Needing more capital to fund its research, OpenAI created a complicated scheme of new entities in order to raise equity capital. The nonprofit OpenAI, Inc. remains the top-level entity, and its board of directors continues to oversee the entire organization. Several for-profit subsidiaries were created to raise money for the company by selling equity to investors. Employees moved from the nonprofit to one of these entities and received equity as well.

This intricate web of entities helps OpenAI, Inc. preserve its nonprofit, tax-exempt status. The organization’s contracts inform investors and employees of its nonprofit mission, and the website advises investors “to view any investment . . . in the spirit of a donation.” Investors, including Microsoft, have agreed to cap profits at up to one hundred times their investment. And the partnership agreement for one subsidiary LP — since converted into an LLC — “requir[ed] the partnership ‘to give priority to exempt purposes over maximizing profits for the other participants.’” Moreover, OpenAI promises that commercial and intellectual property licenses for artificial general intelligence (AGI) — the stage where AI “outperforms humans at most economically valuable work” — will not benefit investors, but rather “the [n]onprofit and the rest of humanity.”

3.  Antecedent Scholarly Reactions to OpenAI’s and Anthropic’s Tools. — While OpenAI and Anthropic are experimenting with new governance structures, elements of these structures have received scholarly treatment in the past. Nonprofit-owned (or nonprofit-controlled) firms remain largely understudied in the modern American economy, likely because there have been very few such firms since the 1970s. They are more common in many European countries but remain an undertheorized facet of corporate law. The research that has been done on these corporations has found them to be relatively successful from a profit standpoint, and they appear to be socially valuable. A major benefit suggested by scholars is foundation-owned firms’ heightened ability either to stem short-term profit motives or to preserve a commitment to the foundation’s mission. But views on the sustainability of such structures are generally mixed. On the one hand, the data suggest that foundation-owned firm structures that maintain some separation between foundation directors and for-profit management help ensure that the management is not coopted by for-profit motives. At Anthropic, the trustees — who elect directors — serve only one-year terms, so such loyalty may never have a chance to attach. On the other hand, studies have also suggested that foundation-owned firms flourish when foundation directors identify strongly with their role as “virtual owners” of the for-profit entity. That identification is unlikely to develop at Anthropic, given the one-year trustee terms and the fact that two trustees have already departed.

Scholars have also expressed mixed views on PBCs. A PBC charter allows directors to balance the interests of stakeholders and shareholders without opening themselves up to duty of loyalty claims. Furthermore, in Delaware, decisions on how best to balance stakeholder and shareholder interests are subject to the business judgment rule — a bedrock doctrine requiring that courts not second-guess business decisions made with care and without conflicts of interest.

Thus, while there has been very little public litigation against directors on purpose-related claims, as a statutory matter the public benefit corporation charter can largely shield directors from liability for business decisions about how they treat stakeholders and shareholders. The form’s efficacy in protecting stakeholders has garnered skepticism from the academic community: Given that stakeholders are granted no litigation rights and the litigation rights afforded to shareholders for these claims are weakened by substantial deference to directors, scholars doubt the ability of public benefit corporation statutes to hold directors accountable if they choose to sell out the stakeholders. Scholars also highlight the near-redundancy of PBCs: if directors choose to take stakeholders into account, they are largely protected by the business judgment rule even at an ordinary corporation.

Scholarly reactions to OpenAI’s and Anthropic’s multilayered structures are still forthcoming, but corporate intrigue waits for no man: Dramatic governance-related developments occurred at OpenAI shortly after the firm became a household name.

C.  Shareholders, Stakeholders, and Shakeups

In November 2023, OpenAI made headlines when its board fired CEO Sam Altman and rehired him a few days later after employees threatened to quit en masse and accept jobs at Microsoft. When the dust settled, most of the directors who had fired Altman were gone. In keeping with OpenAI’s commercial pivot, the old, AI safety–focused directors were replaced with directors mostly drawn from the heights of government and the tech sector. One ex–board member framed the firing in terms of the board’s duties: “Our goal in firing Sam was to strengthen OpenAI and make it more able to achieve its mission . . . . [T]he nonprofit mission — to ensure AGI benefits all of humanity — comes first.” In 2024, Elon Musk — an early funder of the OpenAI nonprofit who now happens to own a rival AI startup himself — sued Altman and OpenAI for allegedly “betray[ing]” the company’s nonprofit mission by partnering with Microsoft to monetize OpenAI’s technology.

Recently, the company has declared that it will convert its for-profit entity to a PBC, much like Anthropic, and will grant the nonprofit entity a “significant” stake. This move will give the for-profit company wide latitude in determining how best to balance its mission against its profits. Viewed broadly, OpenAI and Anthropic seemed to adopt a stakeholderist approach, driven by the fear that shareholders could derail their missions to develop AI safely. But, despite these attempts, OpenAI has undergone tumultuous changes that many have described as selling out its social mission. These developments invite the question of how to update theories of amoral drift for private, capital-intensive startups and for AI companies specifically.

This section reimagines the process of amoral drift in the context of AI companies — nascent, private capital–backed firms with substantial capital requirements. It first proposes that by giving equity to groups of critical stakeholders, OpenAI’s and Anthropic’s novel corporate governance techniques ended up creating an even more dangerous faction than shareholders: “superstakeholders” (that is, employees and suppliers with immense profit interests). It then describes the chaotic events at OpenAI in terms of this paradigm.

1.  Amoral Drift. — AI companies diverge substantially from the hypothetical company central to Hart and Zingales’s model of amoral drift. Hart and Zingales’s amoral drift thought experiment contemplates a founder’s ability to preserve their company’s prosocial mission after the company goes public. From the pre-ESG era through the present, “corporate purpose” literature has largely been preoccupied with controlling dispersed shareholder bases, a threat mainly associated with public or soon-to-be public firms. This is not without good reason: Before the recent private capital wave, listing on a public exchange was the primary way companies could gain access to large amounts of capital to invest in further growth. Thus, most discussions of growing companies were geared toward an eventual initial public offering (IPO). When scholars have described how shareholder pressure robs corporations of their social orientation, private or closely held companies are often used as theoretical foils to the public firms at the center of the discussion. As a result, the process by which private companies struggle to preserve a prosocial mission remains undertheorized.

Staying private means that AI startups like OpenAI and Anthropic have strong profit incentives that have not been realized (through an IPO, acquisition, or some other liquidity event), but they retain the structure of closely held private companies — there are no diffuse shareholder bases to consider. Thus, the major players consist of the founders, early investors, and stakeholders.

Scholars have noted that corporate governance has entered the age of the strong stakeholder. Through social media, consumers have overcome collective action problems to publicly shame companies and CEOs. Employees have engaged in strike activity at levels not seen since before the 2020 pandemic. The “strong stakeholder” dynamics are especially salient for AI companies, which have furnished employees and tech suppliers with valuable equity stakes. Thus, AI companies have seen the rise of not just the strong stakeholder but also the “superstakeholder” — a stakeholder, supercharged by equity stakes with staggering upside potential, whose interests have subsumed those of the shareholder. Employees and tech suppliers are distinguishable from traditional shareholders in that they are not just nominal owners waiting to benefit from growth; they are capable of crippling and even destroying a company because their presence is essential. OpenAI and Anthropic have attempted to tackle what they perceived to be the most probable threat to their mission — the profit motives of overzealous shareholders — by structurally minimizing the avenues for shareholders to mobilize. But in doing so, they have moved the profit-based compensation (equity) to new parties — stakeholders — and ended up shoring up defenses against the virtually nonexistent threat of traditional shareholders. The result for these AI companies is that critical stakeholders motivated by profit are not counterbalanced. Thus, unconstrained stakeholders are free to instigate amoral drift.

The realities of AI startups suggest that prosocial founders’ focus ought to be not on shareholders but on major stakeholders. However, OpenAI’s and Anthropic’s current tools to lock in their founders’ prosocial visions are structurally geared toward defending against shareholder-led amoral drift and are less powerful against stakeholder-led amoral drift. The following sections try to unravel the mystery of OpenAI’s corporate drama, describe the wide range of profit-focused actors involved with OpenAI and Anthropic, and speculate about key drivers of stakeholder-led amoral drift.

2.  What Happened at OpenAI? — Outside observers may never fully understand what happened at OpenAI when Sam Altman was abruptly fired and subsequently reinstated. But one can imagine the following: Amoral drift was initiated by stakeholders, and when the nonprofit board tried to assert its own power, it found itself weakened in relation to employees and Big Tech. Outmatched by powerful superstakeholders, the board had little choice but to acquiesce or risk the destruction of the entire enterprise — social or otherwise.

(a)  Founders. — The shuffle at OpenAI invites a sobering question: Were the OpenAI founders ever really committed to safe AI development in the first place? The narratives surrounding the firm’s inception and development suggest that there may be no easy answer. The founders at OpenAI and Anthropic demonstrate the complicated mix of prosocial and profit-focused aims animating AI company founders. Dynamics at both firms are largely aligned with what Hart and Zingales theorize: Founders are not always strictly in favor of or against social aims. OpenAI’s founders and initial funders — including Elon Musk, Sam Altman, Peter Thiel, Reid Hoffman, and Jessica Livingston — displayed at the outset an interest in developing AI without a focus on profit. OpenAI was initially formed as a research foundation, in part so that the founders could have complete control over AI development without being constrained by a duty to maximize profit. But over time, and as Musk stepped away, the founders understood they could not competitively develop the technology without substantially more capital, so they began incorporating for-profit elements into their structure. Anthropic, founded by former OpenAI employees, also expressed a commitment to safety at the time of the firm’s creation. It reaffirmed that commitment when it reorganized the structure of the company in 2023.

(b)  VCs. — Venture capitalists (VCs) are the actors that most resemble traditional shareholders. VCs, who invest early in young companies and stand to gain outsized returns, generally focus on profits and typically invest through preferred stock. Preferred stockholders have a more senior claim than common stockholders to any payouts on the sale or dissolution of the company. VC investors tend to have specific expertise to share; as a result, they often take board seats, have very close relationships with management, and can exercise some level of control over corporate decisions. While VCs can play a monitoring role, using their expertise to oversee the reasoned development of a new company, the VC model of investing broadly in the hopes of “one or two ‘home runs’” gives VCs an incentive to encourage high-risk strategies.

Prior to the creation of the for-profit entity, OpenAI relied on funding from Elon Musk and primarily encouraged other investment “in the spirit of a donation.” It therefore did not engage in traditional fundraising targeting VC investors. Prominent VC firms, including Thrive Capital and Andreessen Horowitz, have instead invested in OpenAI by purchasing shares owned by employees.

(c)  Employees. — When OpenAI’s nonprofit board fired Sam Altman, employees expressed surprise and threatened to resign in droves. Employees are natural casualties of a corporation’s exclusive focus on profit because suppressing wages helps lower expenses and boost profits. But AI company employees do not fit this mold. Startups, including AI companies, often compensate employees with potentially very valuable equity. To provide a striking example: In February 2024, OpenAI closed a sale that allowed employees to sell their equity to Thrive Capital. The pricing of the equity produced a company valuation of $80 billion, up from $29 billion the previous year, meaning that the value of employees’ equity had nearly tripled since the previous sale. As a point of comparison, the S&P 500 produced a total three-year return of 7.3% with a standard deviation of 17.4% as of January 2, 2025. The ability of equity to turn employees into profit-focused capitalists has been widely studied, but AI companies are unique in that the risk of depressed wages and layoffs is not a countervailing consideration for AI developers. OpenAI employees who threatened to leave would not be out in the job market for long; the market for skilled AI developers is booming.

Despite being compensated with equity that could make them millionaires overnight, OpenAI employees have historically had relatively few opportunities to cash out. Recently, OpenAI has signaled a move toward offering more frequent equity sale opportunities in an attempt to appease current and former employees. This shift reflects a general sense of anxiety surrounding employee equity compensation. It is not difficult to imagine the anxieties that would arise among OpenAI employees upon learning of Altman’s removal, especially given that external investors were pulling out of employee liquidity transactions and considering revaluing the company’s equity at zero. Employees, as profit-driven superstakeholders, were able to weaponize their immense leverage against the nonprofit board to bring back Altman.

(d)  Large Tech Companies. — Microsoft played a key role in the chaos after the OpenAI board fired Sam Altman, “assur[ing] . . . positions for all OpenAI employees” and “[p]laying a central role in negotiations” to reinstate Altman as CEO. Microsoft exemplifies another hybrid stakeholder: the large tech company. These companies play many roles. As investors, Big Tech firms look to benefit from growth in firm value. The AI market has been incredibly competitive, and some large companies have decided to coopt burgeoning startups rather than focus exclusively on developing an AI arm in-house. Many see Microsoft as the de facto owner of OpenAI, and Google and Amazon have both poured staggering amounts of capital into Anthropic. The markets have been receptive to Big Tech’s cooptation of AI startups. Samsung, by contrast, which failed either to develop robust AI capability in-house or to partner with an AI startup, declined in value by over $120 billion due to investor concerns that it was losing the AI race. Microsoft, too, suffered a “notable loss” in the value of its stock after news of Altman’s firing initially broke. As such, Big Tech’s own stock prices and market positions are still tied to their presence in the AI sector and their relationships with AI companies. These large tech companies also act as suppliers, licensors, and business partners. For example, a substantial portion of Microsoft’s multibillion-dollar investment in OpenAI is believed to consist of cloud computing credits. AI requires large amounts of “compute,” fostering a symbiotic relationship between AI startups and Big Tech. In fact, Microsoft’s services give it so much leverage that Altman once told an interviewer that “if Microsoft were to cut [OpenAI] off from its servers, [OpenAI’s] work would be effectively paralyzed.” Thus, employees and Big Tech working in concert threatened the very existence of OpenAI.

3.  Superstakeholder-Led Amoral Drift. — While this Chapter can only speculate, it is possible that OpenAI’s nonprofit board miscalculated, or misread, the power of the stakeholders. OpenAI’s nonprofit board had no shareholders and no fiduciary duty to boost profits. Furthermore, the board was quite far removed from the day-to-day experience of employees, as well as from the for-profit entity’s reliance on Microsoft for investment and compute. It is possible that, absent ties to a traditional shareholder base, the nonprofit board could not accurately gauge how employees and Microsoft would react to firing Sam Altman. The current chair of OpenAI’s nonprofit board, Bret Taylor, has made remarks consistent with this dynamic, suggesting that a disconnect existed even prior to OpenAI’s board shuffle: After describing his role as one of “governance” rather than “day-to-day operations,” he contrasted OpenAI’s aim of “building artificial general intelligence” with the aims of his other company, Sierra, which he described as “creating a product for enterprises.” Although Taylor may view his role as focused on governing research into AGI rather than overseeing a commercial, product-oriented enterprise, OpenAI’s equity-compensated employees and investors, as well as its customers, may not feel the same.

If this process of amoral drift did occur, it is unlikely that organizing as a PBC would have prevented it. Additionally, the nonprofit-controlled structure likely exacerbated the disjuncture between OpenAI’s nonprofit board and its stakeholders. AI companies may need to look elsewhere for ways to preserve their social missions in the face of superstakeholders.

D.  Amoral Drift: Inevitable or Avoidable?

As the previous sections have discussed, OpenAI’s novel corporate governance measures incompletely shielded the board’s decisionmaking from profit-oriented superstakeholders. These employees and suppliers (Microsoft) had appetites for risk similar to those of shareholders because their compensation incorporated a stake in future profits. But the OpenAI employees and Microsoft have far more leverage over the company than typical shareholders do because they also provide scarce, mission-critical resources: in the employees’ case, highly skilled labor, and in Microsoft’s case, compute.

OpenAI compensated its employees and Microsoft with equity out of necessity. Providers of scarce resources can demand higher compensation. But startups generally do not make enough money to afford to pay so much compensation in cash. While loans are an option for more mature companies, banks are generally unwilling to advance funds to startups, whose uncertain business prospects imperil timely repayment. As a result, startups typically compensate employees with equity — that is, a piece of the (hopefully gigantic) profits they will make in the future — because startups have little else to give. Similarly, startups sometimes compensate service providers such as lawyers — and in OpenAI’s case, Microsoft — with equity as a form of deferred compensation in place of or in addition to fees.

This phenomenon of equity compensation complicates attempts to use corporate governance to minimize externalities. Employees and other stakeholders with equity compensation will want the company to make more profitable (but potentially riskier) choices. Conversely, entirely salaried employees and supplier-creditors will desire stabler, less risky (but potentially less profitable) choices. After all, a stable company will be a stable employer and a reliable partner for suppliers. Riskier strategies might lead to losses, causing employees to be laid off and suppliers to be left unpaid. With equity compensation, however, employees and suppliers receive huge potential upside that compensates for the downside risk of losses or bankruptcy. If startups have little else of value but equity, how can startups like OpenAI adequately compensate critical stakeholders without creating misaligned superstakeholders?

One potential answer is equity compensation linked to mitigation of AI risk. Corporations in recent years have tied their executives’ pay to measures of ESG performance. For example, CEO compensation has been tied to improving employee diversity and reducing carbon emissions, among other goals. AI startups could similarly attempt to align employees’ incentives with the founder’s prosocial goals through strategic grants of equity. Performance share units (PSUs) are commonly used instruments that typically “deliver[] a variable number of shares at the end of a three-year performance period.” With PSU grants, employees and executives would receive more shares at the end of the performance period if the company achieves its AI safety goals and avoids AI accidents, or fewer shares if AI threats come to pass.
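To illustrate the mechanics, the following minimal sketch shows how an AI safety–linked PSU grant might convert into shares at the end of a performance period. The metric names, weights, and payout bounds are hypothetical assumptions chosen for illustration only; they are not drawn from any actual OpenAI or Anthropic compensation plan.

```python
# Illustrative sketch only: how a PSU grant tied to assumed AI safety metrics
# might convert into vested shares. Metric names, weights, and the 0%-200%
# payout range are hypothetical assumptions, not any company's actual plan.

def psu_shares_vested(target_shares: int, safety_scores: dict[str, float],
                      weights: dict[str, float],
                      floor: float = 0.0, cap: float = 2.0) -> int:
    """Convert a target PSU grant into vested shares.

    safety_scores: achievement per metric (1.0 = goal fully met, 0.0 = missed,
    values above 1.0 = outperformance). weights: relative importance of each
    metric. The weighted score is clamped between a floor and a cap, mirroring
    the payout bands commonly used in public-company PSU plans.
    """
    total_weight = sum(weights.values())
    weighted = sum(weights[m] * safety_scores[m] for m in weights) / total_weight
    multiplier = max(floor, min(cap, weighted))
    return round(target_shares * multiplier)


# Hypothetical example: a 1,000-share target grant scored on three assumed
# safety metrics by an independent body (for instance, a trust).
scores = {"red_team_findings_resolved": 1.2,   # outperformed
          "incident_free_deployments": 1.0,    # met
          "external_audit_rating": 0.5}        # partially met
weights = {"red_team_findings_resolved": 0.3,
           "incident_free_deployments": 0.5,
           "external_audit_rating": 0.2}

print(psu_shares_vested(1000, scores, weights))  # -> 960 shares vest
```

In this stylized example, partially missing one assumed metric trims the payout from 1,000 shares to 960; a serious safety failure scored near zero across the board would cut it far more sharply.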

However, ESG-linked compensation has several shortcomings that could reduce its effectiveness for AI companies. First, AI safety metrics may be hard to measure, so the entity charged with measuring them will have outsized influence over employee pay. Typically, the board or the board’s compensation committee handles the specifics of executive pay. The boards of Anthropic and (at least initially) OpenAI are structurally geared toward AI safety. So, in theory, these boards could plausibly be trusted to impartially calculate AI safety–linked compensation. However, if the board might drift from its original safety-centric mission (as occurred at OpenAI), it would not be wise to vest in the board discretion over easily manipulable, compensation-linked AI safety metrics. An external referee in the mold of Anthropic’s Long-Term Benefit Trust is likely the most trustworthy entity for ensuring that compensation is aligned with prosocial goals.

Second, it may be difficult to square the time horizon of equity compensation with the time horizon of AI risk. On one hand, some manifestations of AI risk are more or less immediately perceptible: for example, discoveries of AI bias or intellectual property violations. In these cases, it should be relatively easy to adjust compensation to account for internal failures. But some hazards may only surface years down the road. If plausibly dangerous superintelligent AI is decades away, a researcher working on pieces of the puzzle now will not feel constrained by conditions on equity that vests in three years. Extending the vesting period to significantly longer than three years is an infeasible solution because long-delayed compensation would be unattractive to prospective employees.

Creative debt instruments offer another potential answer for compensating stakeholders. Scholars have proposed “corporate social responsibility bonds” (CSR bonds), which do not require repayment if prosocial goals are met, and “flexible low yield paper” (FLY paper), which is low-cost debt that converts into equity if founders abandon the prosocial mission. These instruments may prove useful for prosocial AI startups because the startups can use the capital to pay non-prosocial stakeholders in cash rather than equity, preventing the creation of superstakeholders. Furthermore, shareholders and profit-motivated stakeholders will have to think twice before pressuring the board to drift from the prosocial mission because doing so will affect their bottom line: CSR bonds will reduce the company’s profits if they have to be repaid, and FLY paper will dilute the existing equityholders if it converts to equity.
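To make the payoff logic of these proposed instruments concrete, the stylized sketch below models, with purely hypothetical figures, how a CSR bond’s contingent repayment and FLY paper’s conversion-triggered dilution would bear on a company that abandons its mission. Neither OpenAI nor Anthropic is known to have issued either instrument.

```python
# Stylized sketch of the payoff logic the literature proposes, using
# hypothetical numbers. CSR bond: principal is forgiven if the prosocial goal
# is met, so abandoning the mission creates a repayment obligation that cuts
# into profits. FLY paper: cheap debt that converts into equity if founders
# abandon the mission, diluting existing holders.

def csr_bond_repayment(principal: float, goal_met: bool) -> float:
    """Amount the company must repay at maturity."""
    return 0.0 if goal_met else principal


def fly_paper_dilution(debt_face_value: float, share_price: float,
                       shares_outstanding: float,
                       mission_abandoned: bool) -> float:
    """Fraction of the company FLY paper holders own after (non-)conversion."""
    if not mission_abandoned:
        return 0.0  # remains ordinary low-yield debt
    new_shares = debt_face_value / share_price
    return new_shares / (shares_outstanding + new_shares)


# Hypothetical illustration: a $500M CSR bond and $500M of FLY paper.
print(csr_bond_repayment(500e6, goal_met=False))           # 500000000.0 owed
print(fly_paper_dilution(500e6, share_price=50.0,
                         shares_outstanding=100e6,
                         mission_abandoned=True))           # ~0.091 (about 9% dilution)
```

Under these assumed figures, drifting from the mission costs existing equityholders twice: the company owes $500 million it would otherwise have kept, and the converting FLY paper hands roughly nine percent of the firm to the new debtholders-turned-shareholders.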

The main obstacle to issuing these creative types of debt is the limited supply of prosocial capital. AI startups cannot effectively fund themselves with this debt if nobody wants to buy it. Private, closely held companies like startups cannot tap public markets to access the capital of the ordinary, prosocial people contemplated by Hart and Zingales. Startups rely on wealthy individuals and institutional investors for capital, only some of whom will be able and willing to invest prosocially. Indeed, OpenAI has justified its transition to a for-profit structure by pointing to the inadequacy of donations and capped profits for meeting the enormous costs of developing AGI.

Ultimately, the promise of any corporate governance–oriented solution to AI risk is bounded by its reliance on prosocial corporate actors. Employees skeptical of the warnings about AI may choose to work at “dirty” AI companies with higher, less rule-bound compensation. Investors who disbelieve the so-called “AI doomers” may offer their capital to “dirty” startups offering higher returns instead of clean ones. If citizens think AI does in fact pose serious threats to society, they should not leave it to private ordering to solve the problem.

Conclusion

This Chapter has reimagined “amoral drift” to fit the unique landscape of AI companies: private, closely held companies with founder staying power — all elements that Hart and Zingales did not have in mind. And yet analysis of these elements suggests that amoral drift may still be inevitable despite efforts to prevent it. Ultimately, by solving for shareholder pressure instead of stakeholder pressure, AI companies may have solved for the wrong variable. Future prosocial innovation in AI corporate governance must thread the needle between raising capital and reducing the influence of profit-motivated actors. Despite the industry’s admirable willingness to innovate and experiment, recent experiments do not appear to have met the challenge. Whether AI will prove as risky as its critics predict remains an open question. But the attempts to address these concerns through corporate governance have revealed the multifaceted nature of shareholders and stakeholders, confounding efforts to solve amoral drift.
