OpenAI, the non-profit star… that was worth billions – L’Express


If you had to guess Sam Altman’s favorite game, it would be Towering Inferno. This popular childhood pastime involves stacking more and more wooden blocks without letting them fall over. As the head of OpenAI, Silicon Valley’s most-watched company, Altman is now playing the adult version. His goal: to see how many billions of dollars a nonprofit can absorb without collapsing.

Elon Musk, who helped create the entity, is the quickest to mock its evolution: “OpenAI should rename itself ‘ultra-closed AI for maximum profit.’” In his eyes, the more lucrative turn the structure took in 2019 amounts to “perfidy and deception of Shakespearean proportions.” He filed a lawsuit over it in February, discreetly withdrew it in June, then filed a new one in August — “much more solid,” according to him.

Some of Musk’s criticisms are excessive. “There is nothing abnormal about a non-profit structure opening a for-profit subsidiary. It can lead to great successes. If everything is well thought out, the success of the second benefits the first and everyone’s interests are aligned,” points out Sophie Chassat, a board director at several large companies and vice-president in charge of sustainability at the consulting firm Accuracy, citing the European model of the shareholder foundation.

A huge valuation

OpenAI had good reasons to open a for-profit subsidiary. AI research is expensive. Extremely expensive. Raising the funds needed to purchase the required computing power and recruit experts in the field through donations alone was wishful thinking.

What makes OpenAI’s case unique and tricky is that its subsidiary has worked a little too well. Its structure has attracted colossal investments – $13 billion from Microsoft. Its valuation of more than $80 billion could cross the $100 billion mark thanks to a new round of funding – the names of Apple and Nvidia are circulating.

Of course, OpenAI claims to have taken all the appropriate measures to protect its original mission: to develop artificial general intelligence in a way that is safe and beneficial for all humanity. The board of directors of the nonprofit parent company controls all of OpenAI’s activities, and a majority of its members must be independent. The maximum return that investors and employees can receive is capped, so that commercial interests are “balanced with the requirements of safety and sustainability rather than incentivizing a pure pursuit of maximum profit,” the OpenAI website specifies.


The situation remains murky, however. The November crisis, in which the board abruptly dismissed Altman, was quickly seen as a huge blunder on the board’s part. Faced with angry employees threatening to resign en masse and an alarmed Microsoft, the board members quickly backtracked. Several of them, including Ilya Sutskever and Helen Toner, have since left the board. And Sam Altman was restored to the throne.

“The sequence revealed governance dysfunctions,” Sophie Chassat points out. “The board of directors was not aware of strategic information: the commercial launch of ChatGPT, or the fact that Sam Altman owned the OpenAI Startup Fund, a venture capital fund.” Although Altman quickly regained his position as CEO, ownership of this fund was taken away from him a few months later. As for the OpenAI employees who defended his strategy tooth and nail, their opinion is not entirely neutral: an internal mechanism grants them a share in the profits of the for-profit subsidiary.

Tensions after the “coup” against Sam Altman

Several departures and public statements have also shown that the way OpenAI secures its products does not enjoy unanimous support internally. A few years ago, Dario and Daniela Amodei left the company to found their own start-up, Anthropic, presented as more demanding on AI safety. One of OpenAI’s co-founders, John Schulman, has since joined them. Ilya Sutskever, OpenAI’s chief scientist and a key figure at the company, also left the ship last May, a few weeks after supporting the “coup” against Altman.

At the same time, researcher Jan Leike, who co-led OpenAI’s Superalignment team, left for Anthropic with a scathing critique of his former employer. “Building machines that are smarter than humans is inherently risky. OpenAI takes on an immense responsibility on behalf of all humanity. But building flashy products has taken precedence over our safety culture in recent years,” he wrote in a post on X. In June, a dozen current and former OpenAI employees, joined by two Google DeepMind employees, denounced in an open letter the pressure to sign excessively strict confidentiality agreements:

“AI companies have only weak obligations to share information with governments, and none with civil society. There is no reason to believe they will do so voluntarily. […] Until there is effective government oversight, employees—current and former—are among the few people who can hold these companies publicly accountable for their actions. Yet extensive confidentiality agreements prevent us from voicing our concerns. Traditional whistleblower protections are inadequate because they only cover illegal activities. Yet many of the risks we are concerned about are not yet regulated.”

The burning question in OpenAI’s case is whether its hybrid structure will effectively protect its original mission for the greater good. “The cap that OpenAI has set on returns is, in theory, supposed to allow the nonprofit entity to benefit from its for-profit subsidiary. But the bar is so high (100x) that it’s questionable whether it’s even a ceiling,” says Rose Chan Loui, founding director of the Lowell Milken Center for Philanthropy and Nonprofits at UCLA School of Law.


The fact that OpenAI requires a majority of “independent” members on its board of directors also looks great on paper. But, contacted on this subject by L’Express, the company did not go into detail about what, in its eyes, could constitute a conflict of interest.

A new structure for OpenAI?

“Not owning shares in OpenAI is, for them, a sufficient guarantee of independence. But there may be indirect economic interests, for example among partners of the for-profit entity,” warns Rose Chan Loui, who, with colleagues, has published a detailed note on the subject. Elon Musk’s lawsuit also alleges that Sam Altman, who sits on the board, has interests in several companies that do business with OpenAI. “If that turns out to be true, Altman would not need to hold shares in the for-profit arm to have an interest in it generating profits,” explains the director of the Lowell Milken Center. The American consumer-protection organization Public Citizen has also raised concerns about these gray areas and has urged the California attorney general to look into the matter.

“No board is completely immune to pressure from its donors, or in OpenAI’s case, its investors,” says Rose Chan Loui. This is doubly true in a case where investments are so high. “The truth is that the for-profit subsidiary is too successful commercially and financially to remain truly subordinate to the board,” says Bruce Kogut, professor of strategy and founder of the “Business, AI and Democracy” project at Columbia Business School.

The expert predicts that OpenAI “will evolve towards a simpler governance structure and will ensure that it attracts leaders with established reputations.” A credible hypothesis. Sam Altman does, in fact, seem to be distancing himself from the “charity.” Even if a company spokesperson assures us that the non-profit entity is “central and will continue to exist,” several media outlets, including The Information and the Financial Times, have revealed that OpenAI is looking to shift to a more flexible, benefit-corporation-type model. It is a model already used by competitors such as Anthropic and Elon Musk’s xAI, and one that would put OpenAI on track for a stock market listing.

