ChatGPT: the lucrative turn of OpenAI, the laboratory that wanted to “protect humanity”


How many people believed in OpenAI that day? It is May 2019, at a recording of StrictlyVC, a talk show devoted to Silicon Valley. In front of an audience of insiders, Sam Altman, the company’s president, gets tangled up. His still little-known outfit, which wants to build the first AI as intelligent as a human, has just gone “for profit”. A turning point. But for now, OpenAI “has never generated revenue” and has “no idea how to do it”, Altman confesses. He adds: “We will ask our AI.” His listeners laugh. Was he serious?

Three and a half years later, OpenAI’s machines have not reached that level. But the company has managed to design an intelligent system able to write rudimentary yet convincing speeches, pitches and poems. Free to test since the end of 2022, ChatGPT has already attracted several million users and is expected to be integrated into Bing, Microsoft’s search engine. Its counterpart for images, Dall-E 2, has a million customers on its waiting list. Subscriptions to these tools should bring in a billion dollars in 2024, according to the organization which, in its beginnings, wanted to be… philanthropic.

In seven years of existence, the American start-up has carved out a prime position in the small world of artificial intelligence, to the point of claiming a $29 billion valuation, if the newly announced fundraising materializes. All the while cultivating ambiguity about its raison d’être, in a sector riven by serious safety questions about the biases and power of such technologies. Created as a research laboratory intended to “protect humanity” against the “misuse of AI”, the organization has gradually turned away from its initial promises: non-profit status, publication of its source code, and financial independence.

David against the Gafam

In 2015, OpenAI’s founders, Sam Altman and Elon Musk, brandished the organization’s “non-profit” status as the ethical guarantee of their crazy ambition: to rival the human brain. “As our research is free of financial obligations, we can better focus on a positive human impact”, assure the respective bosses of Y Combinator and Tesla in an introductory blog post. And they insist: in the face of the potential dangers of AI, “it is important to have a leading research institution, capable of favoring the collective interest. […]”.

This promise allows OpenAI to win the support of renowned investors such as Reid Hoffman and Peter Thiel, the respective founders of LinkedIn and PayPal. They pledge a billion dollars, a tidy sum for an organization that must compete with the digital giants from a standing start. Fifteen of the field’s most brilliant scientists, such as Ilya Sutskever, a former Googler and pioneer of these technologies, also sign on, attracted by the ambitions of a structure that does not hesitate to pay them large salaries, despite its non-profit status.

OpenAI adopts other safeguards meant to prevent the creation of dangerous AI. A department is dedicated to safety and ethics research. And the organization promises to make its work public. While Google and Facebook still reveal their source code today, “will they do so if they come close to surpassing human intelligence?”, Altman asked at OpenAI’s launch. “OpenAI responds to emerging fears about AI with the idea that if the tools developed are accessible to everyone, the risks are lower,” summarizes Yann Chevaleyre, research director in artificial intelligence at Paris Dauphine.

Safeguards… with feet of clay

In February 2019, OpenAI abruptly breaks with this policy. In a blog post, the research structure explains that it has just created a technology too dangerous to publish, called GPT-2, the previous version of the technology that allows ChatGPT to be so realistic in its use of language. The company claims to fear an avalanche of “misleading press articles” and of “content […] falsified for social networks”… yet multiplies press interviews in parallel. An exercise in lucidity, or a publicity stunt? OpenAI will end up publishing the details of this code.

A foreseeable shift? As early as April 2018, a few weeks after Elon Musk’s withdrawal, OpenAI adopts a new charter of principles in which the organization already plans to reduce the share of its source code released as a common good, because of the “safety issues” raised by its improving technology. The organization also explains that it “must mobilize substantial resources”, faced with computing requirements that, by its own estimates, double every three to four months in the sector.
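
To give a sense of scale, here is a sketch of the arithmetic, assuming the three-to-four-month doubling time cited above (the compounding calculation is ours, not OpenAI’s). A quantity that doubles every $T$ months grows by a factor of $2^{12/T}$ per year:

$$ 2^{12/4} = 8 \;\leq\; 2^{12/T} \;\leq\; 2^{12/3} = 16 \qquad \text{for } 3 \leq T \leq 4 $$

In other words, roughly an order of magnitude more computing power is needed every year, which is what makes “substantial resources” unavoidable.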

A month after the GPT-2 episode, the transition accelerates. To raise more funds, OpenAI becomes a “capped-profit” venture. At the same time, it wins a match against the world champions of Dota 2, a cooperative video game whose complex strategies are disorienting for a computer. A world first. The high point: in July 2019, Microsoft invests a billion dollars and secures priority access to OpenAI’s tools. In exchange, OpenAI gains access to Microsoft’s remote computing infrastructure. The structure avoids a buyout, but says goodbye to philanthropy.

Enough to become “the number one company”

Did OpenAI want to keep the goose that lays the golden eggs for itself? The timing is striking. “After a few years of trial and error, OpenAI started to produce some really interesting developments. At the same time, they stopped sharing their manufacturing secrets”, underlines Yann Chevaleyre. On this subject, Altman asserted, back in May 2019: “If we wanted to make money, we would already do so”. While hinting, through other executives, that OpenAI could become the world’s “number one” company and enjoy “unprecedented” margins if it gets its way.

A useful double discourse, notes Jean-Gabriel Ganascia, computer scientist and president of the CNRS ethics committee. “The leaders of OpenAI have understood that to obtain funding, and to recruit at the very highest level in the face of the Gafam, it was necessary to make sensational declarations. Developing AI to protect humanity against AI means nothing in practice; we don’t really know the reality of the OpenAI project, but it’s attractive. OpenAI frightens, and at the same time says it can protect”.

“OpenAI’s productions are not so different from those of other market players”, adds François Yvon, researcher in digital sciences at the CNRS in Saclay. “Its strength was above all to grasp the potential of text- and image-transformation tools. They could assist in the production of reports, draw insights from large databases, or write legal summaries, among other things. The technologies were there, but they were the first to imagine these applications”.

December 2022. After the success of ChatGPT, OpenAI tries to recruit with a promotional clip. According to the video, the safeguarding of humanity rests here, in these gray-brick premises in the middle of San Francisco. Among the indoor plants, in front of the large wooden bookcases, or perhaps on those soft linen sofas seen almost everywhere in Silicon Valley. Here, “powerful AI” would be made, “for the benefit of all”, “totally safe”. At the same time, Elon Musk tweets: “ChatGPT is scary good. We are not far from dangerously strong AI.” A knowing wink.

Since Microsoft’s investment, a few faces have gone missing in San Francisco. At least 14 scientists have left the company, according to the Financial Times. Some had contributed to the emergence of ChatGPT. Leading the dissenters, Dario Amodei, former head of the “Artificial Intelligence and Security” department, has created his own organization: Anthropic. A “public benefit corporation”, with special governance arrangements designed to shield it from commercial interference. And to guarantee an AI… “for the benefit of humanity”.
