OpenAI: its colossal fundraising does not hide the elephant in the room

Billions, as if they were falling from the sky. The forecast has not changed in two years at OpenAI. The entity co-founded by Sam Altman has just closed a colossal fundraising round of 6.6 billion dollars. A round whose size may surprise, even though its participants are the sector's "usual suspects": Thrive Capital, Microsoft, Nvidia, SoftBank and the Abu Dhabi fund MGX.

Hadn't Microsoft already invested 10 billion dollars in OpenAI less than two years ago? The valuation of the entity co-founded by Sam Altman has climbed at a staggering rate, from 20 billion dollars in October 2022 to 157 billion today. Even the SpaceX comet took three times as long to reach such an altitude. "By raising such amounts, the message OpenAI sends to its competitors is: 'you will never be able to catch up with us'," analyzes a Paris-based investor specializing in technology.

READ ALSO: Yoshua Bengio, Turing Prize: “If AI becomes smarter than us…”

While the OpenAI team has good reasons to pop the champagne, this fundraising also puts an awkward question in the spotlight. The elephant in the room that many pretend to ignore: generative AI is an expensive business. Not to say a ruinous one. OpenAI can boast of having attracted users and generated revenue in record time. More than 250 million people now use ChatGPT every week. Among them, eleven million subscribe to paid versions (10 million to ChatGPT Plus and 1 million to the "Business" versions). Thanks to this, the start-up is now generating tidy revenue. According to the New York Times, which obtained documents sent by OpenAI to potential investors, it posted 300 million dollars in revenue in August and should total 3.7 billion over 2024. OpenAI plans to generate 11.6 billion dollars starting next year.

OpenAI is “burning cash” at a maddening rate

But the figures revealed by the American daily also shed a harsh light on the cost of these generative AIs: the cost of training them before they reach the market, but also the cost of "inference" (what each request made to ChatGPT costs), perhaps the most vexing of all. These costs are clearly high, because in 2024 OpenAI is expected to post another 5 billion dollars in losses. And that figure does not account for the stock-based compensation granted to OpenAI employees, nor for several other significant expense items.

READ ALSO: OpenAI, the non-profit star… worth billions

Given the rate at which OpenAI is "burning cash", it will undoubtedly soon need to raise billions of dollars again. "Ultimately, it will be listed on the stock market, but more likely within two or three years," predicts someone who knows this ecosystem well. Its rival Anthropic, which already raised 7 billion dollars last year, is also actively looking for new investors.

The OpenAI team is betting that inference costs will eventually fall. And it has proven, in the past, that it does not lack flair. "One of the keys to their current success is having made the risky bet, risky because it is very expensive, that by massively increasing the size of the models they would become significantly better. Projections suggested as much, but it was by no means certain, until we saw the incredible performance of GPT-3," confides a French researcher specializing in artificial intelligence.

Despite OpenAI's extraordinary trajectory and the confidence it inspires among investors, a troublesome unknown remains: will the start-up's revenues one day significantly exceed its costs? OpenAI reportedly plans to raise the price of ChatGPT's monthly subscription to 22 dollars by the end of the year and to 44 dollars within five years. An aggressive price increase that users may find hard to swallow, for two reasons.

The Achilles heel of generative AI

The first? Despite the grandiose declarations of Altman, and of many others, generative AI is far from having proven its relevance in every field. For tasks where the reliability of the output is not the main criterion (creative text or visuals, for example), large language models can save precious time, provided the user knows how to phrase the request well. They also achieve amazing results in areas such as translation. But demand reliable answers and rigorous reasoning from generative AI and things get complicated. Seriously so. Because these models only ever produce sophisticated probabilistic answers.

If, to questions such as "how much is 2 + 2?", ChatGPT answers "4", it is because the vast corpora of text on which it was trained lead it to conclude that this is probably the best answer, not because it actually performed the operation. It often makes mistakes in simple calculations.

The new OpenAI o1 model unveiled by Sam Altman on September 13 certainly improves the situation, by breaking down the complex problems submitted to it into several simple steps and by evaluating the reliability of its "path of thought". But even o1 does not completely solve the problem. A little riddle we submitted to it highlights how much o1 still lacks common sense: "A kindergarten teacher asks her pupils to cut a 10×10 cm sheet into 2×10 cm strips. On average, a child in this class takes 20 seconds to cut a strip. How long, on average, will it take a child to cut their sheet entirely into strips?" While the reasoning o1 follows is not illogical (it works out the number of strips in the sheet, then multiplies by the average time to cut one strip), the AI has not "understood" that once the penultimate strip is cut, the last one is freed at the same time. The correct answer is therefore 80 seconds and not 100, as o1, and many of us, might instinctively think.
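Spelled out, the expected calculation runs as follows: a 10 cm wide sheet yields 10 ÷ 2 = 5 strips of 2×10 cm, but separating 5 strips requires only 5 - 1 = 4 cuts, hence 4 × 20 = 80 seconds; multiplying 5 strips by 20 seconds gives the tempting but wrong answer of 100 seconds.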

READ ALSO: Stuart Russell (Berkeley): “The capabilities of generative AI have been overestimated”

In a long interview with L'Express, Stuart Russell, professor at the University of California, Berkeley and author of the reference work Artificial Intelligence: A Modern Approach, confides that "merely training LLMs will not produce real AI". According to him, it will undoubtedly be necessary to resort to other approaches. A necessity also pointed out by Yann LeCun, Meta's director of AI research, who received the Turing Award alongside Yoshua Bengio and Geoffrey Hinton in 2019 for their work on deep neural networks.

The artificial intelligences of OpenAI and its rivals obtain impressive results on the tests they are given in various fields (mathematics, software development, etc.). "The problem is that their training corpora are now so large that we do not know whether these AIs are able to generalize latent skills from this data or whether they are repeating, like parrots, things they have seen go by," confides an AI researcher.

ChatGPT price will double

The second reason users may find OpenAI's price increases hard to swallow is that competition in the field is becoming fierce. Several players (Anthropic, Google, Mistral AI, etc.) are breathing down its neck. And Meta has given a boost to all those hoping to make money with AI by making its Llama model open source. It is true that Mark Zuckerberg's group has less need to sell AI to its users than to retain them by offering entertaining tools built on it.

Well aware of this, Sam Altman also asked investors who took part in his latest round not to invest in competing companies such as Anthropic or xAI, Elon Musk's start-up, the Financial Times reveals.

OpenAI has a tight race to win. But in this marathon, the company is dragging a few weights behind it. The biggest? The internal crisis the start-up is going through. In recent months, OpenAI has seen several of the people who took part in its creation leave. John Schulman, Ilya Sutskever, Andrej Karpathy… Of OpenAI's eleven founders, only Sam Altman and Wojciech Zaremba remain today. Greg Brockman has not officially left, but he has taken a sabbatical without indicating a specific return date.

READ ALSO: “The promised revolution has not yet taken place”: Generative AI, the shadow of the bubble

The circle of OpenAI co-founders is not the only one to have shrunk. Several key figures at the company have left, such as Jan Leike last May. And on September 25 came a thunderclap: Mira Murati, OpenAI's chief technology officer, Bob McGrew, director of research, and Barret Zoph, vice-president in charge of research, all three packed their bags. Of course, the AI frenzy partly fuels these departures: competing companies and investors are dangling lucrative offers to AI heavyweights to join their ranks or launch their own ventures. But that is not enough to explain such a wave of departures. Several of those leaving have also indicated that the direction taken by OpenAI no longer suits them.

The burning question behind this brain drain: does OpenAI have the talent required to carry out its projects (GPT-5, Sora, etc.)? Despite the stoic messages posted by Altman and Zaremba, one senses that things are getting complicated. The latter praises, on social media, a start-up that is "incredibly wonderful despite all its imperfections. Sam despite his flaws and mistakes has created an incredible organization". But to cheer himself up, he reaches for strange comparisons. "Their departure made me think of the ordeal of parents in the Middle Ages when six of their eight children died prematurely. The parents had to accept these painful losses and find deep joy and satisfaction in the two survivors." Quite the atmosphere.

Sam Altman's far-fetched projects

These latest departures may well have been the last straw for Apple, which had initially considered taking part in OpenAI's fundraising but ultimately backed out. And OpenAI is set to go through further internal earthquakes, as the start-up clearly intends to move away from its non-profit nature. Its goal of converting itself into a for-profit benefit corporation promises to create a legal headache over the number, value and distribution of OpenAI's shares.

The key to OpenAI's success will be maintaining a certain pragmatism. Certainly, it is by daring to spend money to build ever larger models that the company succeeded. But this logic should not be pushed to the extreme. Sam Altman's statements on this front could legitimately worry his investors. "I don't care if we burn 500 million, 5 billion or 50 billion dollars, as long as we are on the right trajectory to create more value than that for society," he declared a few months ago.

The New York Times' account of his visit to TSMC's headquarters in Taiwan is in the same vein. Altman casually floated, before the management of the semiconductor giant, the idea of investing 7,000 billion dollars to build 36 chip manufacturing plants. Statements deemed outlandish by several of the group's executives, who know the risks posed by such costly projects, the American outlet reports. From visionary company to industrial accident, the line is always a fine one. OpenAI will have to make sure it stays on the right side of it.
