It will be OpenAI with him, or nothing at all. Almost all of the 700 employees of the firm behind ChatGPT have signed a letter asking the current board of directors to resign, and a new board to reinstate Sam Altman as CEO — otherwise they too will leave. Fired, almost rehired, then finally picked up by Microsoft, the 38-year-old entrepreneur is therefore once again expected to return to his former position. Is this the end of the soap opera? The reasons to believe in a happy ending may lie in one of the letter’s signatures: that of Ilya Sutskever, OpenAI’s chief scientist and a member of the board of directors — none other than the leader of the revolt which, according to several American media outlets, led to Altman’s ouster. “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company,” he wrote on X (formerly Twitter).
It is hard to doubt Sutskever’s sincerity. The deep learning specialist co-founded OpenAI with Sam Altman and several other figures in 2015. The differences between the two men, however, have never been a secret. Altman is a successful entrepreneur of the kind America produces by the dozen every year: a small app founded at just 19 and sold for a handsome sum a few years later opened the doors of Silicon Valley wide for him. He was becoming a fixture of the start-up incubator Y Combinator around the time Sutskever was already revolutionizing AI research.
The Russian-born researcher is notably behind AlexNet, a deep neural network created in 2012 that dramatically improved automatic image recognition. The research paper, co-written with Geoffrey Hinton — one of the pioneers of AI, later at Google — and Alex Krizhevsky, caused a sensation, in part because of its use of GPUs, chips until then used for graphics processing in video games (and which Nvidia now sells at a premium). Sutskever then helped create AlphaGo, an AI that crushed the best Go players in the world. It was another decisive project in the history of this technology — one that DeepMind co-founder Mustafa Suleyman, in his recent book The Coming Wave, described as a “Sputnik moment” for China: the moment the country understood it had to invest massively to catch up in the field.
A leader in modern AI, Ilya Sutskever has long been thinking about AGI — the prospect of an artificial general intelligence capable of carrying out any human task. Just like Sam Altman. This shared vision surely explains why Sutskever has so far always stood by OpenAI. Even when the organization, departing from its non-profit origins, became a company valued at nearly 90 billion dollars. The departure of Dario and Daniela Amodei in 2021 to found Anthropic, a competing company more focused on safety, did not make him change course. Nor did the release of ChatGPT in November 2022. But the researcher’s fears about AI safety have been gaining public momentum since the release of GPT-4. “Humans can lie, hide their intentions, and do it for years. Why not AGI? It can be difficult to detect,” he noted in a tweet on June 23. Shortly afterwards, he published a blog post on the notion of “superintelligence,” which could arrive “this decade.” It is described there as “the most impactful technology humanity has ever invented,” but one that could also “be very dangerous and lead to the disempowerment of humanity or even human extinction.” Sutskever thus secured 20% of OpenAI’s computing power to try to build, in response, “superalignment”: a system capable of “reliably supervising AI systems much smarter than us.” Perhaps a first, internal fault line between him and Sam Altman, whose priority was running his product, ChatGPT, for commercial purposes.
Ilya Sutskever’s torments are part of the burning debate over the safety of artificial intelligence taking place a year after the release of ChatGPT. “Are we going too fast?”; “Can these AIs benefit all of humanity?”; “Are they dangerous?” So many questions currently divide the community. One camp places its hopes in open-source models and the sharing of knowledge as a way to keep artificial intelligence under better control. In a TED talk published Monday, November 20, Ilya Sutskever also pointed to this approach: “We expect our competitors to share technical information so that AIs are safe,” he urged, without however uttering the words “open source.” And vice versa.
The scientist’s decision, together with the other board members — Adam D’Angelo, Tasha McCauley and Helen Toner, none of them founders — to announce Altman’s departure still holds an element of mystery. But it appears to have come after the divide between business and science had widened a little further in recent days. At DevDay, OpenAI’s first developer conference, Altman had just launched GPT-4 Turbo, a more powerful version of his large language model, as well as a “store” for custom GPTs. Sutskever’s regrets, however, suggest that he had underestimated one risk: the collapse of his own firm.