One year after ChatGPT, the clan war rages – L’Express

“The psychology of AI alarmists? Elon Musk: savior syndrome, always looking for a threat from which he could protect the world. Geoffrey Hinton: far left, ultra-eccentric. Yoshua Bengio: naive idealist […] Sam Altman: his supposed transparency regarding the risks of AI is part of his commercial pitch.” Almost every big name in artificial intelligence (AI) takes a hit in this mocking post signed by Pedro Domingos, professor at the University of Washington and author of the book The Master Algorithm. He is far from the only one sparring with his peers. Since the launch of ChatGPT at the end of November 2022, the world of AI has changed, and a small clan war is brewing there. First between the “boomers” and the “doomers”, as the Anglo-Saxon press calls them. The former are not babies born after the war but the large family of optimists who see AI as a factor of progress. The latter are their counterpart: pessimists who think that AI could one day plunge humanity into oblivion.

Last spring, the “doomers” demanded a moratorium on AI. Recognized scientists in the field such as Yoshua Bengio and Stuart Russell joined this initiative launched by Elon Musk, as did Raja Chatila, professor emeritus at the Sorbonne and former director of the Institute of Intelligent Systems and Robotics (ISIR), who still favors a pause in AI. “We continue to create more powerful systems,” he notes, “but they present the same problems: when you query them, the result is not reliable. There is no sufficiently developed mechanism to limit this risk, detect hallucinations or the emergence of unexpected behaviors.” Even one of the godfathers of modern AI, Geoffrey Hinton, regrets having participated so actively in its development.

Yann LeCun facing the AI doomers

On the other side of the spectrum stands “the camp of optimists, of which Yann LeCun [editor’s note: Meta’s vice president in charge of AI] is perhaps the leading figure”, analyzes Sylvain Duranton, director of BCG. “AI will be 80% beneficial and 20% harmful. It brings digital assistants that will reduce our workload. And also problems, such as poorly controlled machines,” says Neil Mawston, associate director of the analysis firm TechInsights. The psychodrama that played out in mid-November at OpenAI exposed the squabbles between the two chapels in broad daylight. Attached to the project of developing artificial general intelligence (AGI) securely, the company’s chief scientist, Ilya Sutskever, seems to have played a key role in the ousting of the famous CEO Sam Altman, who, for his part, was more concerned with the commercial future of the company. Altman has since been reinstated, after almost all of OpenAI’s staff threatened to resign otherwise.

These debates may seem idle. “We don’t know how to define AGI. And today’s AI already poses real questions: training models on copyrighted data, bias, manipulation, false information… The fear of Terminator mostly excites white billionaires,” criticizes Gilles Moyse, a PhD in artificial intelligence and author of Will We Give Our Language to ChatGPT? The Impact of AI on Our Future (Le Robert éditions). But these tensions are part of a broader political context: the regulation of AI. In early November, leaders from around the world gathered at Bletchley Park in the United Kingdom to discuss the safety of these tools and standards to limit their risks. The debate is particularly intense in the United States and in Europe, where negotiations on the AI Act are coming to an end.

Meta, unexpected hero of open source AI

To understand what is at stake in this battle, we must look at another division in the world of AI: that between the “open source” sphere and the “closed” or “proprietary” sphere. Open source brings together those who believe that pooling their advances will produce efficient and secure technologies more quickly. Anyone can consult their discoveries, improve them, and use them to build new building blocks. The “proprietary” world maintains that we must do exactly the opposite… for the same reasons! In other words, keep developments away from the inquisitive eyes of competitors and possible malicious actors.

The geography of AI, at this level, is surprising. The open source world counts in its ranks an actor no one expected there: Meta. Historically, the giant had always favored the “closed” approach: Facebook’s algorithm is jealously guarded. Mark Zuckerberg’s group nevertheless agreed to widely share the details of Llama 2, a large language model rivaling the one ChatGPT is built on. Even if this sharing comes with some constraints, Meta’s about-face on the subject has enabled the meteoric rise of open source AI in recent months. Created in 2016 by three Frenchmen, Hugging Face, the platform for sharing AI models and training data, has become the beating heart of this ecosystem.

On the other side of the spectrum, “proprietary models like those of OpenAI, Google or Baidu remain the most powerful, given their number of parameters,” explains Yang Wang, senior analyst at Counterpoint Research. “The choice of the ‘proprietary’ approach is perfectly legitimate: training models costs a lot of money,” recalls Sylvain Duranton. It is therefore understandable that certain players wish to protect a hard-won lead. And the story could end there. “There are so many use cases that there is room for both paradigms. Each has its advantages and disadvantages,” says Laurent Daudet, CEO of LightOn, a French AI startup.

“Giving crazy people the opportunity to do dangerous things”

As the debate on the regulation of artificial intelligence heats up, some proponents of closed AI have sharply criticized the opposing camp. “The risk of open source is that it gives crazy people the ability to do dangerous things,” Geoffrey Hinton declared last May. Same story at OpenAI: “Open sourcing AI is not reasonable,” Ilya Sutskever recently stated. Yet the range of possibilities is more nuanced than these speeches suggest. Many players opt for hybrid approaches, publishing widely what can be shared safely and releasing more selectively (to researchers, for example) what presents risks. Developing AI secretly within a company does not guarantee that it presents no risk. Even less so given the influence that generative AI is called upon to exert on the information presented to us, and on our creations: text, illustration… “If open source AI is banned by regulators, only a small number of companies on the west coast of the United States and in China will control the entire digital diet of humans. What impact would this have on democracy and cultural diversity? It is rather THIS that keeps me up at night,” pointed out Yann LeCun at the end of October on the X network.

Open source was also decisive in the development of the web. “That’s where the idea of adding audio and video to browsers was born. If Opera or Mozilla had only launched it on their own products, the web we know today would be very different, much less rich and sophisticated,” confides Mitchell Baker, president of one of the pillars of the web, the Mozilla Foundation. It is also on this bustling sphere that Google built Android, its famous mobile ecosystem, and its Play Store. “The majority of progress made in AI over the last ten years is linked to open source work,” recalls Brigitte Tousignant, spokesperson for Hugging Face. ChatGPT itself owes its existence to it. Rather than a victory of one side over the other, it would be good for the two worlds to learn to coexist. Peacefully.
