Here comes the “Terminator” scarecrow again

It is a strange form of procrastination that artificial intelligence has given rise to among politicians in 2023. Instead of focusing on the challenges that generative AI poses now, they seem to prefer the risks it might entail, perhaps, in the distant future, if it manages to acquire certain futuristic abilities. A strange order of priorities, found on the program of the first global summit on the risks of AI, organized by British Prime Minister Rishi Sunak at Bletchley Park, to which the elite of tech and politics are flocking: from the pioneers of modern AI Yoshua Bengio and Geoffrey Hinton to the companies OpenAI and Meta, along with Elon Musk, Ursula von der Leyen and Kamala Harris.

One of the key topics of this summit, held from November 1 to 2, will in fact be the hypothetical existential threat that an AI system beyond human control could pose. AI professionals are used to these catastrophic speeches: for years they have not been able to take a step forward without the “Terminator” scarecrow being waved in front of them. But this sea serpent is rearing its head with particular vigor this year, for two reasons.

The first is the powerful mirage created by AIs that seem to have mastered the art of conversation. Human beings see language as what distinguishes them from the rest of the living world. That tools like ChatGPT seem to handle it with ease gives them the feeling that these tools possess the same kind of awareness. Yet the way generative AI works is radically different from the way we do: it rests on the “digestion” of immense databases of heterogeneous texts and on the statistical learning of the patterns that link certain words to others.

“These systems actually represent the probability that one word follows another […] What makes them particularly interesting is that they don’t always opt for the most likely word; there are random variables that make their productions richer. So there’s nothing magical about it, it’s pure mathematics,” Ivana Bartoletti, founder of the Women Leading in AI Network, head of privacy and data protection at Wipro and visiting cybersecurity researcher at Virginia Tech, recently recalled in the columns of L’Express.
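To see just how unmagical this is, here is a minimal, illustrative sketch in Python of the sampling step Bartoletti describes. The word list, the scores and the `temperature` parameter are invented for the example and are not taken from any real model; real systems work over tens of thousands of tokens, but the principle is the same.

```python
import math
import random

def sample_next_word(scores, temperature=0.8):
    """Pick the next word from a model's raw scores.

    As temperature approaches 0 this almost always returns the most
    likely word; higher temperatures give rarer words a real chance,
    which is the "random variable" that makes the output feel richer.
    """
    # Softmax with temperature: turn raw scores into probabilities.
    scaled = [s / temperature for s in scores.values()]
    max_s = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - max_s) for s in scaled]
    total = sum(exps)
    words = list(scores.keys())
    probs = [e / total for e in exps]
    # Draw one word at random, weighted by its probability.
    return random.choices(words, weights=probs, k=1)[0]

# Toy scores a model might assign after the prompt "The cat sat on the".
scores = {"mat": 3.1, "sofa": 2.4, "keyboard": 1.9, "moon": 0.2}
print(sample_next_word(scores))         # usually "mat", but not always
print(sample_next_word(scores, 0.01))   # near-greedy: almost always "mat"
```

Lower the temperature and the choice becomes nearly deterministic; raise it and less likely words slip through, giving each generation its variety.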

This is both more prosaic and more dizzying than expected: there is no need to reproduce the exact human thought process in order to surpass humans at certain specific tasks. That, of course, poses new challenges. The invasion of Ukraine, the terrorist attack by Hamas in Israel… contemporary conflicts already show it: the ability to churn out streams of ultra-realistic fake images makes the fight against disinformation more complex than ever. Trained on poorly balanced databases, AIs can also make mistakes or amplify the stereotypes those databases contain. In image generation, this takes the form of pictures that reproduce prejudices, where doctors are almost always men and criminals people of color; in recruitment or banking, it takes the form of doors quietly closed in people’s faces.

Politicians, however, prefer to scare each other with much more distant problems. It must be said that the speeches of AI companies such as OpenAI have added to the confusion. Some are indeed surprisingly vocal about the risks posed by their own products. Puzzling at first glance, but in reality an effective way to attract investors’ attention and secure a good seat at the table in negotiations over regulation. Interviewed by the Australian Financial Review, Andrew Ng, one of the world’s leading experts in the field, who has worked at Google and Baidu, believes that if several AI start-ups are crying wolf, it is to shut out the competition by pushing for regulations that would complicate the emergence of new players, in particular those from the open-source sphere, who, by choosing to pool their findings, are progressing at high speed.

The fantasy of an AI as intelligent as humans that frightens so many people is, however, far from upon us. “Discussing the existential risk of AI is very premature until we have a design for a system that can even rival a cat in terms of learning capabilities, which we don’t have at the moment,” Yann LeCun, head of AI at Meta and a pioneer in the field, explained in mid-October in the Financial Times. According to him, thinking now about how to regulate these hypothetical super-powerful AIs would be like trying to regulate the airline industry in 1925, when jet planes had not even been invented. Anticipate too much, and you run the risk of tackling the wrong problem.
