When disinformation becomes child's play


From Florida to Moscow, John Mark Dougan has made a very strange career change. This former American sheriff, now under the protection of the Kremlin, has become the coordinator of a sprawling pro-Russian disinformation network, as revealed by NewsGuard, a start-up specializing in detecting unreliable sites. DCWeekly, Chicago Chronicle… 167 sites posing as media outlets in Europe and the United States, designed to polarize their citizens, delegitimize their elections and justify the Russian invasion of Ukraine, show very strong signs of connection to Dougan's network, despite his denials. It is a revealing case of the explosive impact of generative AI on this battle. "It used to take tens or hundreds of people to build such operations. With AI, a single individual can do it," warns Chine Labbé, editor-in-chief and vice-president of NewsGuard.

Such cases are multiplying. In late 2022, Meta detected a vast Russian disinformation network, Doppelgänger, which used "clone" sites imitating real media outlets. More recently, fake audio recordings, which are child's play to produce with AI, have found their way into politics.


In Slovakia, one of these montages suggested that the liberal candidate Michal Simecka wanted to rig the election and… raise the price of beer. It is impossible to measure the impact these deepfakes had, but after a close vote, Simecka lost the election. American voters in New Hampshire were surprised to receive "fake messages imitating the voice of Joe Biden and advising them not to vote in the primaries," points out Camille Grenier, executive director of the Forum on Information and Democracy.

These deepfakes can mislead voters and damage the reputation of political figures by making them appear in compromising situations, but also "exacerbate social tensions by conveying false statements that can inflame existing conflicts," warns Sonia Cissé, a partner specializing in technology and data protection at Linklaters.

“A reserve period on social networks”

Fake news is only part of the problem. The artificial virality given to certain stories, whether authentic or not, is another worrying facet, because it gives a distorted picture of public opinion and society. This phenomenon is not new. For years, disinformation actors have relied on computer programs called bots to manage armies of fake accounts and inflate trends or controversies at will. Generative AI, however, risks complicating the task of platforms trying to track down and close these accounts with unnatural behavior: by allowing bots to regularly publish original and varied content, AI makes their activity look much more like that of genuine humans.


In Europe, the majority of these disinformation operations come from Russia, and France is one of the main targets. Many measures can nevertheless protect Europeans, particularly during elections. Axel Dauchez, president of Make.org and co-author of a recent report on the subject, recommends ten measures to strengthen European democracy in the age of AI, including a form of "reserve period on social networks" (a pre-election quiet period), the creation of an AI content detector that the media could use, and codes of conduct on "artificial content" for parties, the media, but also "influencers with a significant number of followers".

Raising Europeans' awareness of these new threats is crucial. First, by informing them more about ongoing operations, something Viginum is doing more and more. Then, through "prebunking", advocated by the historian David Colon, author of Information Warfare (Tallandier): "The idea is to expose citizens to the manipulation techniques they could be subjected to before they encounter them; this strengthens psychological resilience." Over the past year, European democracies have been waking up. It was about time.

