Information war: “A European social network is missing”


Along with air, sea, land and space, cyberspace now ranks as one of the arenas of confrontation between the major world powers. The French start-up Bloom stands out in this field thanks to its analysis of social networks (Twitter, TikTok, Telegram, etc.), which enables it to detect the manipulation of information and the influence strategies carried out there. As proof of its expertise, it has just been selected by NATO to take part in its next major military exercise, Bold Quest, which will be held this September in the United States.

“We are going to deploy Bloom on networks of simulated social platforms, representative of the global information-warfare ecosystem,” explains Bruno Breton, its head. He also walks L’Express through the marked evolution of his work since the start of the war in Ukraine, followed by the advent of generative artificial intelligence as a powerful tool of destabilization. Just this Monday, the circulation of a fake image of an explosion at the Pentagon, the headquarters of the US Department of Defense, caused a brief dip in financial markets.

L’Express: How has the information war on social networks changed with Russia’s invasion of Ukraine?

Bruno Breton: This event helped us anticipate the threat better. It made the tracking of “weak signals” indispensable in order to “prebunk” malicious operations. Unlike “debunking”, which consists of correcting a piece of misinformation after the fact, prebunking is about pulling the rug out from under the aggressor, or at least preparing for what he is planning, by disclosing his intentions and countering him even before his disinformation spreads widely. A bit like the American authorities at the very start of the war, who immediately shared crucial intelligence about Vladimir Putin’s intentions with the general public.

Ukraine had also provided a brilliant illustration of this. Its social-network strategy, devised in the aftermath of the 2014 war and the loss of Crimea, made it possible to counter the Russian narrative very effectively in the first days of the offensive, such as the supposed desire to “denazify” its territory. Ukrainian soldiers are also very fond of TikTok. The Kremlin, for its part, has concentrated its propaganda effort on neighboring European countries, even after the shutdown of Russia Today (RT) and Sputnik: according to our analysis, the number of accounts associated with these propaganda channels on social networks doubled in six months, from 350,000 to 700,000.

Bloom has therefore been in great demand since the start of the war, to map these influence dynamics and to propose response or intervention strategies. France, for example, intends to “hyper-exist” by highlighting its culture, its values, and its vision of the world and of democracy. This offensive turn is particularly visible on the African continent, in the face of Russia-linked mercenary groups such as Wagner, whose rise we revealed three years ago. The story of the staged (“real-fake”) mass grave in Mali, immediately denounced by the French authorities and relayed by major reliable news outlets, is an example of this determination to respond extremely quickly, on the basis of the first elements collected very early on social networks. Companies are also taking an interest, because some are seeing their stock market valuation undermined by fake news and digital raids.

You say that anticipation relies on detecting “weak signals”. How do you spot them?

We use a technology called social inference. It consists of scanning the content of social networks such as Facebook, Twitter, TikTok, Telegram or even VKontakte (the Russian equivalent of Facebook) from link to link (shares, comments, likes, etc.), independently of keywords. Keywords tend to limit detection to narrow bubbles, or to miss content altogether because of “algospeak”, the coded words used to evade the moderation algorithms of the large platforms (“seggs”, used to avoid writing the word “sex”, is a notorious example, Editor’s note). Algospeak also allows communities to fly under the radar. Our method makes it possible to trace a disinformation attempt back to its sources and to bring whole ecosystems to light. We then analyze their publications, their dominant themes and the associated risks.
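Bloom’s actual pipeline is proprietary, but the “link to link” idea can be illustrated as a breadth-first walk over an interaction graph, following shares and replies upstream toward the posts that started a thread regardless of what words they contain. The function name, graph shape and toy data below are all hypothetical; this is only a sketch of the general technique, not Bloom’s implementation.

```python
from collections import deque

def trace_sources(graph, seed_posts, max_depth=3):
    """Walk an interaction graph breadth-first, starting from posts
    flagged as suspicious and following edges back toward the posts
    they shared or replied to, independent of any keywords.

    `graph` maps each post id to the ids of the posts it interacted
    with. Posts with nothing upstream (or reached at max depth) are
    returned as candidate "sources" of the campaign.
    """
    frontier = deque((p, 0) for p in seed_posts)
    seen = set(seed_posts)
    sources = set()
    while frontier:
        post, depth = frontier.popleft()
        parents = graph.get(post, [])
        if not parents or depth == max_depth:
            sources.add(post)  # a root post: nothing further upstream
            continue
        for parent in parents:
            if parent not in seen:
                seen.add(parent)
                frontier.append((parent, depth + 1))
    return sources

# Toy interaction graph: shares and comments point back upstream.
graph = {
    "share_1": ["origin"],
    "share_2": ["origin"],
    "comment_1": ["share_1"],
}
print(trace_sources(graph, {"comment_1", "share_2"}))  # {'origin'}
```

Because the traversal follows interactions rather than vocabulary, posts written in algospeak still get swept up as long as they are shared or replied to within the same ecosystem.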

Bruno Breton, head of Bloom. © Bloom

What methods are used by the propagandists of authoritarian regimes?

They test a lot, especially in the comments sections. Often with small accounts, they react to news posts they oppose and observe the results. The best-performing replies or ideas then inspire posts, images and videos intended to be relayed as widely as possible. This is what we call, in our jargon, the strategy of informational variants. At some point, after numerous tests, a piece of content fits a target community well and spreads there easily. Even then, it is never very subtle: there are recurring patterns in certain comments, sometimes coming from “troll farms”. In 50 to 60% of cases, anti-vaxxers, pro-Putin accounts and climate skeptics are the same accounts on Twitter. Finally, there are clearly identified strategies depending on the country. For Russia, the elements of destabilization follow a logic of confrontation. The Chinese tend to favor competition, proposing alternative models. Some Middle Eastern states have distinguished themselves by using bots to saturate the networks and radicalize them.
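The 50-to-60% figure is a claim about account overlap across topical communities. One simple way to compute such a figure, assuming you have already assigned accounts to communities, is the share of the smaller community that also appears in the other. The function and sample data are hypothetical illustrations, not Bloom’s methodology.

```python
def overlap_share(accounts_a, accounts_b):
    """Fraction of the smaller community's accounts that also belong
    to the other community -- a crude measure of how much two topical
    communities (e.g. anti-vax and pro-Kremlin) are the same users."""
    a, b = set(accounts_a), set(accounts_b)
    smaller = min(a, b, key=len)
    if not smaller:
        return 0.0
    return len(a & b) / len(smaller)

# Hypothetical account sets for two communities on the same platform.
anti_vax = {"acct1", "acct2", "acct3", "acct4"}
pro_kremlin = {"acct2", "acct3", "acct4", "acct5", "acct6"}
print(overlap_share(anti_vax, pro_kremlin))  # 0.75
```

A reported overlap in the 0.5-0.6 range would then mean that roughly half the accounts pushing one narrative also push the others.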

Does the recent boom in generative artificial intelligence (AI) make it possible to multiply these experiments?

Yes, all these phenomena of disinformation, manipulation and radicalization are becoming more sophisticated with AI. A single text can now feed dozens of different images. We are currently investing hundreds of thousands of euros to develop tools for detecting fake personas. In particular, we rely on the emotional analysis and the translations, not always perfect, produced by these AIs to unmask them with just a couple of questions. But the response will undeniably become harder for democracies, which by nature offer freer spaces for speech than elsewhere.

Europe, in particular, has cause for concern. Today, all the major platforms are American, Chinese, or even Russian. It is therefore very difficult to try to build a more virtuous model without any platform to promote it. I ultimately believe that Europe lacks its own social network. Meanwhile, two trends dominate at the moment: a reduction in moderation, as on Twitter, which has overhauled its account-verification system; and an increase in engagement rates with TikTok, whose video content offers far greater possibilities for manipulation than a simple photo or text, and is easy to share on other social networks. It bears repeating: these spaces currently offer the best value for money for destabilizing a country or a company. And that does not seem likely to stop anytime soon.
