The Internet was to be the agora of the 21st century, a universal space for sharing knowledge where every voice could express itself freely. This utopia of pioneers like Tim Berners-Lee has been a spectacular success, judging by the numbers: today, 5 billion users spend nearly two and a half hours on social networks every day. Yet this initial promise now faces a massive invasion of “bots” (software agents) that threatens the very authenticity of online exchanges.

The phenomenon has taken on a dizzying scale. The collapse in the cost of running inference on artificial intelligence models has democratized the creation of fake accounts on an industrial scale. Running a sophisticated bot capable of sending a million tweets now costs only $5 to $10 per month. And even though X caps each account at 2,400 tweets per day, it is easy to create multiple accounts.
A disinformation campaign can therefore be deployed for less than $10,000 per month, a paltry sum for state actors or large influence groups. According to Imperva's 2024 Bad Bot Report, 32% of social network traffic is now generated by bots, a share that has risen every year for the past five years. Remember that 0.1% of user accounts, whether held by a human or not, are behind 80% of false-news shares, creating a devastating amplification effect.
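To make the scale of that arithmetic concrete, here is a back-of-the-envelope sketch using only the figures quoted above; the 30-day month and the use of the upper end of the $5-$10 range are illustrative assumptions, not reported facts.

```python
# Back-of-the-envelope estimate from the figures quoted in the article.
TWEETS_PER_PIPELINE = 1_000_000   # one "sophisticated bot" per month
COST_PER_PIPELINE_USD = 10        # upper bound of the $5-$10 range
X_DAILY_CAP = 2_400               # X's per-account posting limit
DAYS_PER_MONTH = 30               # assumption

# The daily cap forces output to be spread across several accounts.
per_account_monthly = X_DAILY_CAP * DAYS_PER_MONTH             # 72,000 tweets
accounts_needed = -(-TWEETS_PER_PIPELINE // per_account_monthly)  # ceil -> 14

budget = 10_000
pipelines = budget // COST_PER_PIPELINE_USD                    # 1,000

print(f"Accounts per million-tweet pipeline: {accounts_needed}")
print(f"Pipelines a ${budget}/month budget buys: {pipelines}")
print(f"Monthly tweet capacity: {pipelines * TWEETS_PER_PIPELINE:,}")
```

Under these assumptions, a $10,000 monthly budget buys roughly a billion tweets of capacity, spread across a few thousand accounts, which is why the rate limit alone is no real obstacle.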
Ticket scalping, aided by bots
The growing sophistication of bots makes their detection increasingly complex: 61% of malicious bots are now classified as “evasive”, that is, able to circumvent traditional detection systems by imitating human behavior; 45% impersonate mobile users, and 26% use residential IP addresses to hide their true nature. This technological escalation is also widening the gap between actors with the means to deploy sophisticated bots and those who rely on more basic tools, creating a new form of digital inequality.
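The evasion techniques in the report map directly onto the signals defenders check. As a toy illustration only, with field names, thresholds and scoring invented for this sketch rather than taken from any real detection product, a rule-based classifier might look like this:

```python
from dataclasses import dataclass
from statistics import pstdev

@dataclass
class SessionFeatures:
    inter_request_secs: list[float]  # delays between successive requests
    claims_mobile_ua: bool           # User-Agent string says "mobile"
    has_touch_events: bool           # did the client emit touch input?
    ip_is_residential: bool          # residential proxy vs. datacenter IP
    ip_reputation_score: float       # 0.0 (clean) .. 1.0 (known abusive)

def bot_suspicion_score(s: SessionFeatures) -> float:
    """Toy heuristic score in [0, 1]; all thresholds are illustrative."""
    score = 0.0
    # Humans are irregular; machine-perfect request timing is a classic tell.
    if pstdev(s.inter_request_secs) < 0.05:
        score += 0.4
    # 45% of bad bots impersonate mobile users: a mobile user agent
    # that never produces touch events is inconsistent.
    if s.claims_mobile_ua and not s.has_touch_events:
        score += 0.3
    # Residential IPs defeat naive datacenter blocklists, so defenders
    # fall back on IP reputation feeds.
    if s.ip_is_residential and s.ip_reputation_score > 0.7:
        score += 0.3
    return min(score, 1.0)
```

“Evasive” bots defeat exactly these rules, by randomizing delays and faking touch input, which is why detection is shifting toward machine-learned models over many weak signals rather than fixed thresholds.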
Platforms must rethink their defense mechanisms to counter this growing threat. Bots are now attacking unexpected targets: reservations at high-end restaurants, administrative appointments, concert tickets, and so on. As soon as an online service combines high demand with limited availability, bots move in. Some New York restaurants are seeing their free reservations resold for up to $340 on third-party platforms.
Faced with this artificial surge, a radical approach is emerging: creating social networks populated exclusively by bots. This is the audacious bet of SocialAI, an application designed by 28-year-old Michael Sayman, who has worked at Facebook, Google and Roblox. The principle? Each user has their own private social network run by chatbots that they can personalize: supporters, critics, “brutally honest” advisers, even trolls. While the idea may seem dystopian, it answers a need to escape the growing toxicity of traditional networks. The platform, built on the OpenAI API, is already attracting leading investors, proof that the market believes in this vision of digital sociability mediated by artificial intelligence.
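SocialAI's internals are not public; the sketch below only illustrates how such persona-driven replies could be produced with the OpenAI API the article mentions. The persona prompts and the model name are assumptions.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative persona prompts; SocialAI's real ones are not public.
PERSONAS = {
    "supporter": "You are an enthusiastic fan. Reply warmly and briefly.",
    "critic": "You are a sharp but fair critic. Point out one weakness.",
    "troll": "You are a mildly contrarian troll. Stay playful, never abusive.",
}

def replies_to_post(post: str) -> dict[str, str]:
    """Generate one reply per configured persona for a user's post."""
    out = {}
    for name, system_prompt in PERSONAS.items():
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption; any chat model would do
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": post},
            ],
            max_tokens=80,
        )
        out[name] = resp.choices[0].message.content
    return out

print(replies_to_post("Just shipped my first app!"))
```

The design point is that each “follower” is just a system prompt, which is what makes the network trivially personalizable, and trivially cheap to run.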
Pseudonymity against bots
To stem the proliferation of malicious bots while preserving freedom of expression, a middle path is needed: pseudonymity, not to be confused with total anonymity. It allows users to express themselves under an assumed name while remaining identifiable by the authorities in the event of an offense. This approach, once at the heart of the French digital bill before being dropped from its final version, aims to make Internet users accountable without exposing them publicly. By requiring identification, it significantly complicates the mass creation of fake automated accounts while protecting those who need to speak without revealing their identity: whistleblowers, activists and vulnerable people. Personally, I am convinced it will become the norm, because it is part of a broader movement toward transparency in our societies.
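One way to see why pseudonymity blocks industrial account creation is an identity-escrow model. The sketch below is purely illustrative, not any real or proposed scheme: a verifier checks an identity once, issues a public pseudonym, and alone keeps the mapping that authorities could compel in case of an offense.

```python
import hmac, hashlib, secrets

class PseudonymAuthority:
    """Toy identity-escrow sketch; all names and logic are illustrative."""

    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)   # never leaves the verifier
        self._escrow: dict[str, str] = {}     # pseudonym -> real identity

    def register(self, verified_identity: str) -> str:
        # Deterministic derivation: one verified identity yields exactly
        # one pseudonym, which is what blocks mass fake-account creation.
        tag = hmac.new(self._key, verified_identity.encode(),
                       hashlib.sha256).hexdigest()[:12]
        pseudonym = f"user-{tag}"
        self._escrow[pseudonym] = verified_identity
        return pseudonym

    def resolve(self, pseudonym: str) -> str | None:
        # Only the authority, e.g. under a court order, can do this lookup.
        return self._escrow.get(pseudonym)
```

The public sees only the pseudonym; the platform never learns the identity; and registering twice yields the same handle, so the bot farmer's economics collapse.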
* Robin Rivaton is Managing Director of Stonal and a member of the Scientific Council of the Foundation for Political Innovation (Fondapol).