The Top 50 has fizzled out; the music rankings that matter now are those of Spotify and its rivals. Some artists rack up millions of plays. Others, stuck on a plateau, tirelessly rework their melodies. Some rogues, however, take the easy way out and fake their play counts. It is neither complicated nor particularly expensive, because these platforms often run a free tier alongside the paid subscription. Malicious individuals can therefore create a host of fake accounts and develop programs that automatically stream the tracks of their choice. An effective way to artificially inflate an artist’s ranking. In January, a study by the National Music Center raised the alarm about this scourge: “In France, in 2021, at least between 1 and 3 billion streams were fake, or between 1 and 3% of total listening.”
And the music world is far from the only target of programs posing as humans: these malicious “bots” are everywhere on the Internet. They have made black-market ticket scalping easier than ever. “We sometimes see them booking plane seats or hotel rooms en masse when they anticipate high demand, for example before a holiday period,” explains Tamer Hassan, co-founder and CEO of Human, a company specializing in fake-account detection. These swarms of bots (controlled by humans) are, it is true, much faster than we are at grabbing travel or show tickets: they refresh the search pages continuously, instantly spot new offers, and automatically fill in the forms. Once all the tickets have been snapped up, all that remains is to resell them to genuine customers… at a higher price, it goes without saying.
Robots that flirt on Tinder
In video games too, “bots” help cheaters by repeatedly performing simple actions that earn rewards (e.g. exploring bushes to collect gold coins). On Tinder, they flirt with humans before steering them toward a scam. They sometimes connect en masse to a target’s site to overload its server and knock it offline (a “DDoS attack”), or simulate clicks on ads to earn money. “Internet traffic from malicious robots continues to increase. In 2021, it represented 27.7% of total traffic,” warns Gerald Delplace, regional director for Europe, the Middle East and Africa at the cybersecurity group Imperva.
It is on social networks that these programs pose the most serious problems. Twitter alone suspends an average of one million fake accounts… every day. Elon Musk, the network’s new owner, has even promised (only half joking) to “defeat the spam bots or die trying”. Walking through the doors of social networks under a false identity is, it must be said, very simple. And it confers immense power to manipulate. “Imagine being able to pass yourself off as a million people on the Internet,” illustrates Tamer Hassan of Human. Fond of a public figure? Make it look as though a multitude of people support them. Annoyed by a business? Flood it with negative comments from an imaginary “crowd”. These virtual armies also serve to push narratives (fake news, political messaging, etc.), amplified with fake likes and fake shares. “The bots spread specific content at such a high frequency and volume that their message quickly lands in the trending topics,” confirm the teams of the cybersecurity group Mandiant.
Bots are also the armed wing of some stalkers. “Groups of fake accounts are programmed to automatically send insults to people who talk about certain themes,” observes Emmanuelle Patry, founder of the Social Media Lab. One tweet about Putin or one post about Covid-19 vaccines, and these programs unleash a shower of insults or even threats. “The objective is to intimidate Internet users so that, in the future, they hesitate to venture onto these subjects,” the expert analyzes. Even when idle, these little agents can do damage. Fake followers used to be bought by companies or influencers wanting to inflate their popularity. But social networks now detect them better and penalize those caught red-handed. Some have therefore devised a new, more devious tactic. “They order fake subscribers for their adversaries so that the platforms reduce the visibility of their target,” explains Emmanuelle Patry. A real digital ball and chain that victims are not even aware of dragging unless they know about these subtleties.
Fighting bots, especially those that abound on social networks, is unfortunately complex. “You kill one, ten reappear,” explains Loïc Guézo, Cybersecurity Strategy Director for Southern Europe, the Middle East and Africa at Proofpoint. And they camouflage themselves better and better. “Previously, these accounts often behaved abnormally. Some, for example, posted messages 24 hours a day, which is suspicious because a human needs to sleep. Today, however, they mimic human behavior much better and are tougher to detect,” says Tamer Hassan.
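The round-the-clock signal Hassan mentions can be sketched in a few lines of Python. This is a toy heuristic, not any platform’s real detection rule; the function name and thresholds are illustrative assumptions.

```python
from collections import Counter
from datetime import datetime

def looks_always_on(timestamps, min_posts=48, max_quiet_hours=3):
    """Flag accounts that post around the clock, with no 'sleep' gap.

    timestamps: ISO-8601 strings of one account's posts.
    Heuristic only: the thresholds are illustrative assumptions.
    """
    if len(timestamps) < min_posts:
        return False  # too little data to judge
    # Count which hours of the day the account is active in
    hours = Counter(datetime.fromisoformat(t).hour for t in timestamps)
    quiet_hours = 24 - len(hours)  # hours with zero activity
    # Humans usually leave a longer nightly gap than a bot does
    return quiet_hours <= max_quiet_hours

# A bot posting every 30 minutes covers all 24 hours of the day:
bot = [f"2023-05-01T{h:02d}:{m:02d}:00" for h in range(24) for m in (0, 30)]
print(looks_always_on(bot))  # True
```

As the article notes just below, real accounts vary enormously, so a rule this crude would misfire on night owls and shift workers; it only illustrates why “no sleep gap” was once an easy tell.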
Hard-to-detect sleeper agents
The task is all the more delicate because there is no typical “human” behavior: some very real people write a lot, others little; some respect the platforms’ rules, others blithely ignore them. Like spies, some bots even play sleeper agents. “They gain the trust of Internet users by posting ordinary messages for a while, then suddenly change their behavior,” says Jason Soroko, cybersecurity expert at Sectigo.
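The sudden switch Soroko describes is, in effect, a change-point problem: compare an account’s recent activity to its own baseline. The sketch below is a minimal, assumed illustration (the metric, window and ratio are invented for the example), not a vendor’s actual detector.

```python
def behavior_shift(daily_link_counts, window=7, ratio=5.0):
    """Flag a sudden burst of link-posting, like a sleeper bot 'activating'.

    daily_link_counts: links posted per day, oldest first.
    Compares the last `window` days to the account's earlier baseline.
    Thresholds are illustrative assumptions, not a production rule.
    """
    if len(daily_link_counts) < 2 * window:
        return False  # not enough history for a baseline
    baseline = daily_link_counts[:-window]
    recent = daily_link_counts[-window:]
    base_mean = sum(baseline) / len(baseline)
    recent_mean = sum(recent) / len(recent)
    # The 0.5 floor avoids dividing a burst by a near-zero baseline
    return recent_mean > ratio * max(base_mean, 0.5)

# Months of quiet, ordinary activity, then a propaganda burst:
quiet_then_burst = [0, 1, 0, 0, 1, 0, 0, 1] + [12, 15, 11, 14, 13, 12, 16]
print(behavior_shift(quiet_then_burst))  # True
```

The catch, as the article notes, is that real people also change habits, so any such rule trades false alarms against missed sleepers.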
The new so-called “generative” artificial intelligences are a godsend for the designers of these malicious programs. What still often betrays fake profiles is that they are not very personalized: they like and share en masse but publish few “personal” photos and texts. AIs such as Midjourney, Stable Diffusion or DALL-E 2 are, however, capable of creating artificial photos from scratch that one would swear are authentic. Memories of evenings in a trendy bar or family hikes, a shocking photo of a burnt-down factory or of humans erecting barricades… Enough to invent a larger-than-life existence for a robot, or give fake news a veneer of realism. Tools like the famous ChatGPT can, for their part, write very realistic texts in the style and on the subject of your choice (an op-ed for or against pension reform, a company pitch, etc.). Convenient for making a fake profile even more believable by giving it a start-up website or a high-activity blog.
To prevent these fake Internet users from proliferating, the fight must start “as early as possible in the account-creation stages,” argues Loïc Guézo. The more complex these steps are (information to provide, checks to pass), the harder it is to write programs that mass-produce fake profiles. In the wide range of anti-bot measures, another, more playful option is to fool them. Social networks can limit the visibility of certain accounts if they wish. This is the famous “shadow ban” or “ghost ban”: the person’s posts no longer appear in other users’ feeds, but they are not told. Applied to humans, the measure is controversial, because they cannot challenge a sanction they are not even informed of (and platforms often make mistakes). Applied to bots, however, the method is attractive. Since their accounts are not closed, fake-profile creators are not tempted to create new ones. But these fake Internet users are placed in a virtual bubble where real humans can no longer hear them scream.
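The shadow-ban mechanism described above amounts to one filter applied when building everyone else’s feed. A minimal sketch, with invented field names and data, of how the banned account keeps posting into the void:

```python
def visible_feed(posts, shadow_banned):
    """Build a viewer's feed, silently dropping shadow-banned authors.

    The banned account still sees its own posts when it logs in,
    so it receives no signal that anything has changed.
    Field names ('author', 'text') are illustrative assumptions.
    """
    return [p for p in posts if p["author"] not in shadow_banned]

posts = [
    {"author": "alice", "text": "concert tonight!"},
    {"author": "bot_4217", "text": "HOT DEALS click here"},
    {"author": "bob", "text": "new single out"},
]

# Everyone else's feed: the bot's post simply never appears
feed = visible_feed(posts, shadow_banned={"bot_4217"})
print([p["author"] for p in feed])  # ['alice', 'bob']
```

This is why the tactic is attractive against bots: from the operator’s side, nothing looks broken, so there is no prompt to register a fresh account.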