Just two days before the European elections, an ad depicting French President Emmanuel Macron hanging from a gallows was published on Facebook, which is owned by the social media giant Meta.
The ad's text claimed that, because of France's support for Ukraine, the French are not receiving the cancer treatments they need.
According to a fact-checking organization, the ad is an extreme example of how poorly Meta moderates its paid ads ahead of elections.
– Such an ad should never have gotten through Meta’s system. It is incomprehensible that an ad with such graphic content was published on Meta’s platform on the very weekend of the elections, says algorithm researcher Paul Bouchaud in an interview.
Bouchaud tracked the spread of disinformation during the EU elections at the fact-checking organization AI Forensics.
He says that during the election weekend there were also xenophobic ads on Facebook and Instagram in France urging people to vote for the far right.
As a rule, advertisements intended for Meta’s platforms go through a preliminary review before Meta approves them for publication.
Political ads are subject to additional requirements, for example on how the payer’s name must be displayed alongside the ad. Under Meta’s rules, political ads may only be published in the advertiser’s home country, and the advertiser’s identity must be verified.
According to a report by the fact-checking organizations AI Forensics and Check First, Russian propaganda opposing support for Ukraine and backing the far right was allowed to spread on Meta’s platforms in the weeks before the elections.
– You could see from many of the advertisements that their authors know the political situation of the country in question very well, says Paul Bouchaud.
Russian propaganda multiplied during the EU elections
According to a report by fact-checking organizations, the spread of Russian propaganda began to accelerate three weeks before the elections in large Central European countries.
According to the fact-checkers’ estimate, Russian propaganda reached more than three million people in Germany, Poland, Italy and France in May through ads on Meta’s platforms.
In all of these countries, the far right did very well.
According to Meta’s ad library, the advertisement with the image of Macron and the gallows did not have time to spread widely after it was published. Many other advertisements containing Russian propaganda, however, did.
– For example, we saw an ad supporting the far right that gathered more than 200,000 views, says Bouchaud.
In France alone, fact-checking organizations found 100 paid ads containing Russian propaganda that were published in May.
The Commission is investigating whether Meta violated EU rules
According to the report, the results point to a serious failure by Meta in moderating advertising.
Because the ads were published just weeks before the EU elections, they may have influenced public opinion and voting behavior, Check First’s report states.
Bouchaud emphasizes that the goal of disinformation campaigns is above all to reduce people’s trust in the media and society.
Therefore, the impact can extend beyond one election.
– The responsibility [for preventing such campaigns] lies above all with Meta and the other social media giants. They should be able to do better, says Bouchaud.
The European Commission is currently investigating whether Meta violated its obligations under the Digital Services Act (DSA) during the EU elections.
There are no signs of large disinformation operations in the Nordic countries
It seems that extensive disinformation campaigns focused specifically on large Central European countries during the EU elections.
For example, the Finnish fact-checking service Faktabaari did not detect significant new disinformation campaigns aimed at the Nordic countries during the elections.
In Sweden, however, a troll factory run by the Sweden Democrats was exposed in May; among other things, it maintained various troll accounts that insulted other parties.
No platform gets a clean bill of health
During the elections, the Spanish fact-checking organization Maldita monitored how the various social media platforms responded to the spread of election-related posts that had been proven false.
According to the results, the video service YouTube and the social media service X performed the worst.
YouTube failed to flag or remove as many as 75 percent of the videos with false content, and X left 70 percent of similar posts without action.
In total, about 1,400 social media posts and videos that various fact-checkers had found to be false were reviewed for the study.
According to the report, the most disinformation was spread about immigration and the integrity of elections.
Neither YouTube nor TikTok removed or flagged a single video presenting lies about immigration.