“The problem is not fake news, it’s their virality” – L’Express

Invectives, punchlines, doctored video clips, fake news… During the recent European and then legislative elections, social networks looked more like a boxing ring than a hall for civilized debate. A sad habit, despite increasingly significant attempts at control and clean-up by legislators. Europe, in particular, through the DSA (Digital Services Act), its major piece of digital legislation, has forced the largest platforms to moderate their content better and to show more transparency about their algorithmic choices. Still insufficient, deplores Dominique Boullier, a sociologist specializing in digital uses and cognitive technologies. All the more so in the era of generative artificial intelligence, when the circulation of messages, images and videos is expected to intensify.

Radically, the author of Propagations. A new paradigm for the social sciences (Armand Colin, 2023), also a professor at Sciences Po, calls in a recent policy note for a complete overhaul of the architecture of social networks, attacking virality directly in order to break the current pace of content propagation, which is at the heart of the degradation of our public conversation.

L’Express: During the European and legislative election campaigns, the National Rally candidate Jordan Bardella boosted his popularity on social networks with the help of videos in which he put himself in banal poses, such as simply eating an apple. What does this say about the debate in our democracy?

Dominique Boullier: Obviously, this type of content does not encourage the exchange of ideas, argumentation, or the presentation of a program. What is there to say about eating an apple? TikTok, which has since been copied by Instagram (Meta) and YouTube Shorts (Google), has transformed the way content is distributed, and its virality. It constantly offers new videos based on those you have already watched and the time you spent on them, not on what your friends and subscriptions have viewed. Inevitably, what we see there quickly becomes very repetitive. This system pushes everyone present to adopt the same codes, the same postures. This is why Jordan Bardella is not trying to convey any message, but just to present himself in a friendly, eye-catching way, by establishing a form of connivance. He is not the only one. Since this seems to please TikTok users, all creators are conforming to this behavior in order to have a chance of being seen.


Why has online exchange deteriorated to this extent?

The monetization of social networks, around 2008-2009, on YouTube, Facebook and then Twitter, was decisive. It grew gradually, to the point of generating staggering revenues for some players. At that time, the platforms adopted habits of opacity around their algorithms and audience measurement. What mattered now was capturing users' attention by making them react and keeping them on their news feeds. This caused problems: the Cambridge Analytica scandal proved that a number of accounts had been deliberately manipulated to influence behavior.

But the virality system also continued to grow because traditional media latched onto it, for example by picking up on the trending topics highlighted by certain networks like Twitter, now X. In fact, almost everyone – stars, influential people, politicians – has fallen into line with the rhythm imposed by the networks and their quest for virality. We already have enough evidence to say that there is something strange today in the way we debate online. There are fewer and fewer appeals to scientific reasoning and, conversely, more and more proven facts are reduced to the rank of opinion, as with global warming.

The warning is not new. Why is it more urgent today?

The race for virality on social networks is now well established in people's habits. Enter generative AI, with which the number of synthetic contents, or fakes, should explode. We don't know how they will be pushed, to whom, or when. Current moderation is absolutely not capable of handling this new wave, because there are fewer and fewer humans dedicated to the task. We risk witnessing a collapse in the ability to distinguish quality content, facts and reliable data grounded in centuries of stabilized knowledge. At that point, it is no longer just a problem of public debate: we are attacking the foundations of general knowledge. We are drifting towards "everything is equal", towards generalized distrust. It is therefore time to react.


Do you not consider the latest social media regulations to be effective solutions for regaining a form of serenity?

There has been notable progress on obligations of means, design and moderation – states are also tracking illegal content – and even on abuses of dominant position thanks to the DMA (Digital Markets Act). The notion of virality, however, remains absent. Everyone forgets that it is unhealthy for a collective, and not just for public debate, to be constantly caught up in a logic of "bludgeoning", as they say in advertising: a constant flow of content that you did not ask for and that you can filter only with great difficulty. We thus get used to reacting, to being bombarded in this way, including with false information. Recently, a rumor circulated about students from a Jewish school who had supposedly been deliberately marked down in a baccalaureate exam. It was quickly debunked. Yet it continues to be commented on by a whole crowd of suspicious people who were not satisfied with the official denial. The gears had been set in motion.

Is fighting fake news and disinformation not enough?

No, the problem is not so much that there is false information on social networks. There always has been and there always will be, and we must of course continue to track it. The problem with this false information is its "acceleration" by virality mechanisms: the click, the re-share – the majority happen within an hour of a post's publication – and the comment. The technical format choices that are adopted influence the behavior of billions of people on a global scale. It is this defect of form that is at the heart of my recommendations, which I summarize with this simple formula: yes to free speech, to freedom of expression; no to free reach, to the flooding of minds.

Digital sociologist Dominique Boullier.


To achieve this, you recommend forcing platforms to make audience measurement independent, to fully open data access to researchers, to limit connection time, to ban infinite scrolling… In short, to completely overhaul their architecture. It is hard to imagine them complying with such radical measures, especially since these directly affect their economic model.

But this economic model is built on sand. The brands that finance the platforms must understand this: they are being had. Programmatic advertising places them on news feeds in a very opaque way. Some have at times found themselves next to far-right, racist content, which runs completely against their values. Then, the audience figures are provided by the platform itself, without oversight or transparency, unlike what Médiamétrie does with the media in France, for example. We can assume they are manipulated or inflated. An independent third party carrying out the measurements, with a committee of stakeholders deciding and validating prices and strategic choices, seems essential. Brands are therefore a very important lever for pushing back against virality. But of course, significant political will from the authorities is also required.

Do legislators lack evidence to characterize the harmfulness of virality and limit it?

Researchers are in fact seriously lacking access to data from the major platforms in order to conduct longitudinal studies and provide solid evidence. But in reality, the networks already know that extreme virality has catastrophic effects on cognitive and mental health. The Facebook Files delivered by whistleblower Frances Haugen in 2021 illustrate this. Just as the tobacco industry knew how harmful its products were, social networks are not unaware of the impact their algorithms have on their users. Nor is the public, for that matter. A handful of political figures [editor's note: such as Anne Hidalgo, the mayor of Paris], institutions and media outlets have withdrawn from X. The platforms themselves have timidly begun to address the issue.

X, again, now suggests, when you re-share (retweet) a press article, that you be redirected to the article in question. This type of friction in the user experience is useful for slowing down virality. Meta, for a time, removed the "vanity metrics", those statistics visible to everyone under a publication showing the number of shares, likes and comments it has generated. The large platforms – this is what I recommend – should stop highlighting them for good, and even go further by providing a permanently installed personal dashboard counting our own shares and likes. This information would act as an effective cognitive alert, in the same way that a car has a dashboard displaying its speed. If a certain frequency of reactions were exceeded, we could imagine blocking activity on the platform for twenty-four hours, for example.
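The mechanism Boullier sketches – a personal counter of one's own reactions, with a pause once a frequency threshold is crossed – can be illustrated with a few lines of code. This is only a sketch of the idea: the class name, the sliding window, the threshold and the cooldown values are all illustrative assumptions, not a platform specification.

```python
from collections import deque
import time

class ReactionDashboard:
    """Illustrative sketch: count a user's own reactions (shares, likes,
    comments) over a sliding window and trigger a cooldown when the
    frequency is exceeded. All parameter values are hypothetical."""

    def __init__(self, max_reactions=30, window_s=3600, cooldown_s=24 * 3600):
        self.max_reactions = max_reactions   # allowed reactions per window
        self.window_s = window_s             # sliding-window length (seconds)
        self.cooldown_s = cooldown_s         # e.g. the 24h block from the interview
        self.events = deque()                # timestamps of recent reactions
        self.blocked_until = 0.0

    def record(self, now=None):
        """Register one reaction; return False while the account is cooling down."""
        now = time.time() if now is None else now
        if now < self.blocked_until:
            return False
        self.events.append(now)
        # Drop reactions that have slid out of the window.
        while self.events and self.events[0] <= now - self.window_s:
            self.events.popleft()
        if len(self.events) > self.max_reactions:
            self.blocked_until = now + self.cooldown_s
            return False
        return True

# Toy run: 3 reactions per minute allowed; the 4th trips the cooldown.
dash = ReactionDashboard(max_reactions=3, window_s=60, cooldown_s=24 * 3600)
print([dash.record(now=t) for t in (0, 1, 2, 3)])  # [True, True, True, False]
```

The point of the sketch is the "speedometer" framing: the counter is visible to the user as feedback long before the blocking threshold is reached.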

Aware of the difficulty of imposing this type of reform, you call for self-regulation, believing that the spread of online content should be viewed like that of viruses…

Yes, as with a virus, let's relearn a certain number of barrier gestures to break the chains of contagion of content. That presupposes understanding the mechanisms of virality behind their propagation. There are of course algorithms and bots that push some content rather than other, but it is not only that. It is also very individual. Adam Kucharski's theory of propagation, the DOTS framework – duration, opportunities, transmissibility, susceptibility – can partly apply. We can also analyze the process with the "novelty score" introduced by the researcher Soroush Vosoughi: as soon as something departs from your cognitive habits, it attracts your attention. So it is not necessarily fake news or negative content, but something funny – the Pope in a white down jacket – unusual, strange, shocking… These criteria make one piece of content circulate faster than another.
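The DOTS framework mentioned above comes from epidemiology, where the reproduction number of an outbreak can be decomposed as the product of its four factors. Transposed to content, it estimates how many further shares one post triggers on average. The sketch below illustrates only the arithmetic of that decomposition; the parameter values are invented for illustration, not measured platform data.

```python
# Kucharski's DOTS decomposition of the reproduction number R:
# R = duration x opportunities x transmissibility x susceptibility.
# If R > 1, each share triggers more than one further share on average,
# and the content keeps spreading.

def reproduction_number(duration, opportunities, transmissibility, susceptibility):
    """duration: how long a post stays visible in feeds (days)
    opportunities: exposures per day while it is visible
    transmissibility: probability that an exposure becomes a re-share
    susceptibility: fraction of the exposed audience receptive to it"""
    return duration * opportunities * transmissibility * susceptibility

# Hypothetical post: visible 2 days, 50 exposures/day, 2% re-share rate,
# 80% receptive audience.
r = reproduction_number(duration=2, opportunities=50,
                        transmissibility=0.02, susceptibility=0.8)
print(r)  # 1.6 -> above 1, so propagation continues
```

Seen this way, "barrier gestures" are simply interventions that push any one factor down until the product falls below 1: shorter feed lifetimes, fewer forced exposures, or more friction before re-sharing.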

What would clean social media look like? You mentioned a limit of 150 contacts per person.

Bad information, like good information, is not meant to multiply at the current speed. We need to rediscover set times for publication and meetings, to give ourselves time for research, debate and contradiction. The figure of 150 contacts comes from anthropologist Robin Dunbar, for whom direct mutual acquaintance is no longer possible beyond that. I think this limit would allow us to reconstitute a form of affiliation between close contacts, and therefore, naturally, self-regulation. After a while, people know each other, can exchange, and can identify those who go too far. We would then come closer to what is commonly called a "social network".
