WhatsApp: an experiment in a hotbed of disinformation

Misinformation doesn’t just circulate on social media. Its playing field is much wider: it sometimes creeps into television and radio broadcasts, into the written press and into our private communications. Because these environments differ, they deserve to be studied separately. We probably do not perceive information the same way depending on whether it comes from a Facebook news feed, a 24-hour news channel or a WhatsApp group. Brazilian researchers have investigated whether parameters such as political orientation or open-mindedness influence our judgment when we are exposed to information on WhatsApp.

Brazil has experienced a very high death rate during this pandemic, totaling 289 deaths per 100,000 inhabitants to date. For most health authorities, there is little doubt that some of these deaths were due in part to what the World Health Organization has called an infodemic, that is to say “information overload and the rapid spread of misleading or fabricated information, images and videos, [which], like the virus, are highly contagious, grow exponentially (…) and complicate response efforts to the Covid-19 pandemic”. However, this supposed infodemic is very heterogeneous. If we want to better understand the ecology of information and the way people react to it, it must be studied wherever it spreads. This is what Brazilian researchers undertook by exposing participants to WhatsApp messages created for the experiment. They published their results in the journal Judgment and Decision Making.

The emergence of the Internet in Brazil

In 2009, only 39% of the Brazilian population had access to the Internet. This figure now stands at 74%, and for the first time more than half of the rural population (53%) has Internet access. This has considerably changed the ecology of communication and information within the country. Private messaging services such as Skype, WhatsApp and Facebook Messenger are now the leading reason for going online, well ahead of social media (92% vs. 76%). However, we know that these applications, and WhatsApp in particular, have been preferred channels for spreading misinformation and disinformation around the world. According to some authors, the application alone is said to have concentrated 70% of the fake news about the pandemic in Brazil.

What variables influence our discernment?

The first finding of this study is that the majority of participants did not identify with explicitly politicized WhatsApp groups (“Bolsonaro’s family” / “Free Lula”) and did not want to be part of them. On the other hand, they were inclined to identify with, and to want to take part in, the conversations of more neutral groups, especially those labeled “friends”. The experiment also suggests a small negative correlation between trusting social networks and private messengers as sources of information and the ability to discern true from false, and that the time spent on the application does not influence this capacity for discernment.

In addition, even if the correlations remain very modest, being open-minded (measured by the Actively Open-minded Thinking about Evidence scale, which captures our propensity to revise our beliefs in light of new evidence), trusting mainstream media and reading newspapers are all three correlated (with coefficients of 0.29, 0.29 and 0.26 respectively) with good discernment between “true” and “false” information.

Infodemic = misinformation = bad decision-making?

While it is necessary to better understand the ecology of misinformation, how it spreads and how it influences the population, several questions remain unanswered: is the infodemic a major cause of misinformation, and is this misinformation itself a preponderant cause of the population’s poor decision-making? According to several science communication specialists who published a commentary in the Journal of Applied Research in Memory and Cognition, it is far from that simple.

In their short article, they remind us first of all that we are once again fascinated by a very old problem. Misinformation is a historical by-product of liberal democracies; it is neither new nor did it emerge with social networks. Propaganda is a leading example of disinformation, and other forms of persuasive political messaging intended to deceive the population have existed for several centuries. The authors note that, according to the published work on the subject, misinformation is particularly heightened during election periods, even though these are times when individuals are generally more attentive to the information around them. For example, during the contest between Al Gore and George W. Bush (long before the arrival of social media), very few Americans were able to recall the candidates’ respective positions on crucial points such as the compulsory registration of firearms.

As with politics, we know that people have trouble understanding scientific facts, and especially how they are produced. Most scientists unfamiliar with the social and decision sciences then assume that setting the record straight is enough to solve the problem. Without realizing it, they place their trust in knowledge-deficit models that are not consistent with the best available science on the subject. These individuals expend colossal effort for often minimal results, because they assume that if people were simply exposed to correct information, they would automatically make better decisions. Again, this is not what empirical research suggests. As the experts of the Societal Experts Action Network, formed by the National Academies of Sciences, Engineering, and Medicine in the United States, point out, “the main reasons people don’t do the things they know they should be doing are cognitive preferences for old habits, forgetting, small inconveniences in the present moment, and preferences for actions that require the least effort, confrontation or reasoning”.

The fuzzy trace theory

One theory accounts well for these issues: fuzzy-trace theory. It postulates that we store information in two kinds of memory traces: the gist and the verbatim. The gist corresponds to the overall idea we retain about a subject we have heard about, while the verbatim trace corresponds to exact memories of the information and its context. According to this theory, we prefer to reason with the gist rather than the verbatim. For example, a person may have heard the figures for the benefits and risks of vaccination and accepted them, yet when reasoning toward a decision remember only a confused gist, namely that both options involve risk, without mobilizing the exact figures or the nature of the risks again. Faced with this conclusion, the cognitive mechanisms mentioned above would then push them not to get vaccinated.

Why are popularization efforts ineffective against disinformation?

By focusing on the disinformation present in individuals’ minds and often omitting social and contextual explanations, we are undoubtedly committing a fundamental attribution error (a bias that consists in giving more weight to individuals’ internal characteristics than to external events when explaining behavior). As we have just seen, the link between misinformation and behavior is not as linear as one might think. This is a first avenue for explaining the modest effectiveness of fact-checking, and even of popularization efforts, that strive to re-establish the “truth”.

Second, these interventions, based on models that do not have the best level of evidence (knowledge-deficit models), run up against a completely new information ecology. Even if the problem of disinformation is old, social media have helped to accentuate it because of an economic model built on promoting user engagement. Information itself is put to the service of this objective: accuracy is set aside in favor of engagement (some efforts were nevertheless made during the Covid-19 pandemic, unevenly depending on the platform).

Also, in the context we have been living in for more than two years, the category of scientific disinformation is far from set in stone. For example, a piece of information can be categorized as misinformation and then ultimately be considered a plausible hypothesis (the hypothesis of a laboratory accident in Wuhan is a good example). Likewise, science is a self-correcting enterprise, so the resulting recommendations evolve over time based on the best available evidence. It would be interesting to investigate in more detail how the general public perceives this and how it affects their decision-making. It would also be desirable for the public authorities in charge of communication to take stock of the work carried out to date and draw inspiration from it. Finally, as a last plausible explanation, actions against disinformation sometimes focus on very specific details of the topic in question; yet, according to fuzzy-trace theory, these details have little impact on decision-making.
