Alert on porn deepfakes: “Pedocriminal networks are taking over this practice”


“I just came across something horrible.” In the passenger seat of her friend Marcus’s car, content creator Léna Mahfouf stares, dismayed, at her phone screen. The YouTuber, who has 2.68 million subscribers, has just discovered a nude photo of herself published on the Internet without her consent. Or rather, a photo montage pasting her face onto a naked female body that is not hers. The fake is so well executed that the image appears authentic. The young woman chose not to cut this sequence when editing her famous “August vlog” – the short videos of her daily life that she publishes on YouTube every summer, which break audience records each year. A month after its publication, the video had already been seen by 1.6 million people.

While advances in artificial intelligence now make it possible to insert the face or voice of any Internet user – famous or not – into any video, this sequence is not anecdotal. The phenomenon of “deepfake porn”, which uses such audiovisual manipulation in pornographic or child sexual abuse montages, is exploding. “We are worried today about the rapid growth of deepfake porn. Entire sites are dedicated to it in order to harm and humiliate recognizable people,” notes the High Council for Equality between Women and Men in a damning report on porn criminality published on September 27. “No one is safe from blackmail, humiliation, or violence linked to this phenomenon. The practice has never been so accessible and trivialized, which is extremely worrying,” laments Johanna-Soraya Benamrouche, co-founder of the association Féministes contre le cyberharcèlement (Feminists Against Cyber-Harassment). “It doesn’t just affect famous people, and the law must adapt quickly to this growing phenomenon,” adds Rachel Flore-Pardo, a lawyer specializing in cyber-harassment cases and co-founder of the victim-support association Stop Fisha. A joint interview with the two specialists.

L’Express: How would you concretely define “deepfake porn”, and how can the exponential growth of this practice be explained?

Johanna-Soraya Benamrouche: Deepfake porn is a photo, video or audio montage of a sexual nature that uses a person’s physical or vocal characteristics, looks real, and is distributed without their consent and without any mention that it is fake. Montages have obviously existed for as long as images themselves – for humor, humiliation or propaganda, to deceive or manipulate public opinion. What particularly worries us about AI-generated deepfakes is that the technologies used are becoming ever more effective and the algorithms keep improving. Their dissemination has become a mass phenomenon: extremely viral, popular and accessible. It is now easy to generate a deepfake with artificial intelligence, and publishing them has become commonplace.

According to a 2019 Deeptrace investigation, 96% of deepfake content is pornographic, and 99% of it targets women. The four biggest porn sites dedicated to deepfakes had 134 million views at the time… And this has only increased as the videos have become more professional. Using certain software or applications, you can create 25 images for 10 euros in a few minutes. There are also groups on Discord, WhatsApp or Telegram, and private forums, where attackers exchange “best practices”: the best methods for generating these images, for remaining as anonymous as possible when publishing them, for disseminating them on a large scale… Pedocriminal networks are taking over this practice, and a large amount of child sexual abuse content has been found hosted on French sites.

Personalities such as actresses Emma Watson and Scarlett Johansson have notably been victims of these practices. In France, Léna Mahfouf and Enora Malagré have spoken publicly on the subject… Does the phenomenon more readily affect people with media exposure?

Rachel Flore-Pardo: As with all other types of cyber-violence, if the victim is famous, the views, virality and reach of these images may increase, and their production is made easier by the number of videos and photos that already exist. But deepfake porn obviously does not stop there, and can affect absolutely everyone: more and more victims who have nothing to do with the media world contact us about this kind of case, and their number grows as the tools become more widely available.

For what purpose are these victims exposed, and what are the consequences on their personal lives?

Johanna-Soraya Benamrouche: Each time, there is a desire on the part of the aggressor to subjugate, to intimidate, to threaten, to gratuitously seize the bodies of others and to capitalize on them. These images are sensationalist; they generate traffic and cement masculinist spaces. They affect all kinds of women, whether they are famous, recognized in the media, carry a political voice, or simply have the audacity to exist on the Internet. Deepfakes harm not only the integrity of their victims but also their reputation, their work or school life, and their family and romantic relationships. They distract and destabilize, and can push people to withdraw from the digital space, or even disappear from it completely.

The impact of this cyber-violence on victims’ health is just as real: anxiety disorders, depression, insomnia, suicidal risk, and so on. As with other forms of sexual cyber-violence, the goal is to create an extremely anxiety-inducing climate of fear and distrust. In exchange for a fabricated but extremely realistic video, attackers can, for example, engage in sexual blackmail, demand money, or simply demand silence. The targets are exactly the same as for other types of cyber-violence: mainly women under 30.

How does the law currently protect victims?

Rachel Flore-Pardo: Today, certain types of montage may fall under article 226-8 of the Penal Code. As it stands, this article punishes with one year’s imprisonment and a 15,000-euro fine the act of publishing, by any means whatsoever, a montage made with a person’s words or image without their consent. But the facts are only punishable if it is not obvious that the content is a montage, or if this is not expressly stated – which leaves a legal vacuum. It is precisely this legal void that is addressed by the bill put forward by Digital Minister Jean-Noël Barrot, which has already been passed by the Senate and will be discussed in a public session at the National Assembly on October 4. The goal is to create an article that explicitly covers deepfake porn and takes into account the rise of the phenomenon.

Article 226-8-1 of this bill thus provides that publishing, without consent, by any means whatsoever, a montage of a sexual nature made with a person’s words or image will be punishable by two years’ imprisonment and a 60,000-euro fine. Publishing this type of content via an online public communication service will be punishable by three years’ imprisonment and a 75,000-euro fine.

These provisions punish the authors of such content. But what about all the other harassers, aggressors or consumers, who sometimes distribute it on a large scale?

Rachel Flore-Pardo: With the Stop Fisha association, we put forward an amendment, adopted in special committee, which ensures that this new article can also sanction all other transmissions of deepfake porn: whether published to the general public or sent to third parties in private conversations. We are asking that the article truly cover the ways in which these montages are actually shared – that is, on social networks and in private conversation channels on WhatsApp, Snapchat or Telegram.

What about content hosts?

Johanna-Soraya Benamrouche: The Digital Services Act, which aims to combat online hatred and disinformation at the European level, just came into force on August 25. It is intended in particular to hold platforms much more accountable, and we expect a lot from it. The spread of deepfakes must not be treated as inevitable and uncontrollable. We obviously need to sanction the applications, hold all content hosts accountable, and ensure that existing law is applied and that proceedings succeed. According to our cyber-violence survey conducted with Ipsos in 2022, fewer than half of complaints for cyber-violence (47%) led to legal proceedings.

Are these laws sufficient to protect victims?

Rachel Flore-Pardo: Having these acts punished by our criminal law remains an absolute priority, but it is indeed not enough. Prevention is essential, and there must be sufficient resources dedicated to enforcing these laws: currently, a large share of the provisions aimed at combating online hatred go unapplied for lack of resources. For example, we need to increase the effective resources of Pharos, give the courts and police services the means to handle these cases, and ensure the training of the judicial police officers responsible for taking these complaints. Too often, victims are discouraged and told that their complaint will be of no use… And this is absolutely unacceptable.

Johanna-Soraya Benamrouche: There is also the question of how we will distinguish real images from fake ones in the future. Education clearly has a role to play, through information campaigns, by allocating significant resources to raising awareness of and preventing cyber-violence, and by strengthening the support and guidance offered to victims. Substantial resources must be devoted to countering the digital threat posed by deepfake porn.
