Fake profiles, scams, deepfakes… When AI sows trouble on dating apps


You don’t need a crystal ball to see the problems that generative AI can pose. Just look at the dating sector. Tinder, Bumble, Hinge… Dating apps bring together a multitude of people of varying ages, nationalities and socio-professional categories, who interact non-stop. A veritable virtual Petri dish. And what do we observe there? A proliferation of AI-powered scams.

Generative artificial intelligence takes the creation of fake profiles to a whole new level. There is no longer any need to lift photos of attractive strangers from the Internet and pass yourself off as someone else, a mediocre technique that a reverse image search on Google could expose. With image generators such as Midjourney or DALL-E 2, crooks can create fake identities by the thousands. Brown or blonde hair, dark or light eyes, thirty-something or senior… You just have to describe the desired result to obtain it.


Thanks to ChatGPT and the like, scammers can also automate coherent replies to their victims’ messages, instantly adapting to their language and vocabulary. Is the target having doubts and asking for “proof”? No problem: generative AI can produce strikingly realistic videos and audio messages.

Malicious campaigns spread over months

Romance scams have always been lucrative: 1.3 billion dollars were stolen in the United States by this means in 2022 alone. But until now, they required criminals to spend a great deal of time charming their victims and gaining their trust. That obstacle has now disappeared: AI provides swindlers with the perfect arsenal to automate long-running malicious campaigns against a greater number of targets. In the columns of The Guardian in January, Europol also warned of a rise in this type of attack.

McAfee’s Modern Love 2024 report gives a sense of how widespread AI-generated content is on dating sites. The cybersecurity specialist indicates that 42% of the people surveyed have already seen artificial-looking photos or profiles on dating platforms or social networks. These fake images are not the sole preserve of crooks; some Internet users rely on them to embellish their real profiles. Scammers remain, however, their most common users.


Dating is not the only sphere affected. In early February, scammers stole $26 million from a Hong Kong company by fooling an employee with AI. He thought he was on a video call with his superiors ordering him to transfer funds, when in reality it was a deepfake, apparently assembled from pre-recorded videos and audio generated in real time.

Deepfakes to encourage the target to pay

So-called “grandparent” scams could also become easier. In these cases, crooks call elderly people and pose as one of their grandchildren. They claim to be in a critical situation, such as an accident or an arrest, requiring urgent financial help. It is a stressful scenario that already fools many victims: under pressure, they do not always notice that the voice is not exactly that of their loved one. But with new voice-synthesis AI that can easily create voices of any age and gender, in any language, this type of scam risks becoming child’s play.


In the background, a world is emerging in which we will have to completely rethink how we verify someone’s identity. Receiving a phone call from a familiar voice will no longer be enough. Before carrying out certain actions, it will be wise to take further precautions, such as calling the person back or asking specific questions. Platforms where an account can be created in three clicks, with a pseudonym and an avatar, may also become rarer. To obtain Meta Verified certification, the users of Mark Zuckerberg’s platforms must now display their real names.

Flirting co-pilots and virtual lovers

Faced with the rise in romance scams, dating platforms are in any case strengthening their security. In the coming weeks, Tinder will roll out its verification badge in several new countries: the United States, the United Kingdom, Brazil and Mexico. To obtain this badge, which reassures potential suitors, users will have to provide the platform with valid ID and record a video selfie. The stakes are high: scams and fake profiles seriously degrade the experience that dating apps offer their customers. Users who do not get enough worthwhile interactions quickly cancel their subscriptions.

The publishers of these applications are not content to protect themselves from AI: they also seek to turn it to their advantage. If generative AI makes it possible to create fake profiles en masse, it also helps detect them. Above all, it opens the door to new features, such as conversation assistants that suggest questions and replies to would-be lovers. It loses a little in romance, but the shy and the hurried are already trying it. Some companies go much further and invest in the business of 100% virtual lovers. There, Internet users interact with artificial suitors who are always attentive and configured to match their fantasies. A new world in which Internet users no longer talk to bots without knowing it, but knowingly pay to do so. Dizzying.

