Faced with the proliferation of real-time deepfakes, a company has identified two simple ways to check whether the person you are talking to is really who they claim to be: simply ask them to wave a hand in front of their face or to turn their head to the side.
The quality of deepfakes is improving at breakneck speed, and the results are more and more convincing. In March, a fake President Zelensky announced the capitulation of Ukraine. That kind of video is prepared in advance and requires hours of processing, but there are now algorithms that work in real time and can swap faces during a video call.
At the end of June, the FBI reported that some criminals are using this technology in job interviews for remote positions, gaining access to protected company information without ever revealing their true identity. Artificial intelligence specialists at the Metaphysic company looked into the matter and found two simple ways to tell whether the person on the other end of a call is using deepfake software.
Turning your head or waving your hand in front of your face is enough to break the illusion
The first is to ask them to wave a hand in front of their face. The software then has to overlay the real hand on the face it has generated; as the algorithm tries to compensate, this creates latency and visual artifacts, such as disappearing fingers… According to Metaphysic, however, advances in AI should make it possible to correct this kind of flaw in the near future.
The second method, the most reliable according to the specialists, is to play on the angle of the face. In May, a real-time deepfake successfully fooled a liveness detection system, an AI used in particular by banks in biometric identification, whose job is to verify that the person is a real human being and not a fake. But in that case, the system had not asked the person to turn their head.
Deepfake algorithms are designed to work head-on. Simply ask the other person to turn their head to the side and errors appear, revealing the real profile of the person behind the deepfake. Be warned, though: in some cases the software can produce plausible images up to an angle of about 80 degrees, so the person needs to turn their head a full 90 degrees to the camera.
Errors difficult to correct without profile photos
The technology could evolve to try to correct this shortcoming, but it will generally lack the data needed for a perfect result. For celebrities there are usually plenty of photos and videos taken from every angle, so it is entirely conceivable that real-time deepfakes of their faces could eliminate these errors. Outside of well-known people, however, few of us have ever had profile photos taken. Even with high-definition video shared on social networks, it is unlikely that an AI can find enough material to create a perfect deepfake, especially across different facial expressions.
Perhaps the next scam will be fake job interviews in which a bogus employer asks you to turn your head to prove you are real, while recording your profile to reuse later in a deepfake…