You may know the site whichfaceisreal.com. Created in 2019, it shows you two faces and challenges you to spot the one generated by an algorithm. Which, honestly, isn't very difficult. After a few attempts, you learn to focus on areas that often lack polish, such as the mouth, ears, eyes or background. And every time, you find the intruder.
But since then, algorithms have made immense progress. According to a study carried out by researchers at the University of Texas, the synthesis of faces by artificial intelligence is on the way to becoming good enough to no longer be detectable with the naked eye. The researchers presented 128 faces to 315 participants. On average, the detection rate was only 48%, which is worse than random guessing. They then presented 128 images to 219 participants who had been trained to spot fake faces. The detection rate rose to 59%, which is still not much better than chance. "Synthesis engines have passed through the uncanny valley and are able to create images that are indistinguishable from real faces," say the researchers.
Average faces inspire more confidence
This is not very surprising, you will tell me. It was to be expected that artificial intelligence would sooner or later correct its rendering errors. What is more surprising, however, is the result of the researchers' third experiment, in which 223 participants viewed 128 faces and had to judge how trustworthy the person depicted looked, on a scale of 1 to 7. The fake faces received an average trustworthiness score of 4.82, compared with 4.48 for the real faces. This gap cannot be explained by facial expression, because the real faces were, on average, more smiling than the fake ones. So why? "This may be because synthesized faces tend to look more like average faces, which themselves are considered more trustworthy," suggest the researchers.
For Internet users, this is obviously bad news. These flawless artificial faces can be used to create fake profiles online and serve as vectors for fraud or misinformation. The researchers therefore recommend restricting access to these synthesis technologies as much as possible and embedding watermarks in the generated images. However, it is unlikely that such safeguards will hold in the long term. We will therefore have to get used to rubbing shoulders with fake humans on social networks…
Source: Scientific report