In the image, a bloodied old man cries out for help. He is surrounded by riot police, some of them with blood on their hands. The scene seems to attest to police violence in the middle of a demonstration against the pension reform. The problem: it never happened.
Macron as a garbage collector, Trump under arrest, the Pope dressed like Rihanna… For weeks, fakes have been flooding social networks. The culprit: image-generating artificial intelligences, systems capable of producing pictures that look like photographs. The best known, Midjourney, announced on March 30 that it was suspending the free trial offered to its users: following the release of the fifth version of its software, the hype had grown too great, the fakes too realistic. It is largely to this tool that we owe the wave of fakes currently washing over the web.
But Midjourney is far from the only player in this market. Faced with technologies that are ever more sophisticated, and images that are ever easier to produce, how can you prove that a picture is fake? Several studies have shown that most people are already falling for these images, even though the technology has not yet reached its full potential.
First reflex: check the source
To be certain that an image is fake, there is only one solution: go back to the source. In the case of the battered old man, the author of the image came forward himself. Régis Gonzalez explained to AFP that he had created it as satire, and that he had made clear from the start that it was fake.
The first reflex is to check whether the media or other reliable sources have already raised doubts. To find out who posted the image first, you can submit it to Google using the "reverse search" option, which also helps date its first appearance online; a rough programmatic version of this step is sketched after this paragraph. In this case, Midjourney had also left visual clues: the CRS officers' badges and the colored bands on their helmets are missing, and one man appears to have an incomplete visor. On social networks, specialists pointed out these inconsistencies.
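For those who want to automate this first check, here is a minimal sketch of one possible approach. It does not use the consumer "reverse search" page mentioned above but the Google Cloud Vision web-detection endpoint, taken here as a rough programmatic analogue rather than the tool cited in the article; it assumes a Google Cloud project with the Vision API enabled and credentials configured, and the file name "suspect.jpg" is a placeholder.

```python
# Sketch: list web pages that already host a matching image, to help trace
# where and when a suspect picture first appeared online.
# Assumes the google-cloud-vision package is installed and
# GOOGLE_APPLICATION_CREDENTIALS points to a valid service-account key.
from google.cloud import vision


def find_matching_pages(path: str) -> None:
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())

    # Web detection returns pages and images on the web that match the query image.
    response = client.web_detection(image=image)
    detection = response.web_detection

    for page in detection.pages_with_matching_images:
        print("Page with a matching image:", page.url)

    for entity in detection.web_entities:
        # Entities are Google's best guesses about what the image shows.
        print(f"Entity: {entity.description} (score {entity.score:.2f})")


if __name__ == "__main__":
    find_matching_pages("suspect.jpg")  # placeholder file name
```

The returned page URLs (and their publication dates, checked manually) can then be compared against the date of the event the image claims to show.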
Recurring visual clues
For now, AI still makes mistakes. Its most common difficulty: human anatomy. To spot the flaws, look at the hands. Subjects often have too many fingers, or fingers that are too long, as in the photo where the Pope holds a deformed can with no fingers visible… Others have impossible joints.
Another flaw: text. AI has no grasp of meaning, so it often garbles lettering when it tries to include it in an image. Finally, some objects ignore the laws of physics: they slip out of hands, merge into bodies or float in the air. These faults are more common in the background.
Counting on the sharp eyes of its users, Twitter is experimenting with a collaborative way of verifying images. But soon fewer errors will be visible, making the task even harder. "AIs are improving day by day and show fewer and fewer anomalies, so we should not rely on visual clues in the long term," Annalisa Verdoliva, a professor at the University of Naples Federico II, told AFP.
Towards international standards?
It will still be possible, for some time, to compare an image with reliable photos of the same event. The fake shot of Putin kneeling before Xi Jinping, for instance, shows a setting very different from that of the photos of the Chinese leader's visit to the Kremlin. Detection software already exists, but its results remain very mixed. Soon, perhaps, only AI will be able to spot AI…
To avoid the worst abuses, the companies selling these AIs are already putting filters in place. "We filtered violent and sexual images from all training data," says OpenAI, the company behind DALL-E 2, a competitor to Midjourney. Operators also automatically screen the images shown to customers to weed out the most shocking or sensitive ones. It is still not enough.
Faced with the sheer quantity of fakes, some are calling for global regulation. "The world needs stricter ethical rules for artificial intelligence: this is the challenge of our time," UNESCO urged in a statement issued on March 30. A few days earlier, several tech figures, including Elon Musk, had called for a pause in the development of AI, to give the technologies and regulations meant to control it time to catch up.