Identifying images generated by AI: mission impossible?


With the rise of artificial intelligence, AI-generated images are increasingly common on the internet. Widely used by propagators of fake news, they are sometimes very difficult to identify because of their disturbing realism. But there are clues…

Pope Francis in a white puffer jacket, Donald Trump arrested by the police or, more recently, an old man with a bloody face detained during a demonstration against the pension reform… The development of artificial intelligence has become a godsend for the propagators of false information – or for simple pranksters – who use it to produce ultra-realistic images invented from scratch.

Software such as Midjourney, DALL-E or Stable Diffusion can generate an endless stream of pictures from a huge database constantly fed by user requests. These images look quite realistic at a quick glance and can be confusing, especially when they relate to current events, but closer analysis can – sometimes – allow them to be identified.

Logo and reverse search

Creating these images could not be simpler. In software like Midjourney, you just type a written prompt into a search bar to generate, from millions of images, a new picture created pixel by pixel by artificial intelligence. The result can be stunningly realistic, but some imperfections can remain and give the image away.

The first element that can indicate that a photo was generated by an artificial intelligence is the signature sometimes found in the lower right corner of the image. For DALL-E, for example, it is a multicolored rectangle. This marker, however, can easily be removed by people with bad intentions simply by cropping the image. Another approach is to carry out a reverse search in a search engine by dragging the image in question into the search bar, in order to find its past occurrences and trace its source.
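As an illustration of the reverse-search step, the short Python sketch below builds a "search by image" URL from an image's web address, using only the standard library. The endpoint and the example image address are assumptions for illustration: this is Google's classic search-by-image URL pattern, and newer interfaces (such as Google Lens) use different addresses.

```python
from urllib.parse import urlencode


def reverse_search_url(image_url: str) -> str:
    """Build a reverse-image-search URL for an image already hosted online.

    Uses Google's classic 'search by image' endpoint, which takes the
    image's address in the `image_url` query parameter. The parameter is
    percent-encoded so the nested URL survives inside the query string.
    """
    base = "https://www.google.com/searchbyimage"
    return base + "?" + urlencode({"image_url": image_url})


# Hypothetical image address, for illustration only.
url = reverse_search_url("https://example.com/suspect-photo.jpg")
print(url)
```

Opening the resulting URL in a browser shows where else the image has appeared, which often reveals its original source or an earlier, unmanipulated version.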

Pay attention to details

But the best way to spot an image created by an artificial intelligence is still to open your eyes wide and focus on the details. For example, AIs still have a lot of trouble generating reflections or shadows. The grain of the image is often peculiar, the backgrounds are generally very blurry, and any text that appears in them is usually meaningless.

“You have to look for inconsistencies in the details. These are often photos that at first glance are very realistic, but when you look at them more closely there are often problems,” analyses Lise Kiennemann, a journalist for the France 24 Observers website, who works on these topics. “Text is problematic because the AI can’t generate it well. Another clue is the faces in the background, which are quite poorly rendered. They are blurry faces, not fully formed.”

Looking closely at the fake photos of Donald Trump’s arrest shared by the founder of the Bellingcat website, Eliot Higgins, we notice, for example, that the writing on the police officers’ caps means nothing, that the former American president is carrying a truncheon, and that there is an inconsistency in his lower limbs: he appears to have three legs. These are all clues that the images were generated by an AI, especially since at the time of their publication Donald Trump had still not been arrested.

Image generators also often create asymmetries, with disproportionate faces and ears at different heights. They also have trouble reproducing teeth, hair and fingers. At the beginning of February, images of women hugging police officers during a demonstration against the pension reform went viral, but they were quickly identified as fake because, in one of them, the police officer had… six fingers.

Towards perfection

Artificial intelligences therefore still have room for improvement, but at the speed at which they are evolving, it could very quickly become impossible to distinguish AI-generated images from real ones. “Midjourney is at V5. The difference between V1 and V5, in just a few months, is absolutely stunning. We can expect that in a few years perhaps – but I think rather in a few months – we will no longer be able to tell the difference,” explains Guillaume Brossard, a disinformation specialist and founder of the Hoaxbuster website. Midjourney itself is overwhelmed by the scale of the phenomenon: on March 30 the company announced that, buckling under “extraordinary demand and trial abuse”, it was suspending its free trial.

As a corollary of this evolution, even genuine images now sow doubt. The photo of a young woman arrested in Paris on the sidelines of the demonstrations, for example, was immediately dismissed by internet users as the creation of an artificial intelligence, until the author of the photo confirmed that it was real and that other images of the arrest, taken from another angle, corroborated his account.

“You can believe that real images are actually AI-generated and that AI images are real, so the boundaries are already very blurred, and they will blur even further in the months to come,” analyses Guillaume Brossard. “But there is one thing AIs cannot do, and that I think they are nowhere near knowing how to do: reproduce a scene from several angles. And that is a very good clue.”

Disinformation in a new era

Therefore, finding images of an event taken from different angles is a good way to check whether an image is real. Tools like the Hugging Face detector app can also estimate the probability that an image comes from an AI, but their reliability remains relative, and the situation is unlikely to improve.

Faced with these new technologies, which will usher disinformation into a new era, the best way to protect yourself therefore remains to constantly question the images you see, particularly those that try to strike an emotional chord by seeking to scandalize us. According to Guillaume Brossard, this is one of the main vectors of misinformation, and “as soon as an image generates an emotion, it is imperative to ask whether it has not potentially been tampered with in one way or another”.

With the dazzling pace of improvement in artificial intelligence, however, it is not certain that this will be enough to fight the growing influence of fake news. “Today, people only believe what they want to believe. They don’t care whether what we show them is true or not, and that’s the problem,” laments the founder of Hoaxbuster. “We are more or less following what Trump theorized with alternative facts and the post-truth era. We are right in the middle of it and we will have to learn to live with it.”

To counter this threat, media literacy remains an indispensable lever. Voices are also being raised to demand a “pause” in the development of artificial intelligence. Figures like Elon Musk, the boss of Tesla, and Steve Wozniak, co-founder of Apple, signed an open letter calling for a six-month moratorium on AI systems they consider an “existential issue” for humanity. It is a position shared by Guillaume Brossard: “There should be a moratorium, a bit like what was done at one time for nuclear weapons. Humanity should pause for a moment and decide to add a kind of fingerprint that certifies 100% that a file comes from an artificial intelligence.” But the expert concedes that, “given the stakes of disinformation today, it is really far from won.”


