Generative artificial intelligence models have made the creation of misleading content accessible to everyone. Such fabricated content, when used for disinformation or scams, is known as a deepfake. Last March, Eliot Higgins, founder of Bellingcat, an independent international collective of researchers, investigators and citizen journalists, created realistic images of Donald Trump resisting arrest and being led away by police.
In his tweet, Higgins confirmed that he had used Midjourney V5. Even so, the images went viral, racking up more than 2.5 million views, and many users of the social network believed them to be real. More recently, the FBI warned of a rise in sextortion schemes based on fabricated intimate images, and in kidnapping scams in which criminals play their victims an artificial voice resembling that of a loved one.
At the end of July, OpenAI, Microsoft, Google, Meta, Amazon, Anthropic and Inflection committed to developing technology to clearly watermark content generated by their artificial intelligence models. Put simply, this means inserting a traceable element into a piece of text or an image. The aim is to make it safer to share AI-generated text, video, images and audio by reducing the risk that people are misled about the authenticity of the content. This voluntary commitment, made under pressure from the White House, was duly presented by the administration as a decisive step forward in regulating the harmful effects of new technologies.
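None of the signatories has published a common scheme, but academic work gives a sense of what a "traceable element" in text could look like. The sketch below is a toy Python version of the "green list" idea proposed in the research literature (Kirchenbauer et al., 2023); the vocabulary, function names and thresholds are illustrative assumptions, not any company's actual implementation.

```python
import hashlib
import random

# Toy vocabulary standing in for a language model's token set.
VOCAB = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot", "golf", "hotel"]

def green_set(prev_word: str) -> set:
    # The previous word seeds a deterministic split of the vocabulary.
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: len(VOCAB) // 2])  # half the words are "green"

def generate(n_words: int, seed_word: str = "alpha") -> list:
    # A watermarking generator always samples from the green half.
    words, prev = [], seed_word
    for _ in range(n_words):
        nxt = random.choice(sorted(green_set(prev)))
        words.append(nxt)
        prev = nxt
    return words

def green_fraction(words: list, seed_word: str = "alpha") -> float:
    # The detector only needs the hashing scheme, not the generator.
    prev, hits = seed_word, 0
    for w in words:
        hits += w in green_set(prev)
        prev = w
    return hits / len(words)

print(green_fraction(generate(50)))                 # 1.0: watermark present
print(green_fraction(random.choices(VOCAB, k=50)))  # ~0.5: ordinary text
```

A watermarked text scores near 1.0 because every word was drawn from the keyed "green" half of the vocabulary, while ordinary text scores near 0.5; a simple statistical test can, in principle, tell the two apart.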
While Stability AI has applied watermarking technology to its Stable Diffusion model since 2022, among the giants it was Google DeepMind that moved first, launching a new tool of this kind at the end of August. No wonder: at Google’s annual I/O conference in May, CEO Sundar Pichai said the company builds its models to include watermarking from the start, highlighting safety as a differentiator in a race where Alphabet is lagging behind. The tool, called SynthID, will initially be offered as an option in Imagen, Google’s artificial intelligence image generator. SynthID is based on a neural network that produces an image almost identical to the original, with a few subtly altered pixels, creating an embedded pattern invisible to the human eye.
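Google has not published SynthID’s internals; its embedder and detector are learned neural networks. To convey the underlying idea, an imperceptible key-dependent perturbation paired with a detector that knows the key, here is a much cruder classical analogue, a spread-spectrum-style watermark sketched in Python with NumPy; the function names and the `strength` value are illustrative assumptions.

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add a faint pseudorandom +/-1 pattern, derived from `key`, to the pixels.
    At this strength the change is far below what the eye can notice."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    marked = image.astype(np.float64) + strength * pattern
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect_watermark(image: np.ndarray, key: int) -> float:
    """Correlate the image with the keyed pattern: the score lands near
    `strength` if the watermark is present and near 0 otherwise."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    residual = image.astype(np.float64) - image.mean()
    return float((residual * pattern).mean())

# Quick check on a random stand-in "photo":
img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
print(detect_watermark(embed_watermark(img, key=42), key=42))  # ~2.0: marked
print(detect_watermark(img, key=42))                           # ~0.0: unmarked
```

Unlike this toy, a learned watermark such as SynthID is reported to survive compression, resizing and filters, which is precisely what makes the approach attractive.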
Traditionally, images have been watermarked by adding a visible overlay or by writing information into their metadata. But this approach is fragile: the watermark can be lost when images are cropped, resized or edited. It is nevertheless the route Adobe has chosen, announcing on October 10 a new symbol designed to indicate when content was generated or modified using AI tools. The logo, Content Credentials, looks like a lowercase “CR” in a curved bubble in the lower right corner. It surfaces metadata stored in a PDF, photo or video file, including information about the origin of the content and the tools, generative or conventional, used in its creation. The information is added automatically by Firefly, Adobe’s image generator.
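Content Credentials itself rests on cryptographically signed C2PA manifests rather than plain metadata, but a minimal sketch with Pillow gives a feel for the simpler metadata approach described above, and for its fragility (file names and keys here are invented for illustration):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# A stand-in for an AI-generated image.
img = Image.new("RGB", (200, 200), color="gray")

# Record simple provenance fields as PNG text chunks
# (illustrative keys, not the real C2PA format).
info = PngInfo()
info.add_text("GeneratedBy", "example-image-model")
info.add_text("ToolChain", "generative")
img.save("tagged.png", pnginfo=info)

print(Image.open("tagged.png").text)
# {'GeneratedBy': 'example-image-model', 'ToolChain': 'generative'}

# Fragility: a simple crop-and-resave silently drops the fields.
Image.open("tagged.png").crop((0, 0, 100, 100)).save("edited.png")
print(Image.open("edited.png").text)  # {} -- the provenance is gone
```

Signed manifests make tampering detectable, but stripping the metadata altogether remains trivial, which is exactly the fragility described above.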
Adobe has been leading this fight for several years as part of the Coalition for Content Provenance and Authenticity (C2PA), a project launched in 2021 that brings together the BBC, Microsoft, Nikon and Truepic. Microsoft, which has so far used a custom digital watermark with its Bing image generator, will soon adopt the C2PA system. Adobe believes the Content Credentials mark will eventually become as commonplace as the copyright symbol. However, C2PA provenance is only useful if users actually check who signed the content, and relying on users’ willingness to verify a document before sharing it has so far proven a losing bet. The architecture of social networks continues to accelerate the spread of content that sparks outrage. The question that will inevitably arise is that of preventive censorship of watermarked content by social networks’ moderation algorithms.