Faced with OpenAI’s Sora and its fake videos, there is an obvious solution – L’Express

A pelican cycling on a coastal path. Genghis Khan's horde of warriors galloping across the steppe. An aerial view of New York's skyscrapers. If social networks have been flooded with unusual videos for the past few hours, it is because OpenAI decided to make a splash before the opening, this Tuesday, December 10, of the NeurIPS conference, the AI world's flagship event, which gathers the sector's elite. On Monday evening, the leader in generative AI launched its highly anticipated AI video generation model, Sora. Available via ChatGPT Plus and Pro subscriptions ($20 and $200 per month), it is now offered in many countries, including the United States, India and Japan – but not yet in the European Union. Sora lets Internet users generate superb 20-second clips in landscape or square format from a simple sentence. The Christmas present conspiracy theorists dreamed of?


Many rightly fear a surge in fake news. The renowned American tech YouTuber Marques Brownlee showed how convincing the results can be with an AI-generated newscast: the text in the "news alert" banner makes no sense, but the faces of the fake presenters look disturbingly real. And the montage showing the viewer traffic lanes disrupted by fog has the dynamic rhythm of a professional video.

The rendering is far more realistic than the fake video sequence – also generated by AI – showing Donald Trump and the current first lady, Jill Biden, squabbling during the reopening ceremony of Notre-Dame de Paris.

Of course, OpenAI has implemented certain security measures and, for example, blocks the creation of sexual deepfakes. However, it is difficult for any AI player to anticipate every possible malicious request, and above all the ways Internet users are likely to phrase them.

Tracking down fake AI-generated videos

Internet users show great ingenuity in circumventing these tools' defenses. When a company's AI refused to give them information reserved for management, some discovered that simply asking it to do so "in nautical language" was enough to make it change its mind and comply. That flaw has since been fixed, but other flaws of this kind certainly still exist. Not to mention the open-source models that experts manage to modify in order to make them more efficient or… less strict.


Researchers are working hard to improve their AI-content detection techniques. Like many companies, OpenAI embeds a sort of digital signature (metadata and watermarks) in Sora's images to limit abuse. And online, many detectors estimate the probability that content of unclear origin was generated by an AI by studying certain abnormal statistical regularities. These tools will only become more useful as they improve. But malicious actors are also perfecting their camouflage, for example by developing tools that slightly alter AI content in order to erase the signs that could betray it. It is therefore illusory to think that all fake news can one day be detected automatically.
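The watermark-and-camouflage arms race described above can be illustrated with a deliberately naive sketch. This is not OpenAI's actual scheme (whose internals are not public here); real systems use robust watermarks and C2PA-style signed metadata. The toy version hides a known bit pattern in the least significant bits of pixel values, and a one-unit brightness shift is enough to destroy it:

```python
# Toy watermark sketch (hypothetical, NOT OpenAI's real method):
# hide a known bit pattern in the least significant bits (LSBs)
# of the first few pixel values, then look for it again.

PATTERN = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature

def embed(pixels):
    """Overwrite the LSB of the first len(PATTERN) pixels with the signature."""
    out = list(pixels)
    for i, bit in enumerate(PATTERN):
        out[i] = (out[i] & ~1) | bit
    return out

def detect(pixels):
    """True if the LSBs of the first pixels match the signature."""
    return [p & 1 for p in pixels[:len(PATTERN)]] == PATTERN

image = [120, 53, 200, 87, 14, 99, 240, 61, 33, 170]  # fake pixel data
marked = embed(image)
print(detect(marked))    # True: watermark present

# The "camouflage" the article mentions: a tiny perturbation
# (here, +1 brightness on every pixel) flips every LSB and
# erases this naive mark without visibly changing the image.
tampered = [p + 1 for p in marked]
print(detect(tampered))  # False: signature destroyed
```

This is exactly why the article calls purely technical detection a losing race: the more fragile the mark, the cheaper it is to strip.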

There is, however, a very simple solution to this problem. In the world of generative AI, we must come to terms with the idea that the reliability of information is no longer judged by its content but by its source. It matters little whether the images look real or the figures seem credible: which person, which organization is spreading them? And is this its official account? Proving worthy of Internet users' trust will pay off in the age of AI.

With Worldcoin, identity becomes a business

Humanity has already proven its great capacity to adapt to technological progress. Internet users no longer take any blog's assertions at face value. They have learned to spot photoshopped images. And companies have already adapted to the first generations of "bots", those programs that pose as genuine Internet users in order to inflate a site's audience, scrape data or paralyze a platform. This is what gave birth to "Captchas", those annoying quizzes that force us to copy illegible letters or count traffic lights.


If the scourge of fake news has not already been resolved, it is because it is tied to a deeper problem. "A study carried out in 26 countries shows that the factor which best explains differences in attitudes towards conspiracy theories is the level of corruption in the public sector. The more corrupt a country's public sector is, the more its population believes in conspiracy theories," Laurent Cordonier, sociologist and research director at the Descartes Foundation, explained to L'Express. Educated citizens living in a democracy, by contrast, are better at distinguishing fake news from solid information and, in general, favor reliable and independent sources when they exist. That does not prevent part of the population from relaying fake news out of a desire for notoriety or out of ideology. "Motivated belief is a well-known phenomenon: we more readily believe what suits our point of view," Laurent Cordonier pointed out last June.

But the realism of fake news is not the key factor in whether people believe it. Hunting it down is like trying to fill the Danaids' bottomless barrel. So let's take the problem in reverse: the authentication business is booming. Companies like WizTrust certify press releases and corporate publications via blockchain. Sam Altman, CEO of OpenAI, is also mining this vein. He helped found Worldcoin, recently renamed "World", a project that proposes nothing less than providing every human being with digital proof of identity in exchange for an iris scan.
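The certification-by-source idea can be sketched in a few lines. This is a toy model, not WizTrust's actual protocol: real deployments anchor the fingerprint on a blockchain rather than in a local set, and the company name and release text below are invented for illustration. The principle is simply that the publisher registers a cryptographic fingerprint of its official text, and anyone can later recompute it to check authenticity:

```python
import hashlib

def fingerprint(text: str) -> str:
    """SHA-256 fingerprint of a document's exact text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# The publisher registers the fingerprint of its official release.
# (A stand-in for a blockchain entry in real systems.)
official_release = "ACME Corp announces Q4 results: revenue up 12%."
registry = {fingerprint(official_release)}

def is_certified(text: str) -> bool:
    """True only if this exact text was registered by the source."""
    return fingerprint(text) in registry

print(is_certified(official_release))                      # True
# A doctored copy, however plausible it looks, fails the check:
print(is_certified(official_release.replace("12", "45")))  # False
```

Note the inversion the article argues for: the check says nothing about whether the content *looks* true, only whether it comes, unaltered, from the claimed source.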
