Europe legislates without vision, by Robin Rivaton – L’Express

On December 8, 2023, after months of intense negotiations, the European Parliament and the Council reached a political agreement on the European Union's artificial intelligence law, the AI Act. Hailed by the President of the Commission, Ursula von der Leyen, as a world first, this text is much stricter than the executive order issued by the White House a few weeks earlier.

This legislation aims to ensure the safety of AI systems placed on the EU market by strictly defining four risk classes. Minimal-risk systems will be exempt from any obligation, as they do not threaten the rights or safety of citizens. For limited-risk systems, transparency is required: users must be informed that they are interacting with a machine when dealing with a conversational agent, and synthetic audio and video content must be labeled. While some systems will be banned outright, such as facial recognition for law enforcement purposes in publicly accessible spaces, providers and users of so-called high-risk systems, such as social rating systems applied to recruitment, will have to comply with strict requirements on bias limitation, data protection and documentation.

Everything will have changed in two years

The obsolescence of this regulation is already evident. It will enter into force after its publication in the Official Journal in the summer of 2024, but will only apply after a grace period of two years, except for the bans. A difficult birth for a project launched in 2018, following the publication of the European strategy on artificial intelligence. Everything will have changed by 2026, and banning tools whose development no one controls seems a highly presumptuous exercise. In the meantime, with disarming candor, the Commission will launch an AI Pact, bringing together developers who voluntarily commit to applying the rules ahead of schedule.

The obsession with general-purpose models

It also reveals a total absence of industrial policy, with an obsession with what is visible, namely general-purpose AI models (GPAI), to which the famous large language models (LLMs) belong. After lengthy discussions, a compromise was found, with strengthened obligations for those presenting a systemic risk, and broad exemptions for open-source models, which are developed from freely available code that developers can modify for their own products and tools. The aim was to preserve the competitiveness of European open-source AI companies, notably France's Mistral and Germany's Aleph Alpha.

But there is, as with the cloud, a clear misunderstanding of the order of magnitude of the resources needed to develop local players. In November, the Commission announced with great fanfare a competition giving four European companies access to one million hours of supercomputer time each to train their models. A laudable initiative, immediately ridiculed online: this quota barely corresponds to one week of training on Meta's SuperCluster.

A barrier for corporate clients

This obsession with foundation models makes us forget that AI is not a scientific competition but an industrial revolution that must be incorporated into all products. The software used by car repair shops, building managers and agri-food factories must evolve with artificial intelligence so that the continent's overall productivity improves.

We could reassure ourselves by saying that this regulation will not be applied, that the market surveillance authorities in each Member State will be saturated, that companies will be able to self-assess their systems as minimal risk, and that this text ultimately amounts to the kind of exercise in ostentatious virtue the European Union so often produces. But the risk it poses, particularly through the right given to consumers to file complaints, is real, and will undoubtedly discourage certain sectors from incorporating AI into their tools.
