Regulatory overdose and French lamentations – L’Express

Emmanuel Macron’s irritation with the AI Act, the major European legislation on artificial intelligence, was palpable. “We can decide to regulate much faster and more strongly, but we will be regulating what we do not invent,” the President of the Republic jabbed from Toulouse on Monday, December 11, during a progress review of the France 2030 plan. The sentence, phrased in the future tense, almost suggested that negotiations were still underway. Yet a compromise on the text had already been reached two days earlier, after 38 hours of debate recounted in detail on X (formerly Twitter).

One of its architects, European Commissioner Thierry Breton, has in any case shut the door on any major change, given the effort it took to get there. “The text has been approved; it is no longer open for discussion,” he told La Tribune. Emmanuel Macron is not the first to be irritated by it. His Minister for Digital Affairs, Jean-Noël Barrot, also spoke out on Saturday. “We must avoid crushing European innovators under excessively heavy regulations,” he urged, before calling for the innovation capacity of start-ups to be “preserved”.

According to the Financial Times, France, alongside Germany and Italy, is now reportedly pushing to “block” the adoption of the AI Act. Not the part concerning use cases – which commands consensus – with its bans on citizen-scoring systems and behavior-manipulation tools. But, at the very least, the part concerning text- and image-generation models, technologies that are still very young and a segment that could quickly become obsolete. “Technology has evolved so much in the single year since the release of ChatGPT; imagine what the text’s implementation will look like in 2026…,” sighs Marianne Tordeux Bitker, head of public affairs at France Digitale, a French digital lobby group, speaking to L’Express.

Code of Conduct

In the great generative-AI war, the camp favoring light regulation has apparently been heard. Thierry Breton himself described the AI Act as “pro-business”. The text is much lighter than an earlier draft that, for a time, made the ecosystem shudder. “The balance struck between innovation and regulation is good,” acknowledges Mehdi Triki, director of public relations for Hub IA France, another sector lobby. Generative-AI companies specializing in open source – whose code used to build their large language models (LLMs) is freely shared – are spared the heaviest regulatory constraints on transparency. Just as well: Europe is banking heavily on them, starting with the French champion Mistral AI. The latter announced at the beginning of the week (the timing is no coincidence) a funding round of nearly 400 million euros, becoming in the process the first French AI unicorn. The strictest obligations would, for now, apply only to the American models GPT (behind ChatGPT) and the more recent Gemini (Google), considered to pose “systemic” risk because of their power. Mistral’s direct rivals.

Advocates of tougher regulation on safety grounds are also dissatisfied. There is no notion of model “frugality,” notes Jean-Baptiste Bouzige, head of the data-science company Ekimetrics: “not very serious, in the middle of COP28.” For his part, Raja Chatila, professor of robotics, AI and ethics at ISIR-CNRS and Pierre and Marie Curie University, regrets the blank check given to open-source software.

The French laments are not, however, unfounded. In principle, generative AI must now live with a “code of conduct”. “Thierry Breton has pushed a logic of labeling. Models will have to prove their virtuous approach upstream, not after the fact, once they have innovated and perhaps done whatever they pleased,” welcomes the head of Ekimetrics. Many models will therefore have to show a certain degree of transparency, provide information on the sources used to train them, and respect copyright. “A good thing for the user,” points out Jean-Baptiste Bouzige. Less so for businesses: this compliance work, however modest, still costs time and money. “Even though the AI Act will not come into force until 2026, this process, which requires, among other things, hiring legal specialists, often begins the day after the official text is voted on,” explains Winston Maxwell, director of law and digital studies at Télécom Paris and a former New York Bar lawyer.

Regulatory overdose

Big Tech will easily cope with this. Less so emerging companies, particularly in Europe, a continent already proactive on digital regulation in the broad sense. “Remember that the AI Act is not the only text recently adopted or currently being adopted on the continent. In tech, there are also the DSA, the DMA and the Data Act [editor’s note: the regulations on platforms, online commerce and data],” adds Olivier Martret, partner at the investment fund Serena Capital. The risk of regulatory overdose is never far away.

Today, it is the real cost of this compliance – and therefore its impact on the competitiveness of European AI – that remains uncertain. Especially since the precise text of the compromise has not yet been made public. “Not all of the criteria used to determine whether a model poses systemic risk are known yet,” confides Marianne Tordeux Bitker. A compute-power threshold has indeed been mentioned, but a model’s number of users, or the number of parameters used in its training, could also come into play. Finally, many observers fear that the AI Act will be toughened over time, perhaps to the detriment of open source. “New power or user thresholds could make their regulatory exemption disappear,” fears France Digitale. Will Mistral soon be treated like Google or OpenAI?

This is what France, together with its German and European partners, is trying to avoid. But this return to the negotiating table carries a risk for the anti-regulation camp: that the opposing camp gets heard as well. “I understand that companies engaged in the artificial intelligence race want to move fast. But we must innovate properly. If they do not develop trustworthy AI, they will lose market share regardless,” believes Raja Chatila. Unlike Emmanuel Macron, the scientist is convinced that regulation has never hindered innovation. And he points out, a touch caustically: “Europe has neither a cloud nor a company comparable to Google, Microsoft or Amazon. It is lagging behind in digital matters, and has been for many years. No law is the cause of that.”
