AI race: “Let’s not do a French-style ChatGPT”

The “start-up nation” likes to talk about artificial intelligence. But as the Court of Auditors has bluntly reminded it, France is far from having achieved its goal of placing itself among “the top five expert countries in AI on a global scale”. According to a note from the Institut Montaigne published on April 19, France and Europe nevertheless have a card to play by specializing in AI safety and auditing. Interview with its author, Milo Rignell, resident expert on new technologies at the Institut Montaigne.

L’Express: Yann Le Cun, Luc Julia… Many influential AI experts are French. How do you explain that France is so far behind in the field?

Milo Rignell: The fundamental reason is economic. The players able to provide the computing capacities necessary to develop large AI models are the tech giants. DeepMind, for example, was originally British but was acquired by Google in 2014. Companies in Europe cannot compete with them at this level. All AI research laboratories are also setting up partnerships with American tech behemoths.

What has France’s AI roadmap, designed by Cédric Villani in 2018, achieved?

The proposed strategy has had a positive impact. It relied heavily on specific uses: investing in AI applied to health or the environment. That approach promotes the emergence of truly useful systems. By following this reasoning, however, we somewhat missed the strategic subject of general-purpose AI systems, that is, systems with no predefined use. GPT-4, the model behind ChatGPT, can answer medical questions as well as solve math exercises. We partly missed that turn.

What can France do to catch up in the field of artificial intelligence?

It is useless to chase after the American giants. There is no need to build a French-style ChatGPT: we have no chance of competing with the United States in that area. We can, on the other hand, compete on other strategic fronts, in particular AI safety. The fundamental barrier that all advanced artificial intelligences run up against is the ethical one, with AIs suddenly behaving aggressively or making up entirely fictitious facts. This area is strategic, and France has major assets for working on this key link in AI. The country has a vibrant industrial safety culture and ecosystem: the skills that allow aircraft to be built with little risk of accident and power plants to meet high safety standards. France has the necessary capabilities to become the reference player in the control and auditing of AI.

Is it difficult to align the behavior of AI systems with the general human interest?

Yes, getting AI systems to understand human preferences is a key area that remains largely unexplored to date. No country is really ahead on this subject. It is an area that requires interdisciplinary skills: machine-learning engineers have much to contribute, but they alone will not solve the mystery of what human interests are and how to translate them into a machine. There is a role for France to claim.

Why would France be better placed than other European countries on the subject of AI safety?

You don’t need the enormous computing capacity of the American giants to work on AI safety, but you still need enough to carry out the appropriate experiments on these systems. France is one of the few European countries to have it. By investing in the Jean Zay supercomputer, we were pioneers. As a result, we are the only European country to have, with the Bloom project, a large language model that comes close to the one behind ChatGPT.

France has also already worked on the safety of AI used in critical systems (nuclear, automotive, etc.). It has invested 100 million euros in this area and has acquired the first building blocks of useful intellectual property on these subjects. It must now expand its scope beyond critical environments and take on the safety of general-purpose AI, by relying on French talent and bringing in the best talent from abroad.

Easier said than done. How do you attract them?

Granted, it’s hard to compete with the salaries offered by companies like OpenAI. But it’s underestimated how dissatisfied top AI researchers are, philosophically, with the direction many labs are taking. In 2018, when Google employees opposed plans to work with the Pentagon, it was an unusual action. Today, this type of opposition concerns cutting-edge talent across the industry, not just within one company. If France shows strong ambitions on these issues of ethics and safety, it has a real chance of attracting them.

In parallel with the investments to be made, are there urgent safeguards to put in place in order to face the wave of AI calmly?

The counterpart of the investment component is the regulatory component: protecting European populations from unreliable AI that does not respect their rights. This component is being put in place with the European regulation on AI (the AI Act), even if it needs to be enriched to adapt it to “general purpose” AI such as ChatGPT. We also need to look into European data that could offer great added value. AI systems have so far made extensive use of consumer data, but industrial data will be next. We must identify and promote the European data that can make the continent more competitive.

If France develops expertise in the safety and auditing of general-purpose AI, won’t it run into refusal from foreign groups, who will surely be reluctant to reveal the inner workings of their AI?

This is the regulatory issue of the European AI Act. Should we, yes or no, require the companies behind general-purpose AIs like ChatGPT to have them assessed to verify that they meet European safety standards? In the action note we are publishing, we argue that this is necessary. Today’s AI systems may be difficult to understand, but nothing prevents them from being audited. It is a matter of economic incentive. When companies weren’t required to develop auditable AI, they didn’t. If tomorrow they have to in order to put these AIs on the European market, they certainly will.