The development of artificial intelligence (AI) is currently at the heart of global concerns, as evidenced by the fierce battle between the United States and China for the leading position. And where does France stand in all this? Since the submission of the Villani report in 2018, the country has adopted a strategy, funding, and even, less well known, a succession of national coordinators on the issue. Succeeding Bertrand Pailhès (who became director of technologies and innovation at the CNIL) and Renaud Vedel (now chief of staff to Jean-Noël Barrot at the Ministry of Digital Affairs), Guillaume Avrin has held this position, attached to the General Directorate of Enterprises at Bercy, since January 2023, barely a month after the launch of ChatGPT, which triggered global interest in generative artificial intelligence capable of producing text or images.
His mission is to spread AI throughout the economy, drawing on a budget of 1.5 billion euros allocated under the France 2030 plan. All this while favoring a form of "sovereignty", a term widely used today within the government, notably by Emmanuel Macron. In other words, not depending on foreign solutions, such as the American ones, which are currently the most advanced and mature on the market. Interview.
L'Express: What does the term "sovereignty" mean in artificial intelligence?
Guillaume Avrin: Personally, I prefer the term "strategic autonomy". It is about having the possibility of making a choice, rather than having one imposed on us. We can therefore very well integrate a recent tool like ChatGPT within our companies, for many reasons including the quality of the interaction with this conversational agent (it would be foolish to deprive ourselves of it), but we must keep in mind that we could just as well have integrated a solution developed on national territory or in the European Union. In short, it means not being dependent on international players for information or data that we do not wish to share with just anyone, which may be the case on matters of sovereignty. Moreover, we do not know how our relations may evolve with the various countries around the world that host these AI systems. We must be able to say: "If these relations ever deteriorate, we are able to fall back on internal systems."
One of the main issues remains performance. And for now, the impression is that the most capable large language models (LLMs), which underpin recent innovations in AI, are all foreign.
With the advent of ChatGPT, a little less than a year ago, we quickly realized that we were not well positioned in generative AI. Certainly, laboratories were working on the subject, notably at INRIA (National Institute for Research in Computer Science and Automation) and at the CNRS (National Center for Scientific Research). But among companies, LightOn was practically the only one that had been actively working on the issue for several years. There was therefore a real sovereignty issue. The goal was then to position a number of players along the entire generative AI value chain, from the creation of data corpora to the training of models to the development of APIs, the software interfaces that allow the models to be used. I think it worked rather well. We can cite the emergence of start-ups like Mistral, Dust, or, on the security side, Giskard. And beyond sovereignty, some of our companies are already positioning themselves in strategic verticals: I am thinking of Nabla, which stands out in the medical sector, or Poolside, in computer code, which recently raised more than 100 million euros.
How do you explain this lag, despite everything?
We must remember that we are not at the same investment levels. In the United States, a player like Microsoft has committed 10 billion dollars to a single company, OpenAI, and to a single subject, generative AI. These are extraordinary amounts, difficult to match here, whether from private or public actors. But it's true: we ask ourselves every day, "how do we manage to be competitive?". For the moment, the answers we have are as follows. First, in a catch-up situation, which was the case for the development of generalist LLMs, it costs roughly ten times less than being first.
From then on, a country like ours can get back into the race. And that is why we are already seeing the emergence of very good players such as Mistral. The latest LLM from this start-up, Mistral 7B (comprising 7 billion parameters), is considered among the best in its category. It is particularly competitive compared to Llama, from the American giant Meta. And if we do not want merely to catch up, we must make ourselves attractive to international finance. Efforts here are bearing fruit, as there have been several very good fundraising rounds in artificial intelligence in recent months. Then, of course, there is Europe. Programs like Horizon Europe or Digital Europe have significant investment budgets. Between 300 and 600 million euros are currently allocated to the creation of what is called "trusted AI", which meets standards of transparency and confidentiality. To my knowledge, no international initiative in this area comes close to these amounts. Europe is clearly the leader in this segment.
The new AI tools created today are largely based on a single electronic chip supplier: Nvidia. Is this the blind spot of any sovereignty strategy?
Nvidia is indeed the overwhelming leader in the GPU market [editor's note: the chips used for artificial intelligence computation]. You have to be pragmatic. How much would it cost us to build such an advanced component industry? Nvidia invests several billion euros in a single category of GPU. This would not be the best use of public money. As long as we have no difficulty obtaining GPUs and can build supercomputers, relying on Nvidia poses no problem. That said, we must not be cut off from it: then we would face a new sovereignty issue, since we would have no comparable alternative to compensate for the loss. But I think we need to look further ahead. The hardware, the physical infrastructure used for AI, is currently not optimized for it. There is a reasonable path toward adequate, less energy-hungry tools. In my opinion, this is the main competitiveness issue for tomorrow's AI, and it is fully in line with the goals of frugality and environmental protection championed by France and Europe. We must consume less energy. I think that if investments are to be made, they will rather be in this direction.
What advantages can France bring to bear in the world of AI?
First, I would say our skills in mathematics and engineering. We are also setting up new "AI clusters" across the regions, with a total budget of 500 million euros. The objective is to create flagship sites, places of excellence, combining training and research in order to attract the best talent on the subject. We are also considering developing dual "AI + X" skills, with doctors or lawyers for example, who will bring their knowledge to the training of specialized models in health or law. It is somewhat a continuation of the Interdisciplinary Institutes of Artificial Intelligence (3IA), deployed during the first phase of the AI strategy (between 2018 and 2022).
And all this is backed by computing power, our other strength. The president's announcements at the VivaTech show focused in particular on this, with the extension of the Jean Zay supercomputer and the creation of another, "exascale"-class machine, at the highest international level. We are also witnessing the emergence of "as a service" supercomputers for private needs. Scaleway, from the Iliad group, is positioning itself on this as we speak. Eviden, from the Atos group, is also among the world leaders. If we add quality data and carbon-free energy, which is preferable for model training and daily use, the training and fine-tuning of models could become our main asset, always with this pursuit of frugality and trust specific to France and Europe.
France recently established a committee specially dedicated to generative artificial intelligence, made up of eminent experts such as Yann LeCun, Luc Julia and Joëlle Barral. What will its role be?
The goal of this committee is to propose recommendations that go somewhat beyond the administrative framework and involve broader consultations: I am thinking of the impact of generative AI on copyright and related rights, on compliance with the GDPR and the protection of personal data... The first report is expected in March, but a first progress review will take place at the beginning of November. My role will be to identify the most relevant and readily actionable proposals in order to implement them quickly. And therefore, potentially, if necessary, to go somewhat beyond the initial budget (1.5 billion euros). That would open a new phase for the development of AI in France.