It is a tool with which the military will have to become familiar. AI is a crucial issue for the future of the armed forces, whether for intelligence or for mastering "collaborative combat". It also raises ethical questions, given the risks posed by the development of autonomous "killer robots". On all these points, Michaël Krajecki, Artificial Intelligence project director at the Defense Innovation Agency (AID) since 2019, insists that France "is not behind".
L'Express: Why can't the French armed forces afford to miss the AI turning point?
Michaël Krajecki: Like the civilian world, they are digitizing their systems, which produce a great deal of data. We thus have more and more data arriving on the ground from our satellites, without having more experts to analyze it. Thanks to recent advances in algorithms and computing power, AI makes this processing easier.
More broadly, seven fields of application have been identified for the French armed forces:
1) Decision-making and planning support, where AI will help size the resources needed for a mission and generate training scenarios.
2) Collaborative combat, to coordinate units and achieve the desired effect.
3) Cybersecurity and influence, in order, among other things, to guarantee the protection of our IT systems.
4) Logistics and maintenance in operational condition, so as not to run out of a spare part, for example.
5) Intelligence processing, in particular satellite imagery.
6) Robotics and autonomy, with systems that assist the combatant.
7) Support, a more administrative axis covering human resources management.
In what areas are the armed forces already using AI?
One example is transport flows, which are well controlled in both the defense world and the civilian sector. Work is also under way on collaborative combat within the Scorpion program [the renewal of the French Army's armored vehicles with platforms networked to one another]. If several vehicles detect a shot being fired, they will be able to calculate its trajectory and propose a response. There is also the Artemis.IA program, run by the Digital Defense Agency, which is expected to provide various operational generative AI platforms. The first should be deployed at the end of the year and will concern intelligence.
What exactly is the AID doing to develop AI?
We are attached to the Delegate General for Armament, to whom we report on the developments we carry out for the benefit of the armed forces. We either draw on technological building blocks that already exist in the civilian sector or launch new projects to meet an operational need. We also collaborate with academic players such as INRIA, CNRS and CEA, which lets us work on subjects that attract less interest outside the defense sector, such as frugal AI, which relies on little data. We also take part in international projects within the framework of the European Defence Fund, or bilaterally: with the Singapore Ministry of Defence, for example, we have created a joint research and development laboratory in the field of AI.
Why are “start-ups” essential for developing military AI?
They make it possible to bring in technologies that are still being developed. The AID has them work with the major defense manufacturers so that they can propose improvements to their equipment. This is the case with the MALICIA program [Maturation agile des logiciels pour l'intégration des composants d'intelligence artificielle, agile software maturation for integrating AI components] for Thales radars. There is also the ARES program [Action et résilience spatiale, space action and resilience], where startups were selected to develop an AI capable of identifying space objects.
Ukraine relies on civilian-developed AI to fight the Russians…
Yes, civilian technologies contribute quite significantly to war efforts. Having an efficient civilian ecosystem makes it possible to benefit from them for defence; it is the very essence of the AID to be able to incorporate them into our operational systems. The fact that the agency was created five years ago, and not in reaction to recent conflicts, shows that AI is a fundamental trend for the armed forces.
Driven by their digital giants and their vast defense budgets, won’t the United States and China outpace us?
There are discussion forums with the United States to share on these subjects and to consider joint capacities for action. Chinese and American efforts are on another scale, of course, but I don't think France is behind. France must reason in an inter-allied framework, with bilateral and multilateral collaborations.
France is at the forefront of the ethical questions posed by autonomous weapons. Doesn't this risk limiting its capabilities against less scrupulous opponents?
The Ministry of the Armed Forces has set up an ethics committee. One of its first studies focused on lethal autonomous weapon systems (SALA, in the French acronym), which, it should be remembered, do not yet exist. The committee has framed developments around lethal weapon systems integrating autonomy (SALIA), so that sufficient human control is maintained. These ethical principles guide the design of the systems in which the AID takes part. Other players will not take the same precautions, but I am not sure that gives them an operational advantage.