Planes, software, “killer robots”… How AI will revolutionize warfare

The scenario comes straight from ChatGPT, which was asked to describe the impact of artificial intelligence on a land battle in 2040. Thanks to AI, artillery strikes combine precision and coordination, cyberattacks are relentless, and troops and drones clash according to tactical models adjusted in real time. We also asked what would tip the scales: the defeated army, in this scenario, is the one that followed to the letter the recommendations of an AI fed with deceptive data streams… generated by the AI of the victorious forces.

OpenAI’s software is no soothsayer; it has merely digested the many reports and discussions of potential military uses of AI. “It will be used in every aspect of what the army and the security services do,” warned Gregory Allen during a recent seminar at the French Institute of International Relations. “It is already present in certain niches,” continued Allen, who directs a center devoted to these technologies at the American think tank CSIS, citing “the processing of satellite images” and “predictive maintenance,” which anticipates repairs before failures occur – enough to save money and increase the availability of equipment.

In Ukraine, AI already plays a crucial role in the fighting. Using smartphone applications such as Gis Arta or Kropyva, Ukrainian forces have managed to shrink to a few minutes, even seconds, the loop between identifying and striking a Russian target, drawing on data collected by drones or satellites. “Ukraine has opened a window that will not close; AI has become a reality on the battlefield,” notes Alexandre Papaemmanuel, an intelligence specialist who teaches at Sciences Po Paris. “The armies that will make the difference are those that master AI and software.”

Terminator will not parade on July 14

In addition to the processing and interpretation of large volumes of data, Michael Horowitz, director of the emerging capabilities policy office at the Pentagon, has identified two main types of application. The first is command assistance: AIs will be able to anticipate enemy troop movements and propose courses of action that take countless parameters into account, such as the armor of the opposing equipment, the state of the roads and of the weapon systems, and even the lessons of past battles.

The second application identified by Horowitz concerns the autonomy of machines. AI will be at the heart of sophisticated weapons such as the SCAF (the “future air combat system”), the fighter jet destined to succeed the Rafale, accompanied by various drones and missiles, all coordinated by a “combat cloud.” Taken to the extreme, these technologies could lead to weapons capable of detecting and striking targets without a human operator. This could be the case with swarms of hundreds or even thousands of drones programmed to readjust to one another, a field in which China aims to be the leader.

The United States is hardly to be outdone, thanks to the many programs launched over the past twenty years by the Pentagon’s research agency, DARPA. It was on this model that France created its Defense Innovation Agency (AID) in 2018. “We offer the armed forces technological building blocks from the civilian sector, or we launch new projects to meet an operational need,” explains Michaël Krajecki, director of the artificial intelligence unit at the AID. Start-ups and academia are thus teamed with manufacturers to develop AI, such as that of the new armored vehicles (Griffon, Jaguar, Serval) of the Scorpion “collaborative combat” program.

Paris is, in any case, at the forefront of the ethical questions posed by lethal autonomous weapon systems (SALA), or “killer robots.” “Terminator will not parade on July 14; France refuses to entrust the decision over life or death to a machine that would escape all human control,” insisted the former Minister of the Armed Forces, Florence Parly, in 2019. A defense ethics committee has since “framed developments in lethal weapon systems integrating autonomy (SALIA), so that sufficient human control is maintained,” recalls Michaël Krajecki.

It is not certain that China and Russia share the same scruples. “In combat, we are in a race for speed,” points out a senior officer. “If you have no automatic mode at all, because a human must remain in the loop at every stage, the lag behind the adversary could translate into human losses for which you will be blamed.” And tip the balance in the battles of the future.
