The UN wants to regulate robot killers

The 125 member states of the United Nations are meeting in Geneva until tomorrow to lay the foundations for regulations on lethal autonomous weapon systems, the infamous killer robots. An outright ban is clearly no longer on the table. As in the last attempt, consensus is far from being reached, and military doctrines around the use of these weapons differ from country to country.

A robot dog armed with an assault rifle, an Iranian scientist eliminated by a robotic weapon, swarms of killer drones: there is no shortage of recent examples to fuel the debate on lethal autonomous weapon systems (known by their French acronym SALA), in other words killer robots. Two years ago (see below), the establishment of an international treaty banning the use of lethal autonomous weapons was blocked at the UN by a minority of countries. This year, the members of the United Nations are meeting again in Geneva, until tomorrow, December 17, to debate the subject.

While the 125 members agree that a legal framework must apply to their use, an outright ban is no longer on the table, since some members already deploy these weapons operationally. No one is under any illusion that a real treaty will be established this year, but states hope that foundations will nonetheless be laid for the future. As at the last meeting, however, the same countries are dragging their feet on regulation. India, Russia, which is testing combat robots in Syria, and the United States, which does not want binding rules, are once again expected to prevent any consensus.

Killer robots without decision-making power

In France, the defense ethics committee has already given its opinion. Its members do not want the military to operate fully autonomous lethal weapon systems. On the other hand, they are not opposed to robotic weapons piloted by human operators. Boston Dynamics' Spot robot dog has already been seen working alongside French soldiers during exercises. France has even renamed SALA to SALIA, for "lethal weapon systems integrating autonomy".

This is also the doctrine of other countries, such as Australia, Israel, Turkey, China and South Korea, which are developing their own lethal autonomous weapon systems.

For its part, the NGO Human Rights Watch, which launched the Stop Killer Robots campaign, points out that these weapons will eventually fall into the wrong hands even before any regulations are in place. In contrast to the slow pace of building a legal framework, the technological race continues and robots are improving very quickly. For the NGO, once their price is low enough, there is a high risk of finding them in the guerrilla arsenals of terrorist organizations, or in certain dictatorships, used to carry out targeted assassinations.

What you must remember

  • Several countries are hampering plans to ban autonomous killer robots.
  • The scientific community, as well as the NGO Human Rights Watch, warns states against the use of these weapons, which could spiral out of control.

Lethal Autonomous Weapons: Should Killer Robots Be Banned?

At the United Nations, a few countries have just blocked the establishment of a treaty banning lethal autonomous weapon systems, or SALA. Killer robots can therefore continue their development before arriving on the battlefield. The scientific community is worried, and NGOs are stepping up their calls to impose such a treaty. What if the solution was to teach robots morality?

Article by Sylvain Biget, published on 12/15/2019

After a week of meetings at the United Nations Office in Geneva (Switzerland), the establishment of a new international treaty banning the use of lethal autonomous weapons was blocked by a minority of countries (Australia, South Korea, the United States, Israel and Russia). In the end, only about twenty non-binding recommendations were adopted at the close of the meeting, on the night of Friday to Saturday. These focused on renewing the current mandate of the group of governmental experts.

Germany and France simply proposed to maintain the principle of human control over the use of force, while President Emmanuel Macron had already declared himself "categorically opposed" to fully autonomous lethal weapons.

Yet only a treaty can protect humanity against these killer robots, and such a treaty is far from on track. For the time being, Australia, South Korea, the United States, Israel and Russia oppose any treaty proposal.

Despite its bulky size, Boston Dynamics' very agile BigDog has already been the subject of numerous experiments alongside US Army soldiers. It is not operational because of the noise its engine emits, but the firm has developed other quadrupedal robots that are more discreet, with astonishing performance. © Boston Dynamics

Do we need a code of ethics for killer robots?

SALAs, or killer robots, have been under development and testing for years. Futura regularly covers the prototypes of Boston Dynamics, a firm that has been developing robots intended for war for more than twenty years: bipedal machines and quadrupeds with impressive capabilities. Whether with these robots, autonomous drones or unmanned armed vehicles, the arrival of a "Terminator" programmed to kill predefined targets independently and coldly is no longer the stuff of science fiction. This kind of combat robot could reach the battlefield within a few years, alongside troops or in their place. Above all, the absence of regulation in this area will inevitably lead to an arms race once the first combat robot models are truly operational.

For the NGO Human Rights Watch, which launched the Stop Killer Robots campaign, the arrival of these weapons is dramatic. According to the NGO, dictators or terrorists could use them quite easily and cheaply to control or exterminate populations. They could also order the machines to carry out targeted assassinations. The NGO is not alone in raising the alarm about killer robots. On July 18, leading figures of the high-tech world, including Elon Musk, warned the member countries of the United Nations against these armaments. The signatories fear that their use in conflicts will exceed the scale of human understanding.

These reactions are far from the first. Every year since 2014, messages of this kind have been sent to the United Nations, in vain. From the beginning of these technical developments, well-known scientific figures, starting with Stephen Hawking, Max Tegmark, Stuart Russell and Frank Wilczek, have voiced their fears about the potential dangers of AI.

If no agreement can prohibit them, one solution could be to humanize them, so that they behave "morally" on the battlefield. This would mean endowing them with a code of values specific to combatants. This is what French army lieutenant-colonel Brice Erbland recommends in his book Killer Robots, published by Armand Colin. He describes what a SALA endowed with an AI capable of exercising good judgment and behaving like a soldier might look like. In other words, these autonomous combat robots should incorporate an artificial ethic. We would then no longer speak of SALA, but of SALMA (lethal, morally autonomous weapon systems).

However, for that to happen, all states would have to cooperate. For now, a majority of them still agree on the need to maintain human control over lethal autonomous weapon systems. This is why the vast majority of the 88 member states want a new treaty to be proposed in 2019.
