Artificial intelligence advises Israel which house to strike – expert: “Superior compared to humans”

The pace of war is accelerating as the use of artificial intelligence in warfare becomes more common.

So believes Lieutenant Commander Lauri Vasankari of the National Defence University.

Vasankari holds master’s degrees in military science and in artificial intelligence and data science, and has specialized in machine learning and the application of AI in warfare.

Using artificial intelligence in war might bring to mind autonomous weapons or robot armies. According to Vasankari, those mental images do not match today’s reality.

Instead, weapon automation, self-targeting missiles and various smart munitions have been part of the everyday life of armies for a long time.

However, they are still rare on the battlefield, as smart systems are expensive.

– When we talk about applying artificial intelligence in war, we mainly mean processing large masses of data and using them to produce and analyze intelligence, says Vasankari.

For example, the Ukrainian army uses artificial intelligence to build situational pictures and to designate targets.

– Ukraine has not given details of what kinds of targeting operations the data is used for, but artificial intelligence may be involved in fire control, troop movements or the optimization of logistics, Vasankari explains.

Recently, Ukraine has managed to use drones to penetrate deep into Russia and strike Russian oil refineries.

The National Defence University expert does not believe that Ukraine uses artificial intelligence to select the oil refinery targets, but he considers it possible that Ukraine uses AI to correct the location data of drones during operations.

With the help of artificial intelligence, drones can maintain their strike accuracy even hundreds of kilometers beyond Ukraine’s border.
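No details of any such system have been published. The following is a purely hypothetical sketch of the general idea: blending a drone’s drifting onboard position estimate with an occasional AI-derived fix, for example one obtained by matching camera imagery to a stored map when satellite navigation is jammed. All names and numbers are invented.

```python
# Hypothetical sketch only: the article says AI *may* be used to correct
# drones' location data in flight, but publishes no details. One generic
# way such correction works is to blend a drifting onboard estimate with
# an occasional absolute fix.

def blend_position(onboard: tuple[float, float],
                   ai_fix: tuple[float, float],
                   fix_confidence: float) -> tuple[float, float]:
    """Weighted blend of the drone's own estimate and an AI-derived fix.
    fix_confidence in [0, 1]: 0 ignores the fix, 1 trusts it fully."""
    (x0, y0), (x1, y1) = onboard, ai_fix
    w = max(0.0, min(1.0, fix_confidence))
    return (x0 + w * (x1 - x0), y0 + w * (y1 - y0))

# Invented coordinates (kilometers in a local frame):
estimate = (120.4, 87.9)   # inertial estimate, drifting over time
fix = (119.8, 88.6)        # position inferred from imagery matching
print(blend_position(estimate, fix, fix_confidence=0.8))
```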

Israel also uses artificial intelligence in its attacks in the Gaza Strip.

Gaza is a test laboratory for artificial intelligence

The Israeli military has said it is using two artificial intelligence programs to target Hamas fighters in the Gaza Strip.

According to Vasankari, little information about the Lavender and Gospel AI systems has reached the public.

However, several international outlets, including the Guardian and NPR, have reported on the systems’ capabilities, basing their reporting on intelligence sources.

According to the independent Israeli outlet +972 Magazine, the Gospel combines data from many sources and produces target recommendations for the Israeli army. Lavender, in turn, is an artificial intelligence based on mass data: it gives human officers estimates of how likely a given person is to be a fighter for the extremist group Hamas.

The Israeli army has confirmed that it uses the Gospel and Lavender in its attacks in Gaza.
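Nothing technical has been published about how either system actually works. Purely as a hypothetical illustration of the general idea the reporting describes, fusing signals from several intelligence sources into a per-person likelihood estimate for human review, a sketch might look like this. Every name, signal and weight below is invented.

```python
# Invented illustration only; not based on any published detail of
# Lavender or the Gospel. It shows the generic idea of fusing weighted
# signals from several intelligence sources into one likelihood score.

def fighter_likelihood(signals: dict[str, float],
                       weights: dict[str, float]) -> float:
    """Combine per-source signal strengths (each in [0, 1]) into a single
    likelihood estimate handed to a human analyst for review."""
    score = sum(weights.get(name, 0.0) * value
                for name, value in signals.items())
    return max(0.0, min(1.0, score))  # clamp to a probability-like range

# Invented example inputs for one person:
signals = {"comms_metadata": 0.7, "movement_pattern": 0.4, "associations": 0.9}
weights = {"comms_metadata": 0.3, "movement_pattern": 0.2, "associations": 0.5}
print(f"Estimated likelihood: {fighter_likelihood(signals, weights):.2f}")
```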

According to Vasankari, Israel had probably been gathering intelligence systematically for years, but began using the data for large-scale military purposes after the Gaza conflict escalated into war on October 7.

In November, the Israeli army said on its blog that it uses artificial intelligence to track Hamas targets.

At the end of 2023, Israel announced that it would begin a large-scale ground operation in Gaza to eliminate the fighters of the extremist organization Hamas.

At the same time, attacks on Gaza’s infrastructure intensified.

Images taken by the Sentinel-1 satellite of the European Union’s space program showed at the end of January that approximately 50 percent of the buildings in the Gaza Strip had been damaged or destroyed in Israeli attacks, the British broadcaster BBC reported.

At this time, it is not possible to independently determine the connection between Israel’s massive strikes on Gaza’s infrastructure and the use of artificial intelligence.

However, according to the Finnish expert, it is obvious that algorithms can sift through and combine large amounts of data much more efficiently than humans. For example, artificial intelligence can identify potential targets faster than intelligence officers can.

– Compared to humans, they are superior at forming intelligence and situational pictures. We can easily talk about a hundredfold efficiency, Vasankari says.

AIs are trusted, and that can be a problem

According to Vasankari, the use of artificial intelligence does not automatically mean more suffering for the civilian population.

– It doesn’t automatically reduce it either. Everything depends on the user. A human decides the threshold values on the basis of which the artificial intelligence operates. Threshold values can be, for example, the acceptable number of bystander casualties or how much infrastructure may be destroyed in an attack.

– In an ideal situation, artificial intelligence technology can be used so that it picks out only military targets and avoids civilian casualties, the expert says.
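As a minimal sketch of what such human-set threshold values could look like in principle (none of these fields or numbers come from any real system):

```python
from dataclasses import dataclass

@dataclass
class TargetRecommendation:
    # Hypothetical fields for an AI-produced candidate target.
    target_id: str
    fighter_probability: float        # model's estimate that the person is a combatant
    est_bystander_casualties: int     # estimated civilian bystanders at risk
    est_infrastructure_damage: float  # estimated share of nearby structures destroyed, 0..1

# Human-decided threshold values, as the expert describes.
# The numbers are invented for illustration only.
MIN_FIGHTER_PROBABILITY = 0.95
MAX_BYSTANDER_CASUALTIES = 0
MAX_INFRASTRUCTURE_DAMAGE = 0.1

def passes_human_thresholds(rec: TargetRecommendation) -> bool:
    """True only if the AI's recommendation stays within limits that a
    human operator set in advance; anything else goes to human review."""
    return (rec.fighter_probability >= MIN_FIGHTER_PROBABILITY
            and rec.est_bystander_casualties <= MAX_BYSTANDER_CASUALTIES
            and rec.est_infrastructure_damage <= MAX_INFRASTRUCTURE_DAMAGE)
```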

Israel has justified its use of artificial intelligence by arguing that targeted attacks save lives.

According to the extremist organization Hamas, more than 30,000 Palestinians have died in Gaza during the war. There is no independent count of civilian casualties; the figure is based on numbers reported by the Ministry of Health, which operates under Hamas and which the UN also cites.

According to the expert, one problem with artificial intelligence is that users may trust it without questioning its decisions. Yet artificial intelligence does make mistakes.

– Anyone who has used the most common language models has noticed this. With a language model, the user can judge for themselves whether it is working correctly. In warfare, that is much harder. If artificial intelligence is applied to the use of force, the consequences are far more irreversible than a language model’s errors.

According to Vasankari, the central ethical problem of artificial intelligence is this: who is responsible if the artificial intelligence makes a mistake?

The AI race is on

In addition to Israel, the United States, Russia and China are currently developing their own artificial intelligence. According to Vasankari, the world is now living through its own version of the 1960s space race.

– Back then, the race was to be first to reach the Moon. Now the competition is over who develops the best artificial intelligence first.

The development of artificial intelligence in warfare is interesting because it can offer an advantage in, for example, forming a situational picture, supporting decision-making and fielding autonomous systems, Vasankari sums up.

Data protection legislation, however, limits how democratic European countries can utilize data.

– In democratic countries, there is also a consensus that decision-making regarding artificial intelligence and autonomous weapons must be the responsibility of humans, says Vasankari.

This is not necessarily the case in authoritarian states. The biggest threat is that not all states will commit to the same AI principles.
