Artificial intelligence is being introduced into every segment of modern combat, notably for a better reading of the battlefield, and it is now making its mark underwater too. The French Navy calls them “golden ears”: the analysts capable of identifying the sounds captured beneath the surface of the sea. AI is set to revolutionize their profession, with one objective: to move fast and win the acoustic war.
Tock toc tac tac tac tac… This regular beat is the sound of an oil tanker as heard underwater, a characteristic noise. A submarine’s golden ear could tell that this ship’s propeller has five blades and that its shaft line rotates at 120 rpm. Crucial information for the Navy, and for the submarine in particular, stresses Commander Vincent Magnan, head of the acoustic interpretation and reconnaissance center (CIRA) in Toulon.
“There’s a lot going on under the dioptre, as we say in our community [the dioptre is the sea surface]. To give you a very concrete example: a commercial vessel is heard by the sonar of a frigate or a submarine through what is called its radiated noise, which can be composed of several types of sounds. One of the characteristic sounds is what we call the shaft rate, the number of shaft revolutions per minute, that is, the rotation speed of the shaft line that propels the ship, which is also associated with a number of blades. When we master this information, we know the speed of the vessel we are looking for. And depending on that speed, we can devise a maneuvering plan. The real lesson is that passive acoustic warfare makes it possible, in complete discretion and without raising the crisis level, to capture technical information from which decisive tactical conclusions for operations follow.”
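For the arithmetic behind that claim, here is a minimal worked example (ours, not the Navy’s tooling) using the tanker figures above: the blade-passing rate a golden ear listens for is simply the shaft rate multiplied by the blade count.

```python
# Back-of-the-envelope arithmetic with the tanker figures quoted above:
# the blade-passing rate heard in the radiated noise is the shaft rate
# multiplied by the number of propeller blades.

SHAFT_RPM = 120        # shaft-line rotation speed
BLADE_COUNT = 5        # propeller blades

shaft_rate_hz = SHAFT_RPM / 60                 # 2.0 Hz: one cycle per revolution
blade_rate_hz = shaft_rate_hz * BLADE_COUNT    # 10.0 Hz: one pulse per blade pass

print(f"shaft rate: {shaft_rate_hz:.1f} Hz, blade rate: {blade_rate_hz:.1f} Hz")
```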
And this is all the more important for a submarine, which by definition is blind. But acoustic sensors are more and more powerful, and the golden ears consequently face an inflation of data, notes Commander Magnan.
“At the beginning of the 2000s, a sonar operator had equipment that let him hear roughly 20 km away and process around ten acoustic contacts simultaneously. Today we use sonars capable of detecting out to almost 200 km and of tracking nearly a hundred acoustic contacts at once. The volume of data to be processed has therefore increased considerably. The direct consequence, for the golden ears at sea, is that analyzing all these acoustic contacts demands a far greater human commitment than before.”
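A back-of-the-envelope calculation (our illustration, not Commander Magnan’s) shows why a tenfold jump in range means such an inflation of data: the surveilled area grows with the square of the detection radius.

```python
import math

old_range_km, new_range_km = 20, 200   # the figures quoted by Commander Magnan

area_ratio = (new_range_km / old_range_km) ** 2
print(f"surveilled area grows by ~{area_ratio:.0f}x")   # ~100x
print(f"from ~{math.pi * old_range_km**2:,.0f} km² "
      f"to ~{math.pi * new_range_km**2:,.0f} km²")
```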
Preligens algorithms
Artificial intelligence will make it possible to discriminate sounds much more quickly. And this is where a French gem comes in: Preligens, well known for its analysis of satellite imagery, has put its algorithms at the service of acoustic warfare. A demonstrator was built last year, with a first experiment: for twelve days, the Navy recorded all the sounds of the sea off Toulon.
“Those twelve days of recordings had to be annotated in order to train the artificial-intelligence algorithms. It took us almost forty days to annotate those twelve days of audio,” explains Vincent Magnan. “Now, with the algorithm and the demonstrator we obtained, we inject the twelve days of acoustic recordings into the machine, and in around four hours it produces the phases on which the analysts can bring their professional skills to bear. So from an initial forty days, we went down to five or six. The goal is to be able to analyze more and more data. In 2020, CIRA received approximately one terabyte of data annually. In 2024, we are looking at ten terabytes of acoustic data. We will certainly exceed a hundred terabytes by 2030.”
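Neither the Navy nor Preligens has detailed how the demonstrator works, so the sketch below is only a hypothetical illustration of the triage idea Magnan describes: automatically flag the phases of a long recording whose in-band energy stands out, so that analysts review those segments instead of the full twelve days. The function name, frequency band, and threshold are all assumptions for the example.

```python
# Hypothetical sketch of AI-assisted triage (NOT Preligens's algorithm):
# flag time windows of a recording whose in-band energy stands out, so the
# analysts only review those phases. Band and threshold are assumptions.

import numpy as np
from scipy.signal import spectrogram

def flag_phases(audio, fs, band=(50, 1000), z_thresh=5.0):
    """Return (start, end) times in seconds of windows with anomalous energy."""
    f, t, Sxx = spectrogram(audio, fs=fs, nperseg=4096)
    in_band = (f >= band[0]) & (f <= band[1])
    energy = Sxx[in_band].sum(axis=0)        # in-band energy per time window
    med = np.median(energy)                  # robust baseline: typical sea noise
    mad = np.median(np.abs(energy - med)) + 1e-12
    hits = (energy - med) / (1.4826 * mad) > z_thresh
    # Merge consecutive flagged windows into contiguous phases.
    phases, start = [], None
    for time, hit in zip(t, hits):
        if hit and start is None:
            start = time
        elif not hit and start is not None:
            phases.append((start, time))
            start = None
    if start is not None:
        phases.append((start, t[-1]))
    return phases

# 60 s of synthetic sea noise with a louder "contact" between 25 s and 35 s.
fs = 8000
rng = np.random.default_rng(0)
audio = rng.normal(0.0, 1.0, 60 * fs)
audio[25 * fs:35 * fs] += 0.8 * np.sin(2 * np.pi * 300 * np.arange(10 * fs) / fs)
print(flag_phases(audio, fs))   # ≈ [(25.0, 35.0)]
```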
But AI cannot do everything; the golden ears will always be decisive, Vincent Magnan insists. “The objective is that once we have seen a vessel, we will be able to recognize it again each time it enters our detection volume. With one nuance, which is very important and which makes applying artificial intelligence quite complex: the same vessel, heard in the Mediterranean in January and in the North Atlantic in December, will not make the same noise. The acoustic environment will have changed; perhaps the bearings of its shaft line will have worn or corroded; perhaps concretions on its hull will have modified its cavitation. So the radiated noise will not be quite the same. That is why, even if artificial intelligence can broadly detect a vessel’s main characteristics, it still takes human know-how to look for the discordant elements relative to a previous interception, for example.”
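To see why this signature drift complicates machine re-identification, consider a toy sketch (purely hypothetical feature vectors, not real acoustic signatures): the same vessel months later yields a similar but not identical vector, so the machine can rank candidate matches, while the final call on the discordant elements stays with the analyst.

```python
# Toy illustration of the re-identification problem Magnan describes: the
# "same" vessel yields a drifted feature vector months later, so the machine
# can only propose likely matches for a human analyst to confirm or reject.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
signature_jan = rng.normal(size=32)       # Mediterranean, January
drift = rng.normal(scale=0.4, size=32)    # worn bearings, hull concretions,
signature_dec = signature_jan + drift     # a different acoustic environment

other_vessel = rng.normal(size=32)

print(f"same vessel, drifted: {cosine(signature_jan, signature_dec):.2f}")
print(f"different vessel:     {cosine(signature_jan, other_vessel):.2f}")
# The drifted score is high but not 1.0: above a triage threshold the
# machine proposes the match, and the golden ear makes the final call.
```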
Golden ears are rare: there are no more than thirty of these analysts in the Navy. AI will allow them to concentrate on the contacts of interest, while the machine filters out the noise of shrimp and sperm whales.