Neural networks can be silently infected with malware without degrading their performance, and no current security solution detects the hidden payload. Disturbing.
Artificial intelligence is capable of feats. It can recognize objects in photos, generate text that looks as if it were written by a human, and is becoming ever more effective at speech recognition. But according to researchers at the University of California, San Diego and the University of Illinois, the neural networks behind AI could also be used to hide formidable malware that slips through the cracks of security solutions. By their very nature, these networks are designed to ingest enormous amounts of data to consolidate their learning, and they can just as easily take in malicious code.
This code is hidden inside seemingly harmless data using steganography, the art of concealing one message inside another. To prove their point, the researchers built what they call EvilModel, embedding 26.8 MB of malicious code into a convolutional neural network. They chose AlexNet, a 178 MB model specialized in image recognition. Fragmented across the network's parameters, the malicious code barely disrupted it: according to their measurements, the loss of accuracy was limited to 1%.
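The article does not detail the embedding scheme, but a plausible illustration, consistent with the steganographic approach it describes, is to overwrite the low-order bytes of each 32-bit floating-point weight with fragments of the payload. The function name, the three-bytes-per-weight choice, and the little-endian layout below are assumptions made for the sake of the sketch, not the researchers' exact procedure.

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes, bytes_per_weight: int = 3) -> np.ndarray:
    """Toy sketch: hide `payload` inside the low-order bytes of float32 weights.

    Each float32 parameter occupies 4 bytes. Overwriting its 3 low-order bytes
    (little-endian layout assumed) leaves the sign and most of the exponent
    intact, and an over-parameterized network tolerates the perturbation with
    little loss of accuracy.
    """
    flat = weights.astype(np.float32).ravel().copy()
    raw = flat.view(np.uint8).reshape(-1, 4)        # one row of 4 bytes per weight
    capacity = raw.shape[0] * bytes_per_weight
    if len(payload) > capacity:
        raise ValueError("payload larger than the tensor's hiding capacity")
    for i, byte in enumerate(payload):
        w, slot = divmod(i, bytes_per_weight)       # weight index, byte slot 0..2
        raw[w, slot] = byte                         # overwrite a low-order byte
    return raw.view(np.float32).reshape(weights.shape)
```

Because large models contain tens of millions of parameters, even a few bytes hidden per weight adds up to tens of megabytes of capacity, which is how a 26.8 MB payload can disappear into a 178 MB model.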
This means the user has no reason to suspect a problem, since the neural network keeps performing its usual tasks. Above all, none of the antivirus engines tested was able to detect the presence of these 26.8 MB of malware. The researchers then increased the payload to 36.9 MB. Trained with this larger embedded payload, the model's accuracy declined by only 10%, which is still small enough to be misleading. They also tested EvilModel on other networks, including VGG, ResNet, Inception, and MobileNet, with similar results.
A threat hidden in artificial intelligence
The contamination remains harmless as long as the user does not run this AI alongside an application that the attacker has also previously infected. It is that application which extracts and activates the payload, launching, for example, ransomware on the victim's device. The concern is that AI is increasingly embedded in mainstream applications. These models could become a preferred playground for hackers, since the malicious code does not alter the functions of the artificial intelligence and therefore goes undetected.
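Continuing the earlier sketch, under the same assumptions, the accomplice application would simply reverse the embedding: walk the same weights, collect the hidden bytes, and only then reassemble and run the payload. Until that point, the model file on disk looks like any other.

```python
import numpy as np

def extract_payload(weights: np.ndarray, length: int, bytes_per_weight: int = 3) -> bytes:
    """Read `length` hidden bytes back out of a model's float32 weights
    (mirror of the toy embed_payload sketch above)."""
    raw = np.ascontiguousarray(weights, dtype=np.float32).ravel().view(np.uint8).reshape(-1, 4)
    out = bytearray()
    for i in range(length):
        w, slot = divmod(i, bytes_per_weight)        # same walk as the embedding side
        out.append(int(raw[w, slot]))
    return bytes(out)
```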
To inoculate malware into an AI, the attacker could offer corrupted but fully functional pretrained models by hosting them on platforms such as GitHub or TorchHub. More insidiously, since data sent back by users' applications is also used to train the AI, the hacker could contaminate that data through infected updates of those applications. As these stowaways are undetectable inside the model itself, the only way to prevent their activation is to catch the dormant malware on the user's device, which is far from guaranteed.
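For context, fetching a third-party pretrained model takes a single call in mainstream frameworks, which is what makes this distribution channel realistic. The snippet below uses the official torchvision hub purely to show the mechanism; it is not an infected source.

```python
import torch

# torch.hub downloads code (hubconf.py) and weights from a third-party repository.
# A model published by an attacker could carry a steganographic payload in its
# weights while behaving normally on its stated task.
model = torch.hub.load('pytorch/vision:v0.10.0', 'alexnet', pretrained=True)
model.eval()
```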