The 20 Most Dangerous Artificial Intelligence Threats

Artificial intelligence is a fantastic tool when it serves health, technology, or astrophysics. But in the wrong hands, it can also be used for criminal purposes or disinformation. And the worst threats are not always the ones you would expect.

Hacking self-driving cars or military drones, targeted phishing attacks, fabricated fake news, manipulation of financial markets… “The expansion of the capabilities of AI-based technologies is accompanied by an increase in their potential for criminal exploitation,” warns Lewis Griffin, a computer science researcher at University College London (UCL). With his colleagues, he compiled a list of 20 illegal activities perpetrated with AI and ranked them by potential harm, criminal gain or profit, ease of implementation, and difficulty of detection and prevention.

The most frightening crimes, such as “robots” breaking into your apartment, are not necessarily the most dangerous, since they can be easily thwarted and affect only a few people at a time. Conversely, fake news generated by “bots” can ruin the reputation of a public figure or be used for blackmail. Difficult to combat, these “deepfakes” can cause considerable economic and social harm.

Artificial intelligence: serious threats

  • Fake videos : impersonate a person by making them appear to say or do things they never said or did, in order to gain access to secure data, manipulate opinion, or damage someone’s reputation… These fake videos are almost undetectable.
  • Self-driving car hacks : take control of an autonomous vehicle to use it as a weapon (e.g. perpetrate a terrorist attack, cause an accident, etc.).
  • Tailored phishing : generate personalized, automated messages to increase the effectiveness of phishing aimed at collecting secure information or installing malware.
  • Hacking AI-controlled systems : disrupt infrastructure by causing, for example, a widespread power outage, traffic congestion, or a breakdown of food logistics.
  • Large-scale blackmail : collect personal data in order to send automated threat messages. AI could also be used to generate false evidence (e.g. “sextortion”).
  • Fake news written by AI : write propaganda articles that appear to come from a reliable source. AI could also be used to generate many versions of a particular piece of content to increase its visibility and credibility.

Artificial intelligence: medium-severity threats

  • Military robots : take control of robots or weapons for criminal purposes. A potentially very dangerous threat, but difficult to carry out, since military equipment is generally well protected.
  • Fraud : sell fraudulent services using AI. There are many notable historical examples of crooks who have successfully sold expensive fake technology to large organizations, including national governments and the military.
  • Data corruption : deliberately modify or introduce false data to induce specific biases. For example, making a detector blind to weapons, or steering an algorithm toward investing in a particular market.
  • Learning-based cyberattacks : carry out attacks that are both targeted and massive, for example by using AI to probe systems for weaknesses before launching several simultaneous attacks.
  • Autonomous attack drones : hijack autonomous drones or use them to attack a target. These drones could be particularly threatening if they act en masse in self-organized swarms.
  • Denial of access : damage or deny users access to a financial service, employment, a public service, or a social activity. Not profitable in itself, this technique can be used for blackmail.
  • Facial recognition : hijack facial recognition systems, for example by producing false identity photos (access to a smartphone, surveillance cameras, passenger checks, etc.).
  • Manipulation of financial markets : corrupt trading algorithms in order to harm competitors, artificially lower or raise a value, cause a financial crash…

Artificial intelligence can be used to corrupt data, for example to erase evidence in criminal investigations. © andranik123, Adobe Stock

Artificial intelligence: low-intensity threats

  • Exploitation of bias : take advantage of the existing biases of algorithms, for example using YouTube recommendations to channel viewers, or Google rankings to raise the profile of products or disparage competitors.
  • Burglar bots : use small autonomous robots that slip through mailboxes or windows to retrieve keys or open doors. The damage is potentially low, because it is very localized and small-scale.
  • AI detection blocking : thwart AI-based sorting and data collection in order to erase evidence or conceal criminal material (pornography, for example).
  • Fake reviews written by AI : generate fake reviews on sites such as Amazon or Tripadvisor to harm or promote a product.
  • AI-assisted stalking : use learning systems to track an individual’s location and activity.
  • Counterfeiting : create fake content, such as paintings or music, that can be sold under false authorship. The potential for harm remains fairly low, since well-known paintings and musical works are few in number.
