Cybercriminals are getting into AI too! They have developed WormGPT, the evil counterpart of ChatGPT, capable of assisting them in their illegal activities, in particular by generating highly convincing phishing campaigns.
The development of generative AI has been a boon for cybercriminals. On the Dark Web, they exchange tips and tricks for circumventing the safeguards of these AI systems and making them generate malware and phishing campaigns. They have even gone a step further by creating their own custom models, similar to ChatGPT but easier to use for malicious purposes. While investigating an underground hacking forum, computer security researchers from SlashNext discovered the appearance of a new AI, WormGPT. Its creator describes it as "the biggest enemy of the famous ChatGPT, which allows you to do all sorts of illegal things".
Cybercriminals use it to create particularly sophisticated and hard-to-detect phishing campaigns. WormGPT makes it possible "to automate the creation of very convincing fake emails, personalized according to the recipient, thus increasing the chances of success of the attack", says cybersecurity researcher Daniel Kelley. "The use of generative AI democratizes the execution of sophisticated BEC attacks. Even attackers with limited skills can use this technology, making it an accessible tool for more cybercriminals." The new AI is already being sold on the black market. A worrying prospect!
WormGPT: an AI to create phishing emails
WormGPT is based on GPT-J, an open-source language model developed by EleutherAI in 2021 that builds on GPT-2. Cybercriminals took this model and trained it on "malware-related data". Thanks to all this information, the AI has specialized in illicit online activities. It offers unlimited character support, remembers chat exchanges, and has code-formatting capabilities. Its creator explains that it specializes in business email compromise (BEC) attacks. Simply put, these involve creating highly persuasive, personalized email campaigns designed to manipulate employees of commercial, government, or nonprofit organizations into disclosing sensitive company data or sending money.
The researchers were able to test WormGPT. It turns out that it can generate excellent-quality emails, almost indistinguishable from genuine communications. It personalizes messages based on the recipient, increasing the attack's chances of success. The emails are nearly flawless in syntax, grammar, and so on, yet it is precisely errors of this kind that usually give a phishing email away at a glance. In their example, the researchers asked the AI to write an email posing as a company's CEO and pressuring an account manager into paying a fraudulent invoice. They were not disappointed! Within seconds, the AI generated "an email that was not only remarkably persuasive, but also strategically cunning". Once the phishing emails are generated, cybercriminals simply send them out in droves, hoping someone takes the bait. This type of attack is particularly dangerous because it uses AI to bypass traditional security measures, such as spam filters, leaving Internet users more vulnerable.
Malicious AI: safeguards that are easy to circumvent
But WormGPT is not the only AI that can be used for malicious purposes. In its early days, ChatGPT was also hijacked to create malware (see our article). As for Bard, it is likewise possible to ask it to generate phishing campaigns or code ransomware scripts, as revealed by a study conducted by Check Point Research (CPR). We can also mention FreedomGPT, the uncensored AI. In the name of freedom of expression, it can be asked how to make a homemade bomb, clean up a crime scene after a murder, manufacture hard drugs, or even kidnap a child…
On forums, hackers share series of prompts capable of circumventing the restrictions of ChatGPT, Google Bard, and other generative AIs. They explain how to carry out a prompt-injection attack on chatbots, which consists of convincing the AI to bypass the restrictions put in place by its developers, for example by asking it to provide an example of a phishing email rather than to generate one outright. Europol, the European criminal police agency, pointed out in a report published in March that cybercriminals were already relying heavily on chatbots to write phishing emails, code malware, and manipulate Internet users. The authorities will have their work cut out for them!