ChatGPT has become a coveted object for hackers, who use it to easily create malicious code and develop new attacks. The beginning of an era of AI-assisted hacking…

The whole Internet is having fun testing the possibilities offered by ChatGPT, OpenAI's conversational artificial intelligence, which can answer all kinds of questions in an exhaustive and natural way and solve many problems, including ones involving code and programming. Inevitably, it was not long before some sought to exploit this technology for malicious purposes. While some developers have used the tools OpenAI makes freely available to contribute to the AI's development, notably by building new utilities or an application to identify AI-generated text (and thus help fight cheating and other deceptions), others simply seek to scam users with fraudulent ChatGPT mobile apps, which are now proliferating on the Play Store and App Store. And that is not the only problem. According to a recent report by Check Point Research, the AI is enjoying a certain popularity with cybercriminals, especially Russian ones, who seek to take advantage of its coding knowledge, even if, let's remember, ChatGPT is not infallible and can make mistakes while coding.

ChatGPT hack: an AI at the service of cybercriminals

Check Point Research's teams have discovered numerous messages on specialized hacking forums aimed at determining the best way to use ChatGPT to hack other Internet users. In particular, the AI has been used to develop the most convincing malware and phishing emails possible. For example, one hacker bragged on a forum that he had used ChatGPT to "recreate malware strains": he zipped and shared data-stealing Android malware, while another produced a Python script capable of performing complex cryptographic operations, which is not harmful in itself but can be bundled into ransomware. Hackers are discussing and testing the artificial intelligence to see how it could help them, and running experiments such as building an illicit automated trading platform for the dark web. Suffice to say that ChatGPT risks allowing people with little skill to take the plunge into cybercrime, or helping seasoned hackers optimize their attacks…

Russian hackers are particularly interested in the chatbot. Faced with its geo-blocking in their region, they are looking for ways to take advantage of its services, and on underground hacking forums several of them share their methods for circumventing the restrictions. "It is not very difficult to bypass OpenAI's restrictions for certain countries in order to access ChatGPT," notes Sergey Shykevich, a manager at Check Point Research. Russian cybercriminals are thus seeking to get around the geofencing in order to integrate the AI into their malicious projects. "We believe that these hackers are most likely trying to implement and test ChatGPT in their daily criminal operations. Cybercriminals are increasingly interested in ChatGPT because its underlying AI technology can make a hacker more profitable," he explains.

ChatGPT: using AI to develop malware

And these are not isolated cases. Earlier this month, journalists from Cybernews discovered that cybercriminals could use the AI to get step-by-step instructions for hacking websites; and if they had the idea, they are certainly not alone. They asked it to solve an exercise found on a site for learning cybersecurity. The AI gave them five techniques to use as a starting point for breaching the defenses of a website consisting of a single button, from inspecting the HTML code to exploiting a Cross-Site Request Forgery (CSRF) vulnerability. They then just had to ask it the right questions with the right information, such as telling it what the source code displayed, to know what to do next.
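For context on the vulnerability class mentioned above: a CSRF attack tricks a logged-in user's browser into submitting a forged request, and the standard defense is a per-session token that the server embeds in each form and verifies on submission. The following is a minimal defensive sketch in Python; the function names and session identifiers are illustrative, not taken from the Cybernews test:

```python
import hashlib
import hmac
import secrets

def issue_csrf_token(session_id: str, secret_key: bytes) -> str:
    # Derive a per-session token; the server embeds this in each HTML form.
    return hmac.new(secret_key, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, secret_key: bytes, submitted: str) -> bool:
    # Recompute the expected token and compare in constant time,
    # which avoids leaking information through timing differences.
    expected = issue_csrf_token(session_id, secret_key)
    return hmac.compare_digest(expected, submitted)

# Hypothetical usage with a freshly generated server-side key.
key = secrets.token_bytes(32)
token = issue_csrf_token("session-123", key)
print(verify_csrf_token("session-123", key, token))    # True
print(verify_csrf_token("session-123", key, "forged"))  # False
```

A forged request fails verification because the attacker's page cannot read the victim's token, so it cannot include a value that matches the server's recomputation.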

Thus, ChatGPT told the researchers which parts of the code they should focus on and suggested sample code changes, and they were able to solve their problem in 45 minutes. Of course, before giving its answers the chatbot reminds users of the principles of ethical hacking, an essential service for companies whose purpose is to test the protection of sites or software in order to correct their vulnerabilities and thus prevent malicious hacks, and warns that "executing malicious commands on a server can cause serious damage." That is all well and good, but ChatGPT still provides the information.

The AI gave them quite a few ideas and keywords to research. According to Mantas Sasnauskas, the leader of the research team, ChatGPT is both a danger and an opportunity for cybersecurity: "Even though we tested ChatGPT as part of a relatively simple penetration test, it shows that it is possible to guide more people toward discovering vulnerabilities that could then be exploited by other individuals, which greatly expands the scope of threats."
