Guy persuades AI to create dangerous malware

A major debate is currently in full swing: do AI systems on the internet harm humanity, or do they make a valuable contribution to society? Users are testing the moral limits of the technology and keep finding new ways to get around its ethical safeguards.

We recently reported on MeinMMO how a YouTuber circumvented ChatGPT's rules to enrich himself.

In principle, AIs are supposed to operate with a kind of ethical conscience: if the program recognizes that the answer to a question could help someone commit a crime or act immorally, then the most prominent of them, ChatGPT, refuses to answer.

However, these safety precautions can be circumvented with cleverly worded questions. This has now been demonstrated by a security expert who used ChatGPT to create sophisticated malware for stealing data from another computer, without writing a single line of code himself.

If you want to learn more about the AI ChatGPT, check out the following article:

What is ChatGPT? Everything you need to know about OpenAI’s AI

Creating dangerous malware requires some prior knowledge

How did he do that? The expert in question is Aaron Mulgrew, who describes his approach in a blog post for the cybersecurity firm Forcepoint (via forcepoint.com).

Mulgrew understands the nature of such attacks and the general structure of malware, but he had never programmed anything like this himself.

In the post he explains how he circumvented ChatGPT's moral limits by having the AI program only individual parts of the malware. His experience helped him here: without knowledge of how such attacks are structured, a layperson would probably be lost.

Even on his first attempt, Mulgrew produced malware that many antivirus providers failed to recognize as harmful.

He wanted to go a step further, though, and build in measures so that his program would remain completely undetected. But ChatGPT again recognized the behavior as unethical and illegal.

Mulgrew simply turned the tables: instead of asking for the code to be obfuscated to hide evidence, he claimed he wanted to protect his intellectual property, hiding the code so that no one could steal it.

ChatGPT played along and wrote the corresponding code, with which the remaining antivirus providers in Mulgrew's test also failed to detect the malware once it was already on a computer.

He then asked ChatGPT for a suitable infiltration method to get the malware onto target computers. In the end, only three of the providers in his test flagged the file as malicious. Even that, Mulgrew suspects, was due to those programs rejecting certain file types across the board, not because the malware itself was recognized as such.

According to Mulgrew, who studies nation-state cyberattacks, a program like the one he cobbled together in a few hours would normally take "5 to 10 malware developers a couple of weeks" to build.

He even tested the program and was able to steal data from computers and transfer it to a specified Google Drive account.

The direction in which AI programs like ChatGPT are developing, and how people handle them, will be one of the most exciting topics of the coming years. EU member Italy has already blocked access to ChatGPT, officially for reasons of youth protection and data privacy.

What do you think of examples like this, which also show the negative side of such applications? How should we deal with them? Leave a comment on the topic.

Making money with ChatGPT is also a big topic: Guy earns almost €40,000 in one fell swoop with the text AI ChatGPT – how does something like that work?
