ChatGPT and Malware

OpenAI, an artificial intelligence company, developed a chatbot called ChatGPT (Generative Pre-trained Transformer) that was released in November 2022. Using it is simple: the user types a request for anything they want written, whether a short essay or a piece of code, and the model generates it. That ease of use has made ChatGPT widespread across the globe in businesses, schools, personal projects, and more. But alongside all of these legitimate uses, situations arose not long after its release in which it was put to work with negative intentions in mind.

Cybercrime, and the security work needed to counter it, predates the Internet and has only grown in scale. There are many tools people use to write code, whether for good or bad, but the newest one is ChatGPT, which can produce working code in seconds. While most people use it to work through a problem or to learn, some use it to create malicious software, as noted by Check Point Research (CPR) on January 6, 2023. People with little to no programming experience can now obtain programs that let them participate in criminal activity. On February 8, 2023, Vishwa Pandagle of The Cyber Express reported that OpenAI had put restrictions in place to prevent users from creating such programs, but people found workarounds that still let them get their hands on the programs without doing the hard work themselves.

CPR reported that on December 29, 2022, a thread titled "ChatGPT - Benefits of Malware" appeared on an underground hacking forum. A user named USSoD posted a script that was written with ChatGPT. The code could serve entirely innocent purposes, yet someone with more programming knowledge could effortlessly rewrite small parts of it to encrypt a machine's files without any user interaction, the basis of ransomware. In other words, ChatGPT had aided someone in producing illegal software, which raises real concern and could even expose OpenAI to legal action. CPR reported other examples of malicious code generated with ChatGPT. One was a Python script for post-exploitation theft that searched for many different kinds of files, compressed them, and sent them to the attacker's server. Another was a complete phishing email composed by the chatbot. In both cases, the AI produced a full phishing email and an information-stealing script without the user writing a single line of code. CPR also made it clear that these scripts could be used right away by anyone, even those with no programming knowledge at all.

ChatGPT is still new and improving day by day. There are many concerns with ChatGPT as a whole, including plagiarism, malware, ransomware, scams, and more. Despite these glaring issues, ChatGPT can still be used for many good things. Rather than generating malware for personal gain, you could use it to learn different ways to program something, or to study new topics across a wide variety of fields. There are countless uses for ChatGPT beyond cybercrime and other malicious acts; it can be applied to almost anything. Regardless, OpenAI most certainly needs to crack down on people getting past its restrictions on creating malware, both to limit the harm and to keep the company out of any sort of legal trouble.

Tags
ai, Chatbot, ChatGPT, Check Point Research, cybercrime, cybersecurity, encryption, malware, OpenAI, phishing, plagiarism, ransomware, scams, software
