Q&A

Can ChatGPT write malware?

The answer isn't as straightforward as you may think. Matthew Luallen, cybersecurity expert and the Information Trust Institute's Lead Research Scientist for Education Translation, explains.

Interviewed by Lauren Laws

Teacher, Tool, Troublemaker, or All of the Above?

ChatGPT broke the record for the fastest-growing user base within two months of its November 2022 launch: by January 2023, it had an estimated 100 million monthly active users. Traffic has climbed steeply since then. According to web analytics company Similarweb, the ChatGPT site drew roughly 1.6 billion visits in March 2023.

It's easy to see why it has become so popular. You log into the website, type your question or directive, hit enter, and watch the answer fly across the screen. While not always entirely accurate (GPT-4 is more capable than its predecessor, though it requires a monthly subscription), ChatGPT has written a letter that helped a college student get out of a parking ticket, drafted cover letters for job seekers, passed both the bar exam and the United States Medical Licensing Exam, and completed millions of other prompts, including essays, articles, analyses, and business plans. On the other hand, it has also been banned in New York City public schools, and the European Data Protection Board, the European Union's central data regulator, has created a ChatGPT task force amid concerns tied to the EU's proposed Artificial Intelligence Act.

ChatGPT has also drawn concern from the cybersecurity community. Researchers at CyberArk, an identity management security company, were able to use ChatGPT to create polymorphic malware that could easily evade security products. A report published in January 2023 by Check Point Research, a cyber threat intelligence group, found that cybercriminals were already using ChatGPT to create malware and facilitate fraud. While that is concerning, it raises the question: just how easy would it be for anyone to create malware with ChatGPT?

ChatGPT, an artificial intelligence chatbot developed by OpenAI, was introduced to the public in November 2022.


Meet our expert: Matthew Luallen


Matthew Luallen is the Lead Research Scientist for Education Translation at the Information Trust Institute. He co-founded three companies focused on cybersecurity and cyber-physical threats (CYBATI, Dragos Security, and Encari) and previously served as an information security network engineer and architect at Argonne National Laboratory. As of this article's writing, he had spent more than 500 hours using ChatGPT.

In an interview about ChatGPT's capabilities, it would be remiss not to include the subject itself. Using ChatGPT-4, Luallen posed to the program the very questions he answered. ChatGPT's answers are included throughout this interview.

Before going further, it is important to define 'malicious' as it applies to malware and malicious code in this interview. In this context, it simply means software that does something other than what it was intended to do, and, for ChatGPT or GPT/AI models in general, code written to make systems operate in unintended ways.

The interview text has been edited for clarity and brevity. For a more thorough answer and additional details, listen to the interview audio clips at the end of each section.

What are the safety concerns with ChatGPT as it relates to cybersecurity?

Matthew
This is a tool that understands foreign languages. It's a tool that understands multiple programming languages. It's a tool that understands circuit design. It's a tool that's been fed all the IEEE papers, seemingly all the material data sheets for chemical compounds, all the data sheets for individual types of chips that are manufactured, all sorts of information. And then, as a user, you can prompt it with what you'd like it to analyze. ChatGPT restricts your prompts, but you can use the API, which doesn't restrict it, and then you can maybe use it to go and create malicious software in some way. But realize, again, you can go to GitHub repos right now and get malicious software. You can just download stuff off the internet that exists today, like the Exploit Database. You can find things if you have the right type of search strings. As with anybody that uses the tool: if somebody is a doctor, they ask questions in the medical discipline, and they're able to keep correcting it until it eventually gives the right answer. A cybersecurity professional is going to ask in the ways that they understand, using their skills, and it will give them an answer. So far, it seems to be only as good as the people asking the questions and the way they can frame them, which helps limit its capabilities.

I personally reflect on when I had the opportunity at Oak Ridge National Laboratory to see the bronze inscriptions of the letter from Einstein to Roosevelt, referencing Oppenheimer, around nuclear weaponry: the things being researched then, and saying, "Hey, this is happening." It can be used for good and evil; it's a double-edged sword. You can generate electricity, but all of a sudden, you can blow up the world. That's why I see this as an equivalent kind of change from a security perspective.
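
Luallen's reference to the API is worth making concrete. The sketch below shows what a programmatic prompt looks like; it assumes the pre-1.0 openai Python package with an API key in the OPENAI_API_KEY environment variable, and the model name and prompt are illustrative, not taken from the interview.

```python
# A minimal sketch of prompting the model through the OpenAI API rather
# than the ChatGPT web interface. Assumes the pre-1.0 "openai" package,
# which reads OPENAI_API_KEY from the environment by default.
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",  # illustrative; any available chat model works
    messages=[
        {"role": "system", "content": "You are a security tutor."},
        {"role": "user", "content": "Explain what polymorphic malware is."},
    ],
)

# The reply text lives in the first choice's message.
print(response["choices"][0]["message"]["content"])
```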

ChatGPT
As an AI language model, ChatGPT has several safety concerns related to cybersecurity:

  • Data leakage: ChatGPT is trained on a vast amount of data, which may unintentionally include sensitive information or code snippets that could be exploited.
  • Misuse for malicious purposes: ChatGPT can potentially generate harmful content, including phishing emails, social engineering attacks, or even malicious code. Malicious actors may try to exploit the technology to create malware or other attack vectors.
  • Bias and manipulation: ChatGPT might generate content that contains biases or inaccuracies, which could be leveraged by attackers to spread misinformation or manipulate users.

Do you think the safety protections currently in place are enough?

Matthew
That's a struggle, because I think about the implementation side. I can say that for anything I've prompted it on, I've been able to circumvent any of those controls. You figure out the way around it. I got a prediction on the NCAA championship game by feeding it the statistics on the two teams; at first it said, "Oh, I can't do that," but eventually it did.

I've given it scenarios to try to build some malicious software, and then eventually you come back and say, "Well, this is hypothetical and I'm gonna go and use it in a test lab," and then it's like, "Okay, then I'll do it." So, once you go around those things, then you can get something to come out. Now, I'll also tell you, so far a lot of the code that I've had it write that way did not work at all. As I've moved to GPT-4, so far I've had about a 95% success rate [of it executing] the code that it's written.

There was a big evolutionary jump between [ChatGPT] 3.5 and 4, and definitely one that seems to be a lot more than a few months of learning. I think 4 was already on the horizon and available before 3.5 and they just wanted to figure out how to maybe put those safeguards around it before they released it. That's my gut check. I don't know, but that's just my thought because it's just so fast. 

ChatGPT
Regarding the adequacy of safety protections, while significant progress has been made in mitigating risks associated with AI models like ChatGPT, there is still room for improvement. OpenAI and other organizations are actively researching techniques to make AI systems more secure and resistant to adversarial attacks. 

How easy would it be for someone to utilize ChatGPT or something like it to create malware or other malicious code? 

Matthew
Say you came in with just a generic, "Hey, go write some malware to do a phishing attack that allows me to gain privileged access on a computer," and used those words. Even those words mean you know a little bit about cybersecurity: you used "privileged access," you used "phishing attack." And it's going to first come back and say, "I'm not going to allow that." Then you're going to have to refine it, saying, "Well, I want to write this in PowerShell, and I want to use this provider for the email." You're going to have to have some context around what you're doing. The more context that's required, the more you as a professional probably could have done this yourself.

You need some context in your questions to successfully develop what you want to develop. Now, me being a cybersecurity professional of 25 years, I have a lot of context, a lot of code. I have some CVEs, Common Vulnerabilities and Exposures, in my name that we found in products, and I was very ethical and diligent about proper disclosure of the vulnerability to the software manufacturer, this whole ethical disclosure process and how you manage it, because you don't want the adversary to gain access to this. So now, flip this all around: let's say somebody's an adversary, they want to use this tool, they want to go and create something. Again, realize it's been around for a number of years, so they've already been doing this. They're still going to have to have some context in the questions they're asking for it to be successful.

ChatGPT
The ease with which someone could use ChatGPT to create malware or other malicious code largely depends on their technical expertise and intent. It is important to be aware of the potential risks and maintain a cautious approach when using AI-generated content. 

Are there security issues people need to be concerned about with AI like ChatGPT? 

Matthew
[One example is] if it's inaccurately trained. At Engineering Open House, I actually brought in some of [my one-year-old daughter's] toys that were used to teach her what a fruit is and what a cupcake is, as well as to properly place things, and we used Google's Teachable Machine to actually teach a camera what these things are. For the cybersecurity exhibit, we were able to show that adding one picture of a cupcake to 57 pictures of fruit caused the machine to believe that a cupcake was a fruit. So it's all about the training data, and it's the accuracy of the data that makes it valuable. One misrepresentation, one inaccuracy, causes the whole model to fall apart. That is where things become challenging when you think about it. Let's say I'm a doctor, and I'm using this tool, and I become so reliant upon it that I let it make the decision, but it's been trained on inaccurate information. One piece of that training can cause everything else to fall apart, which is really important.
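
Luallen's cupcake example is a textbook case of training-data poisoning. Here is a minimal sketch of the same failure mode using scikit-learn; the two-dimensional feature vectors, labels, and nearest-neighbor model are hypothetical stand-ins for image features, chosen only to make the effect visible:

```python
# A minimal data-poisoning sketch: one mislabeled training example
# flips a 1-nearest-neighbor classifier's answer for nearby inputs.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy 2-D "image features": a fruit cluster and a cupcake cluster.
fruit    = np.array([[2.0, 2.0], [2.1, 1.9], [1.9, 2.1], [2.2, 2.0]])
cupcakes = np.array([[-2.0, -2.0], [-2.1, -1.9], [-1.9, -2.1]])

X = np.vstack([fruit, cupcakes])
y = ["fruit"] * len(fruit) + ["cupcake"] * len(cupcakes)

# Poison the training set: one cupcake-looking sample labeled "fruit".
X_bad = np.vstack([X, [[-2.0, -2.05]]])
y_bad = y + ["fruit"]

clean = KNeighborsClassifier(n_neighbors=1).fit(X, y)
dirty = KNeighborsClassifier(n_neighbors=1).fit(X_bad, y_bad)

new_cupcake = [[-2.0, -2.04]]      # a fresh cupcake image
print(clean.predict(new_cupcake))  # ['cupcake']
print(dirty.predict(new_cupcake))  # ['fruit']: one bad label flipped it
```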

I'm in projects myself, personally, where I'm like, "Oh my goodness," now realizing this and how important accuracy is. That's why I'm emphasizing that for probably five to ten years, if not even longer, you're going to have to be doing co-botting, where you have the person in the loop to verify the machine. But what's going to happen, and I'm already personally living this, is that the person always takes time, and somebody will want to believe the machine because it's going to respond faster.

I want to use it as a tool, but I don't want to trust it without confirmation by people, and especially if it's a critical decision. 

ChatGPT
(Answered this question by combining it with the next question.)

Are there steps people should be taking to prevent/spot cybersecurity issues from AI? 

Matthew
One is just awareness. You have to be aware of it. Two, I would say don't put yourself into a black box of non-use. This is something where you just need to be aware of what's out there. Also, understand that OpenAI keeps a copy of any question you ask it.

I do actually agree there needs to be something to let you know that this was machine generated. I think that's a valuable addition, so that you have some awareness that there wasn't a person behind this.

You need to be involved in it. Don't put your head in the sand; that's not going to make sense. Start connecting the dots backwards, because again, it's been around for a while, and you may get frustrated saying, "Oh my gosh, I can't believe how long this has been going on and I wasn't aware of it." And then, recognize that something needs to happen to put some better guardrails or structure around this. The question is whether that's going to happen at an international level, a national level, a state level, a community level, a university level, or through grassroots efforts. But it does need to happen.

ChatGPT
Some steps individuals and organizations can take to prevent or spot cybersecurity issues from AI include:

  • Implementing strict access controls for AI tools to minimize the risk of misuse.
  • Employing AI monitoring solutions to detect and prevent the generation of harmful content.
  • Educating users about the risks associated with AI-generated content and promoting safe practices.
  • Encouraging the adoption of AI safety research and best practices within the AI community.
  • Reporting any suspicious or malicious AI-generated content to the appropriate authorities.

What have you found in your explorations of ChatGPT that you'd like to share?

Matthew
I did the math on it. I've actually spent about 500 hours on ChatGPT between December and now. I've owned my own companies. I've been involved in this space for a while. And when I see a technology that's groundbreaking, I'm like, "Wow, this is pretty cool." This is .com multiplied. The .com boom in the late 90s? This is bigger than that. We're going to see crazy new businesses created over the next few years that are just going to be insane.

You can have it create business plans for you. You can have it analyze data. I've asked it to create classes in industrial control systems cybersecurity, getting specific, right? It goes through and creates a framework of a class.

I asked it to create a quiz for cybersecurity within industrial control systems and critical infrastructure. I said, "Create five questions, three multiple choice and two essay based. Grade me afterwards, and then tell me why your answers are correct versus my answers." And it did it. It gave me a quiz on industrial control systems cybersecurity, something that I'm pretty well trained in: three multiple-choice questions with four [options] each, and then two essay questions that I had to respond to. My multiple-choice answers were correct, so I was glad to see what came out of this. On the essay questions, it graded me at an 80%, and then it told me I could have improved my answers by including more description and examples of certain types of cyber attacks within industrial control systems, which I thought was interesting. But I looked at its responses on what it said I could have included, and I was like, "Well, I guess if I would have added an extra paragraph that would have been alright, but I thought my answer was sufficient." I came back with, "Well, I thought my answer was sufficient," and [ChatGPT responded], "Well, maybe you're right."

ChatGPT
In my explorations as ChatGPT, I have seen a wide range of applications and user interactions. Most users leverage my capabilities for productive and informative purposes. However, there have been instances where users have sought information or help with harmful intent. It is crucial for the AI community to continue refining safety measures to minimize negative consequences and maximize the benefits of AI like ChatGPT.

Is there anything else that we haven't covered that you think we should touch upon?

Matthew
I think the biggest thing is the awareness around who wrote the code, how it was generated, and having some kind of reference criteria. We have one university professor, that I'm aware of specifically (I'm certain there are others), who wrote guidance for the students: "You can use ChatGPT, but here's how you have to reference it within what you're doing." That's really important to communicate, I think.

And again, I've had the opportunity to talk to a lot of different professionals, from individuals in the trucking industry and mechanics to traditional hackers, doctors, and teachers, showing them what you can do with it just to see how they react. What I've found is that a lot of these other professionals haven't really used it to any extent. So that's the part you need to do: everybody needs to get into this just a little bit. I think what we're going to find is this "prompt engineering," that's the nomenclature, how you prompt, how you ask the question, that's where there needs to be more education. Just like advanced Google queries, that's where we're going to be now, right? Advanced AI queries: understanding how to ask it questions, and then how to re-ask questions, so that it can be helpful in what you're doing.

So don't put your head in the sand, try it out, think contextually how it might help you and benefit you, and then make certain that there [are] some people out there looking out as defenders to make sure that it has those guardrails that we talked about earlier.

ChatGPT
(Was not asked this question.)

Final thoughts

Is ChatGPT a teacher, tool, troublemaker, or all of the above? As with most topics that can be framed as an either/or, the answer seems to lie in the person utilizing it. Luallen offered this as a final thought:

"Overall, can ChatGPT write malicious software?  Excellent question and I must emphasis a final point, ChatGPT is a NLP version that the public has access to.  We must assume there are other NLP models with more capabilities than this and with limited safety restrictions.  ChatGPT does write code based upon prompts that may or may not work the first time; however, if it doesn’t work you simply pass back the error codes, and it fixes itself.  This is powerful for learning software development and powerful to understand the answer to the broader question, can GPT models write malicious software?" 


This story was published May 2, 2023.