How AI Platforms May Be Used To Craft Malicious Code
Cybersecurity is a significant concern for all industries, including those increasingly leveraging AI. While AI has taken the burden of mundane tasks off employees' shoulders and reduced human error, researchers have found that cybercriminals are using AI to attack companies and to write malicious code.
With the growing popularity of chatbots like ChatGPT, some people are experimenting to see if these AI tools can write sophisticated malicious code.
The Rise of AI Code by ChatGPT
AI platforms like ChatGPT, Jasper, and DALL·E have generated enormous interest for their ability to solve complex problems and produce original content. The public beta launch of ChatGPT impressed many because it could imitate various kinds of writing: crafting poetry, drafting resumes, completing assignments, or generating unique paragraphs on different topics in seconds.
While many people use AI tools to drive innovation, others use them to create malicious code. In some cases, ChatGPT has been used to write code that exploits vulnerabilities in software and applications, and it produces that code quickly, with little human intervention.
The Use of AI Platforms Is Getting More Alarming
These tools allow cybercriminals with limited technical skills to generate malicious code. Many examples have recently been found on underground hacking forums, showing that attackers with little coding knowledge can use content-generating AI platforms to produce code that steals sensitive data or attacks a system, or to have the tools draft phishing emails.
OpenAI has taken steps to prevent ChatGPT from being used for malicious purposes, but people keep finding creative ways to bypass these measures. It is essential to be aware of these threats and take the necessary steps to secure networks against malicious AI code.
Tricking the AI ChatBot
Is tricking AI platforms into serving malicious intent easy? Although ChatGPT has content moderation measures, cybercriminals can trick it into producing code that works as malware. Some ChatGPT users have found ways to dupe the system into giving them restricted information, for example by telling ChatGPT that its guidelines and content-generating filters had been deactivated. Other users tricked the chatbot by asking it to finish a conversation between friends about a banned subject.
Hadis Karimipour, an associate professor and Canada Research Chair in secure and resilient cyber-physical systems at the University of Calgary, said OpenAI's team refined those measures over the past six weeks. She added, "At the beginning, it might have been a lot easier for you not to be an expert or have no knowledge [of coding] to be able to develop an AI code that can be used for malicious purposes. But now, it's a lot more difficult. It's not like everyone can use ChatGPT and become a hacker."
AI Hackers: A History
Anthropic's announcement is perhaps the most high-profile example of a company claiming bad actors are using AI tools to carry out automated hacks. It is the kind of danger many have worried about, and other AI companies have also claimed that nation-state hackers have used their products.
In February 2024, OpenAI published a blog post in collaboration with cyber experts from Microsoft saying it had disrupted five state-affiliated actors, including some from China.
"These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks," the firm said at the time. Anthropic has not said how it concluded the hackers in this campaign were linked to the Chinese government.
Opportunities: AI Platform Misuse
AI platforms like ChatGPT, Copy.AI, or Jasper can be enabling tools for cybercriminals. Aleksander Essex, an associate professor of software engineering who runs Western University's information security and privacy research laboratory in London, Ontario, said that malicious AI code from ChatGPT was unlikely to be useful for high-level attacks.
Cybercriminals can put AI platforms to low-level uses such as:
Crafting convincing phishing emails targeting an organization or individual. The platform can generate such emails in seconds, and an attacker can send them with little modification.
Generating small scripts that steal and encrypt files on standalone computers. Because encryption programs are not illegal in themselves, platforms like ChatGPT are less likely to filter such requests.
Helping attackers understand the vulnerabilities of a system based on various parameters.
As AI-generated code becomes more prevalent in development workflows, organizations need comprehensive visibility into their software supply chain to detect potentially malicious or vulnerable code. Preventative measures include:
Partnering with organizations such as Legit Security, whose application security posture management helps teams identify and mitigate risks from AI-assisted development, ensuring code integrity across the development lifecycle
Investing in continuous penetration testing
Deploying EDR that analyzes behavior patterns rather than static indicators
Using application allowlists and strict script execution policies
Updating policies to restrict unauthorized local model usage
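The allowlist approach above can be illustrated with a minimal sketch: compute a cryptographic hash of a script and permit it to run only if that hash appears on an approved list. The `APPROVED_HASHES` set below is hypothetical; a real deployment would populate it from a managed policy store.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of SHA-256 digests of approved scripts.
# The entry below is the digest of an empty file, for illustration only.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_allowed(path: Path) -> bool:
    """Permit execution only if the script's digest is on the allowlist."""
    return sha256_of(path) in APPROVED_HASHES
```

In practice this enforcement is usually done at the operating-system level (for example with Windows AppLocker or script execution policies) rather than in application code, and the allowlist must be updated whenever an approved script legitimately changes.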
Conclusion
The capabilities of AI are breaking new ground, and even Microsoft is investing in OpenAI to push ChatGPT's application toward real-life problems. If proper filtering techniques are not implemented, content moderation measures will continue to be circumvented, allowing malicious AI code to be generated. Organizations should therefore implement measures like those above to secure their networks against malicious AI code.