
ChatGPT creates mutating malware that evades detection by EDR

A global sensation since its initial release at the end of last year, ChatGPT's popularity among consumers and IT professionals alike has stirred up cybersecurity nightmares about how it can be used to exploit system vulnerabilities. A key problem, cybersecurity experts have demonstrated, is the ability of ChatGPT and other large language models (LLMs) to generate polymorphic, or mutating, code to evade endpoint detection and response (EDR) systems.

A recent series of proof-of-concept attacks shows how a benign-seeming executable file can be crafted such that at each run, it makes an API call to ChatGPT. Rather than just reproducing examples of already-written code snippets, ChatGPT can be prompted to generate a dynamic, mutated version of malicious code at each call, making the resulting vulnerability exploits difficult for cybersecurity tools to detect.

“ChatGPT lowers the bar for hackers, malicious actors that use AI models can be considered the modern ‘Script Kiddies’,” said Mackenzie Jackson, developer advocate at cybersecurity company GitGuardian. “The malware ChatGPT can be tricked into producing is far from ground-breaking but as the models get better, consume more sample data and different products come onto the market, AI may end up creating malware that can only be detected by other AI systems for defense. What side will win at this game is anyone’s guess.”

There have been various proofs of concept showcasing the tool's capabilities for developing advanced, polymorphic malware.

Prompts bypass filters to create malicious code

ChatGPT and other LLMs have content filters that prohibit them from obeying commands, or prompts, to generate harmful content, such as malicious code. But content filters can be bypassed. 

Almost all the reported exploits that can potentially be achieved through ChatGPT rely on what is being called "prompt engineering": the practice of modifying input prompts to bypass the tool's content filters and retrieve a desired output. Early users found, for example, that they could get ChatGPT to create content it was not supposed to create — "jailbreaking" the program — by framing prompts as hypotheticals, for example asking it to respond not as an AI but as a malicious person intent on doing harm.

“ChatGPT has enacted a few restrictions on the system, such as filters which limit the scope of answers ChatGPT will provide by assessing the context of the question,” said Andrew Josephides, director of security research at KSOC, a cybersecurity company specializing in Kubernetes. “If you were to ask ChatGPT to write you a malicious code, it would deny the request. If you were to ask ChatGPT to write code which can do the effective function of the malicious code you intend to write, however ChatGPT is likely to build that code for you.”

With each update, ChatGPT gets harder to trick into being malicious, but as different models and products enter the market we cannot rely on content filters to prevent LLMs from being used for malicious purposes, Josephides said.

By tricking ChatGPT into drawing on knowledge that is walled off behind its filters, users can make it generate effective malicious code. The code can then be rendered polymorphic by leveraging the tool's tendency to modify and fine-tune its output when the same query is run multiple times.

For instance, an apparently harmless Python executable can send a query to the ChatGPT API that returns a different version of the malicious code each time the executable is run, with the returned code then executed dynamically via the exec() function. This technique can be used to form a mutating, polymorphic malware program that is difficult for threat scanners to detect.

By Shweta Sharma
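The loader pattern described above can be illustrated with a harmless, offline sketch. Note the hedging: `fetch_code_variant` is a hypothetical local stub standing in for the ChatGPT API call the article describes (no real API, endpoint, or prompt is shown here), and the generated "payload" is a benign sorting function rather than anything malicious. What the sketch demonstrates is only the mechanism: each run executes freshly received source text via exec(), so the code that actually runs differs from invocation to invocation.

```python
import random

# Hypothetical stand-in for the ChatGPT API call described in the article.
# In the proof-of-concept attacks, this would send a prompt to the API and
# receive newly generated Python source each time. Here it simply returns
# one of several pre-written, harmless variants so the mutation mechanic
# can be shown without any network access.
def fetch_code_variant(prompt: str) -> str:
    variants = [
        "def payload(xs):\n    return sorted(xs)",
        "def payload(xs):\n    out = list(xs)\n    out.sort()\n    return out",
        "def payload(xs):\n    return sorted(xs, key=lambda x: x)",
    ]
    return random.choice(variants)

def run_once(data):
    # Each run receives different source text, so a scanner that hashes or
    # signatures the executed code sees a new artifact every time, even
    # though the behavior is identical.
    source = fetch_code_variant("write a function that sorts a list")
    namespace = {}
    exec(source, namespace)  # dynamic, in-memory execution happens here
    return namespace["payload"](data)

print(run_once([3, 1, 2]))  # [1, 2, 3] regardless of which variant ran
```

The design point EDR vendors worry about is visible in `run_once`: because the executed source arrives at runtime and lives only in memory, there is no stable on-disk artifact for signature-based detection to match.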
