
A significant amount of media coverage followed the news that large language models (LLMs) intended for use by cybercriminals – including WormGPT and FraudGPT – were available for sale on underground forums. Many commentators expressed fears that such models would enable threat actors to create “mutating malware,” and that the models were part of a “frenzy” of related activity on underground forums.

The dual-use aspect of LLMs is undoubtedly a concern, and threat actors will certainly seek to leverage them for their own ends. Tools like WormGPT are an early indication of this (although the WormGPT developers have now shut the project down, ostensibly because they grew alarmed at the amount of media attention they received). What’s less clear is how threat actors more generally think about such tools, and what they’re actually using them for beyond a few publicly reported incidents.

Sophos X-Ops decided to investigate LLM-related discussions and opinions on a selection of criminal forums, to get a better understanding of the current state of play, and to explore what the threat actors themselves actually think about the opportunities – and risks – posed by LLMs. We trawled through four prominent forums and marketplaces, looking specifically at what threat actors are using LLMs for; their perceptions of them; and their thoughts about tools like WormGPT.

A brief summary of our findings:

We found multiple GPT-derivatives claiming to offer capabilities similar to WormGPT and FraudGPT – including EvilGPT, DarkGPT, PentesterGPT, and XXXGPT. However, we also noted skepticism about some of these, including allegations that they’re scams (not unheard of on criminal forums)
In general, there is a lot of skepticism about tools like ChatGPT – including arguments that they are overrated, overhyped, redundant, and unsuitable for generating malware
Threat actors also have cybercrime-specific concerns about LLM-generated code, including operational security worries and AV/EDR detection
A lot of posts focus on jailbreaks (which also appear with regularity on social media and legitimate blogs) and compromised ChatGPT accounts
Real-world applications remain aspirational for the most part, and are generally limited to social engineering attacks or tangential security-related tasks
We found only a few examples of threat actors using LLMs to generate malware and attack tools, and even then only in a proof-of-concept context
However, others are using LLMs effectively for other work, such as mundane coding tasks
Unsurprisingly, unskilled ‘script kiddies’ are interested in using GPTs to generate malware, but are – again unsurprisingly – often unable to bypass prompt restrictions, or to understand errors in the resulting code
Some threat actors are using LLMs to enhance the forums they frequent by creating chatbots and auto-responses – with varying levels of success – while others are using them to develop redundant or superfluous tools
We also noted examples of AI-related ‘thought leadership’ on the forums, suggesting that threat actors are wrestling with the same logistical, philosophical, and ethical questions as everyone else when it comes to this technology
While writing this article, which is based on our own independent research, we became aware that Trend Micro had recently published their own research on this topic. In some areas, our research confirms and validates their findings.

The forums
We focused on four forums for this research:

Exploit: a prominent Russian-language forum which prioritizes Access-as-a-Service (AaaS) listings, but also enables buying and selling of other illicit content (including malware, data leaks, infostealer logs, and credentials) and broader discussions about various cybercrime topics
XSS: a prominent Russian-language forum. Like Exploit, it’s well-established, and also hosts both a marketplace and wider discussions and initiatives
Breach Forums: now in its second iteration, this English-language forum replaced RaidForums after its seizure in 2022; the first version of Breach Forums was similarly shut down in 2023. Breach Forums specializes in data leaks, including databases, credentials, and personal data
Hackforums: a long-running English-language forum which has a reputation for being populated by script kiddies, although some of its users have previously been linked to high-profile malware and incidents
A caveat before we begin: the opinions discussed here cannot be considered representative of all threat actors’ attitudes and beliefs, and don’t come from qualitative surveys or interviews. Instead, this research should be considered an exploratory assessment of LLM-related discussions and content as they currently appear on the forums listed above.

Digging in
One of the first things we noticed is that AI is not exactly a hot topic on any of the forums we looked at. On two of the forums, there were fewer than 100 posts on the subject – but almost 1,000 posts about cryptocurrencies across a comparable period.
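
To make that comparison concrete, the sketch below (in Python) shows one way to tally forum posts by topic keyword over a fixed date window – the kind of rough count we’re describing. The posts, keyword lists, and data layout here are invented purely for illustration; this is not our actual dataset or collection methodology.

# Hypothetical sketch of a keyword-based topic tally over a date window.
# All post data and keyword lists below are invented for illustration.
from datetime import date

# Toy stand-in for scraped forum posts: (date posted, post text)
posts = [
    (date(2023, 7, 1), "Anyone tried WormGPT for phishing templates?"),
    (date(2023, 7, 3), "Selling BTC escrow service, low fees"),
    (date(2023, 7, 9), "ChatGPT jailbreak prompt that still works"),
    (date(2023, 7, 12), "Monero vs bitcoin for cashing out logs"),
]

TOPICS = {
    "llm": {"chatgpt", "wormgpt", "fraudgpt", "llm"},
    "crypto": {"btc", "bitcoin", "monero", "xmr", "crypto"},
}

def tally(posts, topics, start, end):
    """Count posts per topic whose text contains any topic keyword."""
    counts = {name: 0 for name in topics}
    for posted, text in posts:
        if not (start <= posted <= end):
            continue
        # Crude normalization; real scraping would need proper tokenization
        words = set(text.lower().replace("?", " ").replace(",", " ").split())
        for name, keywords in topics.items():
            if words & keywords:
                counts[name] += 1
    return counts

print(tally(posts, TOPICS, date(2023, 7, 1), date(2023, 7, 31)))
# -> {'llm': 2, 'crypto': 2}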

While we’d want to do further research before drawing any firm conclusions about this discrepancy, the numbers suggest that there hasn’t been an explosion in LLM-related discussions on the forums – at least not to the extent that there has been on, say, LinkedIn. That could be because many cybercriminals see generative AI as still being in its infancy (at least compared to cryptocurrencies, which have real-world relevance to them as an established and relatively mature technology). And, unlike some LinkedIn users, threat actors have little to gain from speculating about the implications of a nascent technology.

Of course, we only looked at the four forums mentioned above, and it’s entirely possible that more active discussions around LLMs are happening in other, less visible channels.