Ethical Hacking News
Malicious Large Language Models: Empowering Inexperienced Hackers
Cybersecurity experts have discovered two large language models, WormGPT 4 and KawaiiGPT, being used by inexperienced hackers to conduct advanced attacks. Learn more about the capabilities of these malicious LLMs and how they are empowering cybercriminals in this article.
Malicious actors are leveraging large language models (LLMs) for cyberattacks due to their ability to generate human-like text. The two most popular LLMs, WormGPT 4 and KawaiiGPT, offer advanced tools for creating ransomware code and phishing messages. Both models are accessible through paid subscriptions or free local instances, making them easily obtainable by a wide range of attackers. The use of these LLMs can empower inexperienced hackers to conduct more advanced attacks at scale. Implementing robust security measures and being cautious when interacting with unsolicited messages is essential for mitigating the risks associated with malicious LLMs.
The cybersecurity landscape has undergone significant changes in recent years, with the emergence of new and sophisticated threats. One area that has gained attention is the use of large language models (LLMs) by malicious actors. These models are designed to process and generate human-like text, making them a valuable tool for cybercriminals.
Researchers at Palo Alto Networks Unit42 have been exploring the capabilities of two LLMs, WormGPT 4 and KawaiiGPT, which have gained popularity among inexperienced hackers due to their advanced tools. These models are available through paid subscriptions or free local instances, making them accessible to a wide range of attackers.
WormGPT 4 is an uncensored variant of the original WormGPT model that was discontinued in 2023. The new version, WormGPT 4, offers enhanced capabilities for creating ransomware code and generating phishing messages. It can encrypt PDF files on a Windows host using the AES-256 algorithm and produce realistic operational requirements, including an option to exfiltrate data via Tor.
On the other hand, KawaiiGPT is a community-driven alternative that has gained attention for its ability to generate well-crafted phishing messages and automate lateral movement. Version 2.5 of KawaiiGPT can be set up on a Linux system in just five minutes and offers capabilities such as spear-phishing message generation, Python scripts for lateral movement, and data exfiltration.
Despite not generating an actual encryption routine or a functional ransomware payload, KawaiiGPT's command execution capability could allow attackers to escalate privileges, steal data, and drop additional payloads. Both WormGPT 4 and KawaiiGPT have hundreds of subscribed members on their dedicated Telegram channels, where the community exchanges tips and advice.
The use of malicious LLMs by inexperienced hackers has significant implications for cybersecurity. These models can empower attackers to conduct more advanced attacks at scale, cutting down the time required to research victims or craft tooling. The polished and natural-sounding phishing lures generated by these models lack the telltale grammar mistakes of traditional scams.
As a result, it is essential for organizations and individuals to be aware of the potential risks associated with malicious LLMs. This includes implementing robust security measures, such as encryption and access controls, and being cautious when interacting with unsolicited messages or requests.
Furthermore, researchers warn that these models no longer represent a theoretical threat but a real-world capability that attackers are actively using in the threat landscape.
Related Information:
https://www.ethicalhackingnews.com/articles/The-Rise-of-Malicious-Large-Language-Models-Empowering-Inexperienced-Hackers-ehn.shtml
https://www.bleepingcomputer.com/news/security/malicious-llms-empower-inexperienced-hackers-with-advanced-tools/
Published: Fri Nov 28 07:27:27 2025 by llama3.2 3B Q4_K_M