Ethical Hacking News
AWS Discloses Sophisticated AI-Assisted Cyber Attack on FortiGate Firewalls: A recent incident report from Amazon reveals a sophisticated cyber attack carried out by a Russian-speaking group using off-the-shelf generative AI tools to compromise over 600 internet-exposed FortiGate firewalls across 55 countries. The attack used machine learning algorithms and automation, allowing a relatively low-skilled outfit to run a large-scale campaign that would typically require more people or time.
AWS has recently disclosed a sophisticated cyber attack carried out by a Russian-speaking group using off-the-shelf generative AI tools to compromise over 600 internet-exposed FortiGate firewalls across 55 countries. The campaign, which lasted from mid-January to mid-February, relied heavily on the use of machine learning algorithms and automation, effectively allowing a relatively low-skilled outfit to run a large-scale attack that would typically require more people or time.
According to Amazon's security team, the attackers used multiple commercial AI tools to generate attack playbooks, scripts, and operational notes. This allowed them to automate their workflow, generating custom tooling with the aid of AI-assisted development. The tools were found to be functional but rough around the edges, with simplistic parsing logic and redundant comments that suggest a machine wrote the first draft.
The attackers scanned for exposed FortiGate management interfaces, tried commonly reused or weak credentials, and then hoovered up configuration files once inside, giving them a roadmap of victim networks. They also pulled administrator and VPN credentials, network topology details, and firewall rules from compromised firewalls. From there, they moved deeper into environments, going after Active Directory, dumping credentials, and probing for ways to move laterally.
Backup systems, including Veeam servers, were also on the shopping list. The attackers tended to abandon targets that put up too much resistance and move on to softer ones, reinforcing the idea that volume rather than finesse was the winning strategy. Geographically, the activity was opportunistic rather than tightly targeted, with victims spread across multiple regions.
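The first stage of the campaign described above amounts to checking which management interfaces answer from the open internet. Defenders can run the same check against their own address space. The sketch below is a minimal, illustrative exposure check, not Amazon's tooling or the attackers' scanner; the host `192.0.2.10` is an RFC 5737 documentation address standing in for an organization's own public IPs.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, unreachable, or timed out: treat as not exposed.
        return False

if __name__ == "__main__":
    # FortiGate HTTPS management commonly listens on 443 (sometimes 8443 or
    # 10443). An inventory of your own public addresses would go here; the
    # address below is a placeholder from the documentation range.
    for host in ["192.0.2.10"]:
        for port in (443, 8443, 10443):
            if is_port_open(host, port):
                print(f"WARNING: {host}:{port} answers from the internet - "
                      f"management interface may be exposed")
```

Any port that answers here deserves scrutiny: in this campaign, a reachable management interface plus a weak password was the entire initial access path.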
The findings of this incident report underscore the importance of basic hygiene measures such as keeping management interfaces off the public internet, enforcing multi-factor authentication, and not recycling passwords. These measures would likely have shut down much of the activity before it got going.
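On FortiGate devices specifically, those hygiene measures translate into a few CLI settings. The fragment below is an illustrative sketch, not configuration from the incident report: the interface name `wan1`, the admin account name, and the trusted subnet are placeholders, and exact syntax varies by FortiOS version.

```
# Remove management access (HTTPS/SSH) from the internet-facing interface
config system interface
    edit "wan1"
        unset allowaccess
    next
end

# Restrict admin logins to a trusted subnet and require a second factor
config system admin
    edit "admin"
        set trusthost1 203.0.113.0 255.255.255.0
        set two-factor fortitoken
    next
end
```

Combined with unique, non-reused passwords, settings along these lines would have removed both the exposed interface and the weak-credential foothold this campaign depended on.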
This cyber attack highlights the growing threat posed by AI-generated tooling and the need for organizations to stay vigilant in protecting their networks. Google has recently warned that criminals are increasingly wiring generative AI, including Google's own Gemini chatbot, directly into their operations for tasks ranging from reconnaissance and target profiling to phishing and malware development.
The use of AI tools in cyber attacks is becoming increasingly sophisticated, making it essential for organizations to keep their security measures and practices current. The incident also illustrates the security implications of off-the-shelf tools and generative AI being turned to malicious ends: as these technologies become more widely adopted, organizations need to understand how readily they can be repurposed by attackers.
In conclusion, this incident report highlights the growing threat posed by AI-generated tooling in cyber attacks, and reiterates that basic hygiene, keeping management interfaces off the public internet, enforcing multi-factor authentication, and not recycling passwords, would have blunted much of this campaign. By taking these proactive steps, organizations can sharply reduce their exposure to attackers armed with AI-generated tools.
Related Information:
https://www.ethicalhackingnews.com/articles/AWS-Discloses-Sophisticated-AI-Assisted-Cyber-Attack-on-FortiGate-Firewalls-ehn.shtml
https://go.theregister.com/feed/www.theregister.com/2026/02/23/aws_fortigate_firewalls/
https://aws.amazon.com/blogs/security/ai-augmented-threat-actor-accesses-fortigate-devices-at-scale/
https://www.msn.com/en-us/news/technology/aws-says-more-than-600-fortigate-firewalls-hit-in-ai-augmented-campaign/ar-AA1WTDGA
Published: Mon Feb 23 06:57:30 2026 by llama3.2 3B Q4_K_M