Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

AI-Powered Attack Campaign Compromises 600 FortiGate Devices Worldwide


A recent cyber attack campaign by a Russian-speaking attacker compromised over 600 FortiGate devices across 55 countries, leveraging commercial generative AI services to automate and scale the attack. The incident highlights the limitations of AI-generated code used without significant refinement and underscores the importance of robust security measures in protecting against such threats.

  • Amazon Threat Intelligence reported a significant cyber attack campaign targeting over 600 FortiGate devices across 55 countries between January 11 and February 18, 2026.
  • The attacker used commercial generative AI tools to automate and scale the attack, exploiting exposed management ports and weak single-factor credentials rather than any FortiGate vulnerability.
  • Analysis of the recovered source code revealed clear indicators of AI-assisted development, including redundant comments and simplistic architecture.
  • The custom reconnaissance tooling lacked robustness and failed under edge cases, traits typical of AI-generated code used without significant refinement.
  • The campaign shows how AI lowers the barrier to large-scale attacks, and how basic security hygiene still blunts them.


  • Amazon Threat Intelligence has recently reported a significant cyber attack campaign that targeted over 600 FortiGate devices across 55 countries. The attack, which took place between January 11 and February 18, 2026, was carried out by a Russian-speaking cybercriminal who used commercial generative AI tools to automate and scale the attack.

    The attacker did not exploit any FortiGate vulnerabilities but instead abused exposed management ports and weak single-factor credentials to gain access to the devices. The campaign was notable for its use of multiple commercial generative AI services, which let an attacker with limited technical skills scale familiar attack techniques.

    During routine monitoring, Amazon experts uncovered infrastructure hosting the attacker's tools, along with AI-generated attack plans, victim configurations, and custom code. This provided rare insight into an AI-driven workflow, where the attacker used multiple commercial GenAI tools to automate and scale attack techniques.

    The actor scanned the Internet for exposed FortiGate management ports, abused weak credentials, and stole full configurations containing VPN, admin, and network data. Following VPN access to victim networks, the threat actor deployed a custom reconnaissance tool with different versions written in both Go and Python.
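    The defensive counterpart to this first step is simple: audit whether your own management interfaces answer from untrusted networks. A minimal sketch of such a check is below; the host and port values are illustrative placeholders, not details from the campaign.

```python
import socket

def port_exposed(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A successful connection to a FortiGate management port from an
    untrusted network is a sign the interface should be restricted
    to trusted hosts or moved off the public Internet entirely.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Illustrative usage: probe your own device from outside the perimeter.
# if port_exposed("203.0.113.10", 443):
#     print("management port reachable from the Internet")
```

    Attackers run exactly this kind of check at Internet scale; running it against your own address space first closes the easiest door.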

    Analysis of the source code revealed clear indicators of AI-assisted development, including redundant comments that merely restate function names, simplistic architecture with disproportionate investment in formatting over functionality, naive JSON parsing via string matching rather than proper deserialization, and compatibility shims for language built-ins with empty documentation stubs.

    While functional for the threat actor's specific use case, the tooling lacked robustness and failed under edge cases, characteristics typical of AI-generated code used without significant refinement.
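    The "naive JSON parsing via string matching" pattern is worth illustrating, because it shows exactly why such tooling fails under edge cases. The snippet below is a hypothetical reconstruction of the pattern described, not the actor's code: the string-matching extractor works on the happy path but truncates at an escaped quote that a real parser handles correctly.

```python
import json

def naive_extract(raw: str, key: str) -> str:
    """String-matching 'parser' of the kind flagged in the analysis:
    find the key, then grab the text between the next two quotes."""
    marker = f'"{key}":'
    start = raw.index(marker) + len(marker)
    first = raw.index('"', start) + 1
    return raw[first:raw.index('"', first)]

doc = '{"hostname": "fw-edge\\"01", "role": "gateway"}'

# The naive version truncates at the escaped quote...
print(naive_extract(doc, "hostname"))   # fw-edge\
# ...while proper deserialization decodes it correctly.
print(json.loads(doc)["hostname"])      # fw-edge"01
```

    The fragility is invisible on clean test input, which is why such code can look "functional" for one specific use case and still break in the field.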

    The custom reconnaissance tools automated network mapping and vulnerability scanning but lacked depth, often failing against patched or hardened systems. The attacker relied on multiple commercial LLMs for planning and code generation, creating a large toolkit that mimicked a full team's output.

    Once inside, the attacker used common open-source tools to escalate access. They compromised Active Directory, extracting NTLM hashes and, in some cases, entire credential databases—sometimes helped by weak or reused admin passwords. After gaining domain control, they moved laterally using pass-the-hash and NTLM relay attacks, and targeted Veeam backup servers to steal credentials and weaken recovery options.
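    Password reuse of the kind that helped the attacker is easy to audit from the defender's side: reused passwords show up as identical hashes across accounts in a domain's own credential dump. A minimal sketch, using made-up account names and hash values:

```python
from collections import defaultdict

def reused_hashes(accounts: dict[str, str]) -> dict[str, list[str]]:
    """Group account names by identical password hash; any group with
    more than one member indicates password reuse."""
    by_hash = defaultdict(list)
    for name, h in accounts.items():
        by_hash[h.lower()].append(name)
    return {h: names for h, names in by_hash.items() if len(names) > 1}

# Illustrative hash values, not real credentials.
dump = {
    "admin":  "aa3b435b51404eeaad3b435b51404ee1",
    "svc-db": "aa3b435b51404eeaad3b435b51404ee1",
    "ops":    "c4ca4238a0b923820dcc509a6f75849b",
}
print(reused_hashes(dump))  # {'aa3b435b51404eeaad3b435b51404ee1': ['admin', 'svc-db']}
```

    Any shared admin hash is one pass-the-hash attack away from domain-wide compromise, which is why credential hygiene features so prominently in the remediation guidance below.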

    However, when systems were patched or hardened, the more advanced exploitation attempts largely failed, a limitation consistent with AI-generated code used without significant refinement, and a reminder that basic hardening remains effective against such threats.

    The incident serves as a reminder that while AI can be leveraged for malicious purposes, defenders can use the same tools to strengthen their security posture. By understanding how attackers are using AI tools, organizations can develop strategies to counter these threats and protect their networks against AI-powered attacks.

    In light of this incident, Amazon Threat Intelligence is urging organizations to strengthen their patching, credential hygiene, segmentation, and detection capabilities to mitigate the risks associated with AI-driven attacks. The campaign also serves as a wake-up call for the cybersecurity community to be more vigilant in monitoring and detecting such threats.
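    One concrete piece of the credential-hygiene advice, flagging admin accounts without multi-factor authentication, can be checked directly against a device configuration export. A minimal sketch, assuming a FortiOS-style config where admin entries carry a `set two-factor` line when MFA is enabled (the format below is an illustrative approximation of the syntax, not an exact grammar):

```python
import re

def single_factor_admins(config_text: str) -> list:
    """Flag admin accounts with no 'set two-factor' line inside a
    FortiOS-style 'config system admin' block (illustrative parser)."""
    flagged = []
    block = re.search(r"config system admin(.*?)\nend", config_text, re.S)
    if not block:
        return flagged
    for name, body in re.findall(r'edit "([^"]+)"(.*?)\n\s*next',
                                 block.group(1), re.S):
        if "set two-factor" not in body:
            flagged.append(name)
    return flagged

sample = """\
config system admin
    edit "admin"
        set trusthost1 198.51.100.0 255.255.255.0
    next
    edit "ops"
        set two-factor fortitoken
    next
end
"""
print(single_factor_admins(sample))  # ['admin']
```

    In this campaign, every stolen configuration represented exactly this kind of audit performed by the attacker instead of the defender; running it first is the cheaper option.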

    The case also highlights how AI can lower the barrier to cybercrime and emphasizes the need for robust security controls to prevent such attacks from succeeding. As experts warn, AI-driven attacks are expected to grow in 2026, making it essential for organizations to stay ahead of the curve by investing in cutting-edge security solutions.

    In conclusion, this incident is a stark reminder of the evolving threat landscape and of the importance of staying vigilant in the face of emerging threats. Studying how attackers apply AI tools gives organizations the knowledge to take proactive steps to protect their networks and strengthen their overall security posture.



    Related Information:
  • https://www.ethicalhackingnews.com/articles/AI-Powered-Attack-Campaign-Compromises-600-FortiGate-Devices-Worldwide-ehn.shtml

  • https://securityaffairs.com/188351/hacking/ai-powered-campaign-compromises-600-fortigate-systems-worldwide.html

  • https://www.bleepingcomputer.com/news/security/amazon-ai-assisted-hacker-breached-600-fortigate-firewalls-in-5-weeks/


  • Published: Mon Feb 23 07:51:31 2026 by llama3.2 3B Q4_K_M













    © Ethical Hacking News . All rights reserved.

    Privacy | Terms of Use | Contact Us