Ethical Hacking News
A recent AWS breach demonstrates the potential of AI-assisted cloud break-ins, with attackers gaining admin access in under 10 minutes. The incident highlights the need for organizations to prioritize security and implement effective countermeasures against this class of threat.
A recent incident highlights the growing threat of AI-assisted cloud break-ins, as a digital intruder gained access to an AWS cloud environment and achieved administrative privileges in under 10 minutes. According to Sysdig's Threat Research Team, this attack stood out not only for its speed but also for the "multiple indicators" suggesting that the attackers used large language models (LLMs) to automate most phases of the attack.
The incident occurred on November 28, when the intruder initially gained access by stealing valid test credentials from public Amazon S3 buckets. The credentials belonged to an identity and access management (IAM) user with multiple read and write permissions on AWS Lambda and restricted permissions on AWS Bedrock. Additionally, the S3 bucket contained Retrieval-Augmented Generation (RAG) data for AI models, which would later prove useful during the attack.
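The initial foothold was nothing exotic: valid credentials sitting in a publicly readable bucket. Defenders can hunt for that kind of exposure with simple pattern matching on bucket contents. A minimal sketch, assuming only the well-known AWS key-ID format (the function name and sample text are illustrative, not from the incident):

```python
import re

# AWS access key IDs follow a documented format: a four-letter prefix
# (AKIA for long-term IAM user keys, ASIA for temporary STS keys)
# followed by 16 uppercase alphanumeric characters.
ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_access_key_ids(text: str) -> list[str]:
    """Return any substrings that look like AWS access key IDs."""
    return ACCESS_KEY_RE.findall(text)

# Example: scanning the body of an object pulled from a public bucket.
# (AKIAIOSFODNN7EXAMPLE is AWS's own documentation placeholder key.)
sample = "aws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = ..."
print(find_access_key_ids(sample))  # ['AKIAIOSFODNN7EXAMPLE']
```

In practice you would run this over objects in any bucket whose policy allows public reads, which is exactly the misconfiguration the attackers exploited here.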
Already inside, the attackers first attempted privilege escalation using usernames such as "sysadmin" and "netadmin," names typically associated with admin-level privileges. However, they ultimately escalated through Lambda function code injection, abusing the compromised user's UpdateFunctionCode and UpdateFunctionConfiguration permissions.
To carry out this attack, the attackers replaced the code of an existing Lambda function named EC2-init three times, iterating on their target user. The first attempt targeted "adminGH," which lacked admin privileges, while subsequent attempts eventually succeeded in compromising the admin user "frick." The security sleuths noted that the comments in the code were written in Serbian, likely indicating the intruder's origin.
The attackers also wrote comprehensive exception handling logic in their code, including a plan to limit S3 bucket listings and an increase to the Lambda execution timeout from three seconds to 30 seconds. These factors, combined with the short time frame from credential theft to Lambda execution, "strongly suggest" that the code was written by an LLM, according to the threat hunters.
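Both halves of that escalation, rewriting a function's code and raising its timeout, land in CloudTrail as Lambda control-plane calls, so they make a natural detection. A minimal sketch of such a filter, assuming the standard CloudTrail record shape (the function name and sample events are ours, not Sysdig's):

```python
def flag_lambda_tampering(events: list[dict]) -> list[dict]:
    """Return CloudTrail events that rewrite Lambda code or configuration.

    CloudTrail records Lambda control-plane calls with an API-version
    suffix (e.g. 'UpdateFunctionCode20150331v2'), so match on the prefix
    rather than the exact event name.
    """
    prefixes = ("UpdateFunctionCode", "UpdateFunctionConfiguration")
    return [
        e for e in events
        if e.get("eventSource") == "lambda.amazonaws.com"
        and e.get("eventName", "").startswith(prefixes)
    ]

# Illustrative events mirroring the incident: code pushed to EC2-init,
# next to benign unrelated activity that should not be flagged.
sample_events = [
    {"eventSource": "lambda.amazonaws.com",
     "eventName": "UpdateFunctionCode20150331v2",
     "requestParameters": {"functionName": "EC2-init"}},
    {"eventSource": "s3.amazonaws.com", "eventName": "ListBuckets"},
]
print([h["eventName"] for h in flag_lambda_tampering(sample_events)])
```

Any hit on a production function that is not part of a known deployment pipeline deserves a look, especially when paired with a timeout bump like the 3-to-30-second change seen here.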
With admin access secured, the intruder set about collecting account IDs and attempting to assume OrganizationAccountAccessRole across AWS accounts. The attempts included account IDs that did not belong to the victim organization: two with ascending and descending digits (123456789012 and 210987654321), and one ID that appeared to belong to a legitimate external account.
This behavior is consistent with patterns often attributed to AI hallucinations, providing further potential evidence of LLM-assisted activity. In total, the attacker gained access to 19 AWS identities, including six different IAM roles across 14 sessions, plus five other IAM users.
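That cross-account spray is also detectable: every attempt appears in CloudTrail as an sts:AssumeRole call whose target role ARN embeds the account ID, which can be checked against the organization's real account list. A minimal sketch under those assumptions (function name and sample data are illustrative; the placeholder-looking 123456789012 is one of the IDs the attacker actually tried):

```python
def flag_foreign_assume_role(events: list[dict], org_accounts: set[str]) -> list[str]:
    """Return target account IDs from AssumeRole calls aimed outside the org."""
    flagged = []
    for e in events:
        if e.get("eventName") != "AssumeRole":
            continue
        arn = e.get("requestParameters", {}).get("roleArn", "")
        # Role ARN format: arn:aws:iam::<account-id>:role/<role-name>,
        # so the account ID is the fifth colon-separated field.
        parts = arn.split(":")
        if len(parts) > 4 and parts[4] not in org_accounts:
            flagged.append(parts[4])
    return flagged

events = [
    {"eventName": "AssumeRole",
     "requestParameters": {"roleArn":
         "arn:aws:iam::123456789012:role/OrganizationAccountAccessRole"}},
    {"eventName": "AssumeRole",
     "requestParameters": {"roleArn":
         "arn:aws:iam::555500001111:role/OrganizationAccountAccessRole"}},
]
# Here 555500001111 stands in for a legitimate org member account.
print(flag_foreign_assume_role(events, org_accounts={"555500001111"}))  # ['123456789012']
```

A burst of failed assumptions against IDs nobody in the org owns is a strong signal on its own, hallucinated or not.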
With their new admin user account, the attackers snatched up a ton of sensitive data: secrets from Secrets Manager, SSM parameters from EC2 Systems Manager, CloudWatch logs, Lambda function source code, internal data from S3 buckets, and CloudTrail events. They then turned to the LLMjacking part of the attack, gaining access to the victim's cloud-hosted LLMs.
By abusing the user's Amazon Bedrock access, they invoked multiple models including Claude, DeepSeek, Llama, Amazon Nova Premier, Amazon Titan Image Generator, and Cohere Embed. Sysdig notes that "invoking Bedrock models that no one in the account uses is a red flag," and enterprises can create Service Control Policies (SCPs) to allow only certain models to be invoked.
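The SCP approach Sysdig alludes to can be sketched as a deny-by-default policy on Bedrock invocation: deny InvokeModel for anything outside an approved list of foundation-model ARNs. The helper below just assembles such a policy document; the model ARN shown is illustrative, not a recommendation:

```python
import json

def bedrock_allowlist_scp(allowed_model_arns: list[str]) -> dict:
    """Build an SCP denying Bedrock model invocation outside an allowlist.

    NotResource inverts the match: the Deny applies to every model ARN
    *except* those listed, so only approved models remain invocable.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUnapprovedBedrockModels",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "NotResource": allowed_model_arns,
        }],
    }

# Example: permit only Anthropic Claude foundation models.
scp = bedrock_allowlist_scp(
    ["arn:aws:bedrock:*::foundation-model/anthropic.claude-*"]
)
print(json.dumps(scp, indent=2))
```

Attached at the organization level, a policy like this would have blocked the attacker's calls to DeepSeek, Llama, and the other models the victim never used.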
The attackers focused on EC2, querying machine images suitable for deep learning applications. They also began using the victim's S3 bucket for storage, and one of the scripts stored there looked like it was designed for ML training, yet it referenced a GitHub repository that doesn't exist, suggesting an LLM hallucinated the repo while generating the code.
The researchers say they can't determine the attacker's goal, possibly model training or reselling compute access. However, they note that the script launched a publicly accessible JupyterLab server on port 8888, providing a backdoor to the instance that doesn't require AWS credentials.
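A credential-free backdoor on port 8888 only works if a security group exposes that port to the internet, which is a cheap thing to audit. A minimal sketch of such a check, assuming rule dicts shaped like the IpPermissions entries EC2's describe_security_groups returns (function name and sample rules are ours):

```python
def publicly_exposed_ports(ingress_rules: list[dict],
                           watch_ports: set[int]) -> set[int]:
    """Return watched ports that any ingress rule opens to 0.0.0.0/0."""
    exposed = set()
    for rule in ingress_rules:
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        )
        if not open_to_world:
            continue
        # A rule covers the inclusive port range FromPort..ToPort.
        lo, hi = rule.get("FromPort", 0), rule.get("ToPort", 65535)
        exposed |= {p for p in watch_ports if lo <= p <= hi}
    return exposed

# Example: a rule exposing JupyterLab's default port to the internet.
rules = [{"FromPort": 8888, "ToPort": 8888,
          "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]
print(publicly_exposed_ports(rules, {22, 8888}))  # {8888}
```

Watching 8888 alongside the usual suspects (22, 3389) would catch exactly the kind of JupyterLab exposure seen in this intrusion.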
The attackers terminated the instance after five minutes for unknown reasons, leaving behind a trail of clues pointing to an AI-assisted attack. The incident highlights the growing threat of AI-assisted cloud break-ins and the need for organizations to implement robust security measures to prevent such attacks.
Related Information:
https://www.ethicalhackingnews.com/articles/AWS-Intruder-Achieves-Admin-Access-in-Under-10-Minutes-Thanks-to-AI-Assisted-Cloud-Break-in-ehn.shtml
https://go.theregister.com/feed/www.theregister.com/2026/02/04/aws_cloud_breakin_ai_assist/
https://www.esecurityplanet.com/threats/ai-driven-attack-gains-aws-admin-privileges-in-under-10-minutes/
https://www.sysdig.com/blog/ai-assisted-cloud-intrusion-achieves-admin-access-in-8-minutes
Published: Wed Feb 4 15:25:39 2026 by llama3.2 3B Q4_K_M