It took less than 10 minutes. That's how long a digital intruder, aided by AI, needed to infiltrate an AWS cloud environment and gain administrative access - a stark glimpse into how quickly AI is reshaping cyberattacks, and it's happening right now.
The Sysdig Threat Research Team observed the break-in on November 28th, and it's not just the speed that's concerning: the criminals used large language models to automate almost every step of the attack, from initial reconnaissance to writing malicious code.
"The threat actor achieved administrative privileges in under 10 minutes, compromising 19 distinct AWS principals and abusing both Bedrock models and GPU compute resources," said Michael Clark and Alessandro Brucato, threat researchers at Sysdig. But here's where it gets controversial: they believe AI-assisted offensive operations were at play.
The attackers initially gained access by stealing valid test credentials from public Amazon S3 buckets. These credentials belonged to an IAM user with multiple permissions on AWS Lambda and restricted access to AWS Bedrock. Notably, the same S3 bucket also contained valuable AI model data, which the attackers later used to their advantage.
To prevent such credential theft, Sysdig recommends organizations avoid leaving access keys in public buckets. They suggest using temporary credentials for IAM roles and, for those who must grant long-term credentials, ensure they're rotated periodically.
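As a minimal sketch of the rotation advice, the check below flags long-lived access keys older than a chosen threshold. The 90-day limit and the key metadata shape (mimicking what boto3's `iam.list_access_keys()` returns) are assumptions for illustration, not part of the Sysdig report.

```python
from datetime import datetime, timedelta, timezone

# Assumed rotation policy for this example; pick a limit that fits your org.
ROTATION_MAX_AGE = timedelta(days=90)

def stale_keys(keys, now=None):
    """Return the IDs of access keys older than the rotation threshold.

    `keys` is a list of dicts shaped like boto3's iam.list_access_keys()
    metadata entries: {"AccessKeyId": str, "CreateDate": datetime}.
    """
    now = now or datetime.now(timezone.utc)
    return [k["AccessKeyId"] for k in keys
            if now - k["CreateDate"] > ROTATION_MAX_AGE]
```

In practice you would feed this the live key metadata per IAM user and alert on (or disable) anything it returns.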
The attackers tried common admin-level usernames like "sysadmin" and "netadmin" but ultimately achieved privilege escalation through Lambda function code injection. The compromised user's UpdateFunctionCode and UpdateFunctionConfiguration permissions were abused in this process.
The security researchers noted that the code's comments were written in Serbian, possibly indicating the intruder's origin. The code itself listed all IAM users and their access keys, created access keys for "frick," and listed S3 buckets with their contents. It even contained "comprehensive" exception handling, including logic to limit S3 bucket listings and increase Lambda execution timeout.
These factors, combined with the speed of the attack, strongly suggest the code was written by an LLM, according to the threat hunters.
The miscreant then collected account IDs and attempted to assume OrganizationAccountAccessRole in all AWS environments. Interestingly, they included IDs that didn't belong to the victim organization, which the researchers believe could be attributed to AI hallucinations.
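One way to spot that behavior is to flag `AssumeRole` attempts whose target account is not on the organization's own roster. The sketch below is a hypothetical illustration: the account IDs are placeholders, and the event shape mimics CloudTrail's `requestParameters.roleArn` field.

```python
# Hypothetical roster of account IDs belonging to the organization.
KNOWN_ORG_ACCOUNTS = {"111122223333", "444455556666"}

def foreign_assume_role_attempts(events, known=KNOWN_ORG_ACCOUNTS):
    """Flag sts:AssumeRole attempts targeting accounts outside the org.

    `events` mimic CloudTrail records with an "eventName" key and a
    "requestParameters" dict containing "roleArn". The account ID is
    the fifth colon-separated field of a role ARN.
    """
    hits = []
    for e in events:
        if e.get("eventName") != "AssumeRole":
            continue
        arn = e.get("requestParameters", {}).get("roleArn", "")
        parts = arn.split(":")
        if len(parts) > 4 and parts[4] not in known:
            hits.append(arn)
    return hits
```

An attacker hammering hallucinated account IDs would light this up repeatedly, while legitimate cross-account activity inside the org would not.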
"This behavior is consistent with patterns often attributed to AI hallucinations, providing further potential evidence of LLM-assisted activity," wrote Clark and Brucato.
In total, the attacker gained access to 19 AWS identities, including six different IAM roles across 14 sessions and five other IAM users. With their new admin user account, the criminals stole sensitive data, including secrets from Secrets Manager, SSM parameters from EC2 Systems Manager, CloudWatch logs, Lambda function source code, internal data from S3 buckets, and CloudTrail events.
Next, the attackers turned to LLMjacking - using a compromised cloud account to access cloud-hosted LLMs. They abused the user's Amazon Bedrock access to invoke multiple models, including Claude, DeepSeek, Llama, Amazon Nova Premier, Amazon Titan Image Generator, and Cohere Embed.
Sysdig notes that invoking Bedrock models that no one in the account uses is a red flag. Enterprises can create Service Control Policies (SCPs) to allow only certain models to be invoked, adding an extra layer of security.
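A hedged sketch of such an SCP is below, using the deny-everything-except pattern: it denies Bedrock model invocation for any model outside an approved list. The Claude model ARN is only an example; swap in the foundation-model ARNs your organization actually approves.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnapprovedBedrockModels",
      "Effect": "Deny",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "NotResource": "arn:aws:bedrock:*::foundation-model/anthropic.claude-*"
    }
  ]
}
```

Because SCPs apply account-wide, even a compromised admin principal inside the account cannot invoke models outside the approved set.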
After Bedrock, the intruder focused on EC2, querying machine images suitable for deep learning applications. They also used the victim's S3 bucket for storage, and one of the scripts stored there seemed designed for ML training but referenced a non-existent GitHub repository, suggesting an LLM hallucination.
The researchers couldn't determine the attacker's goal, but they noted that the script launched a publicly accessible JupyterLab server on port 8888, providing a backdoor to the instance that didn't require AWS credentials. However, the instance was terminated after five minutes for unknown reasons.
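A simple way to catch that kind of backdoor is to scan security-group ingress rules for sensitive ports open to the world. The sketch below is an assumption-laden illustration: the rule shape mirrors EC2's `describe_security_groups()` `IpPermissions` entries, and 8888 (JupyterLab's default) is just the port relevant to this incident.

```python
def publicly_exposed(rules, port=8888):
    """Return True if any ingress rule opens `port` to 0.0.0.0/0.

    `rules` is a list of dicts shaped like the IpPermissions entries
    returned by EC2's describe_security_groups(): each has optional
    "FromPort"/"ToPort" ints and an "IpRanges" list of {"CidrIp": str}.
    """
    for r in rules:
        from_p, to_p = r.get("FromPort"), r.get("ToPort")
        if from_p is None or to_p is None or not (from_p <= port <= to_p):
            continue
        if any(ip.get("CidrIp") == "0.0.0.0/0" for ip in r.get("IpRanges", [])):
            return True
    return False
```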
This incident is just one example of how attackers are increasingly relying on AI to help them at every stage of the attack chain. Some security experts warn that it's only a matter of time before criminals can fully automate attacks at scale.
To defend against such intrusions, organizations should focus on hardening identity security and access management. Apply principles of least privilege to all IAM users and roles, restrict UpdateFunctionConfiguration and PassRole permissions in Lambda, limit UpdateFunctionCode permissions to specific functions, and ensure S3 buckets containing sensitive data are not publicly accessible. It's also a good idea to enable model invocation logging for Amazon Bedrock to detect unauthorized usage.
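As one hedged example of that least-privilege advice, the IAM policy fragment below scopes UpdateFunctionCode to a single named function rather than granting it on every Lambda in the account. The region, account ID, and function name are placeholders to replace with your own.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ScopedLambdaCodeUpdates",
      "Effect": "Allow",
      "Action": "lambda:UpdateFunctionCode",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:approved-function"
    }
  ]
}
```

Had the compromised user's permissions been scoped this way, the code-injection path used for privilege escalation in this attack would have been far narrower.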
AI-assisted attacks like this one show that the future of cybersecurity is already here, and it's time to adapt and stay vigilant. Do you think we're prepared for the challenges ahead? Let's discuss in the comments!