Ethical Hacking News
Anthropic's Model Context Protocol (MCP) has been found to contain a critical design flaw that enables remote code execution, posing a significant threat to the artificial intelligence (AI) supply chain. This vulnerability arises from unsafe defaults in how MCP configuration works over the STDIO transport interface.
Anthropic's Model Context Protocol (MCP) contains a critical design flaw that enables remote code execution, posing a significant threat to the AI supply chain.The vulnerability allows attackers to access sensitive user data, internal databases, API keys, and chat histories on systems running vulnerable MCP implementations.The flaw arises from unsafe defaults in how MCP configuration works over the STDIO (standard input/output) transport interface.More than 7,000 publicly accessible servers and software packages totaling over 150 million downloads are affected by this vulnerability.Ackers can inject arbitrary commands via MCP configuration files, which can then be executed on the server.Security experts recommend blocking public IP access to sensitive services, monitoring MCP tool invocations, running MCP-enabled services in a sandbox, and treating external MCP configuration input as untrusted.
Anthropic's Model Context Protocol (MCP) has been found to contain a critical design flaw that enables remote code execution, posing a significant threat to the artificial intelligence (AI) supply chain. This vulnerability, which has been independently reported by several researchers over the past year, can allow attackers to access sensitive user data, internal databases, API keys, and chat histories on systems running vulnerable MCP implementations.
The MCP is an open-source protocol designed to enable secure communication between AI models and their environments. It is used in various applications, including language translation, text summarization, and chatbots. However, the protocol's architecture has been found to contain a fundamental flaw that makes it susceptible to remote code execution (RCE). This vulnerability arises from unsafe defaults in how MCP configuration works over the STDIO (standard input/output) transport interface.
According to OX Security researchers Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar, the systemic vulnerability is baked into Anthropic's official MCP software development kit (SDK) across any supported language, including Python, TypeScript, Java, and Rust. In all, it affects more than 7,000 publicly accessible servers and software packages totaling over 150 million downloads.
The researchers discovered that the flaw allows attackers to inject arbitrary commands via MCP configuration files, which can then be executed on the server. This vulnerability is triggered by the use of unsafe defaults in MCP configuration, which can bypass security measures and enable remote code execution. The researchers identified ten vulnerabilities spanning popular projects like LiteLLM, LangChain, LangFlow, Flowise, LettaAI, and LangBot.
The first vulnerability, CVE-2025-65720 (GPT Researcher), is related to the GPT research framework. The second vulnerability, CVE-2026-30623 (LiteLLM), has been patched by Anthropic, but other vulnerabilities remain unaddressed in Anthropic's MCP reference implementation.
"These vulnerabilities fall under four broad categories," said OX Security researchers. "Unauthenticated and authenticated command injection via MCP STDIO, unauthenticated command injection via direct STDIO configuration with hardening bypass, unauthenticated command injection via MCP configuration edit through zero-click prompt injection, and unauthenticated command injection through MCP marketplaces via network requests, triggering hidden STDIO configurations."
Anthropic has declined to modify the protocol's architecture, citing the behavior as "expected." However, some vendors have issued patches for specific vulnerabilities. The researchers emphasized that shifting responsibility to implementers does not transfer the risk. It just obscures who created it.
"The Anthropic MCP vulnerability is a supply chain event rather than a single CVE," said OX Security. "One architectural decision, made once, propagated silently into every language, every downstream library, and every project that trusted the protocol to be what it appeared to be. Shifting responsibility to implementers does not transfer the risk. It just obscures who created it."
To counter this threat, security experts recommend blocking public IP access to sensitive services, monitoring MCP tool invocations, running MCP-enabled services in a sandbox, treating external MCP configuration input as untrusted, and only installing MCP servers from verified sources.
The discovery of this critical flaw highlights the importance of robust testing and validation in open-source projects. It also underscores the need for developers to prioritize security in their applications, particularly when integrating third-party libraries or protocols.
What made this a supply chain event rather than a single CVE is that one architectural decision, made once, propagated silently into every language, every downstream library, and every project that trusted the protocol to be what it appeared to be. Shifting responsibility to implementers does not transfer the risk. It just obscures who created it.
As AI-powered integrations continue to expand the attack surface, it is essential for developers, vendors, and users to stay vigilant and prioritize security in their applications. The Anthropic MCP vulnerability serves as a stark reminder of the importance of robust testing, validation, and responsible open-source development practices.
Related Information:
https://www.ethicalhackingnews.com/articles/A-Critical-Flaw-in-Anthropics-MCP-Design-Exposes-AI-Supply-Chain-to-Remote-Code-Execution-ehn.shtml
Published: Mon Apr 20 06:46:03 2026 by llama3.2 3B Q4_K_M