Ethical Hacking News
Anthropic's Model Context Protocol (MCP) has been found to contain a catastrophic flaw that can be exploited to gain complete control over a system. Over 200,000 servers and millions of downstream users are at risk due to the vulnerability, which was identified by security researchers at Ox Research.
The Anthropic Model Context Protocol (MCP) has a catastrophic flaw that can be exploited by attackers to gain complete control over systems.The vulnerability affects over 200,000 servers and millions of downstream users, primarily due to the use of STDIO as a local transport mechanism for an AI application.Four types of vulnerabilities have been identified through MCP, including unauthenticated and authenticated command injection, hardening bypass, zero-click prompt injection across IDEs, and exploitation of MCP marketplaces.Several popular AI agents and frameworks are vulnerable to these attacks, including LangFlow, GPT Researcher, Upsonic, Flowise, Windsurf, Claude Code, Cursor, Gemini-CLI, and GitHub Copilot.Anthropic has been criticized for its handling of the vulnerability and has only recently released an updated security policy that includes guidance on using MCP adapters with caution.
The recent discovery of a catastrophic flaw in Anthropic's Model Context Protocol (MCP) has sent shockwaves through the cybersecurity community. The vulnerability, which affects over 200,000 servers and millions of downstream users, has been identified by security researchers at Ox Research as a design flaw that can be exploited to gain complete control over a system.
The MCP protocol is an open-source framework used by Anthropic's large language models (LLMs) to connect to external data, systems, and one another. It works across multiple programming languages, including Python, TypeScript, Java, and Rust, making it accessible to developers worldwide. However, this accessibility comes with a significant price: a root vulnerability that can be exploited to launch devastating attacks.
According to Ox Research, the root issue lies in MCP's use of STDIO (standard input/output) as a local transport mechanism for an AI application to spawn an MCP server as a subprocess. While this feature seems innocuous at first glance, it actually allows anyone to run any arbitrary OS command, which can be exploited to launch a remote code execution (RCE) attack.
The Ox researchers have identified four different types of vulnerabilities that can be delivered through MCP, including unauthenticated and authenticated command injection, hardening bypass, zero-click prompt injection across AI integrated development environments (IDEs), and exploitation of MCP marketplaces. These vulnerabilities can be used to compromise systems, steal sensitive information, and even launch devastating attacks.
One of the most critical vulnerabilities is unauthenticated and authenticated command injection, which allows attackers to enter user-controlled commands that will run directly on the server without authentication or sanitization. This type of attack can lead to total system compromise and is particularly worrying for AI frameworks with publicly facing UIs.
The Ox researchers have also identified a second vulnerability, unauthenticated command injection with hardening bypass, which allows miscreants to bypass protections and user input sanitization implemented by developers to run commands directly on the server. This type of attack can be particularly difficult to detect and has already been exploited in multiple cases.
Furthermore, the researchers have discovered that several popular AI agents and frameworks, including LangFlow, GPT Researcher, Upsonic, Flowise, Windsurf, Claude Code, Cursor, Gemini-CLI, and GitHub Copilot, are vulnerable to these types of attacks. These vulnerabilities can be delivered through MCP adapters, which are used by developers to integrate MCP with their AI applications.
The Ox researchers have also identified a fourth vulnerability that can be delivered through MCP marketplaces, where malicious code can be uploaded and executed on unsuspecting users' systems. This type of attack can be particularly devastating, as it allows attackers to install malware on thousands of developer machines before detection.
Anthropic, the company behind MCP, has been criticized for its handling of this vulnerability. The company initially declined to patch the issue, citing the behavior as "expected," despite 10 high- and critical-severity CVEs being issued for individual open-source tools and AI agents that use MCP. However, after repeated requests from Ox Research, Anthropic finally released an updated security policy that includes guidance on using MCP adapters with caution.
The discovery of this catastrophic flaw highlights the need for greater transparency and accountability in the development of open-source software. It also underscores the importance of responsible disclosure practices, where researchers and developers work together to identify and address vulnerabilities before they can be exploited by malicious actors.
In conclusion, the Anthropic MCP flaw is a wake-up call for the cybersecurity community, highlighting the need for vigilance and proactive measures to protect against emerging threats. As the use of AI and machine learning continues to grow, it is essential that developers and users prioritize security and take steps to mitigate vulnerabilities like this one.
Related Information:
https://www.ethicalhackingnews.com/articles/The-Anthropic-MCP-Flaw-A-Catastrophic-Vulnerability-Exposed-ehn.shtml
https://go.theregister.com/feed/www.theregister.com/2026/04/16/anthropic_mcp_design_flaw/
https://www.theregister.com/2026/04/16/anthropic_mcp_design_flaw/?td=keepreading
https://medium.com/@cdcore/mcp-is-broken-and-anthropic-just-admitted-it-7eeb8ee41933
https://cybersecuritynews.com/chatgpt-malware-and-phishing/
https://westoahu.hawaii.edu/cyber/global-weekly-exec-summary/ai-in-apt-attacks/
https://thehackernews.com/2026/03/critical-langflow-flaw-cve-2026-33017.html
https://www.infosecurity-magazine.com/news/hackers-exploit-critical-langflow/
https://www.socinvestigation.com/comprehensive-list-of-apt-threat-groups-motives-and-attack-methods/
https://breach-hq.com/threat-actors
https://www.msn.com/en-us/news/technology/are-you-using-these-tp-link-routers-russian-hackers-are-targeting-them/ar-AA20ouQm
https://en.wikipedia.org/wiki/List_of_hacker_groups
https://hackmag.com/news/claude-hacker
https://www.anthropic.com/news/disrupting-AI-espionage
https://www.itpro.com/technology/artificial-intelligence/google-says-hacker-groups-are-using-gemini-to-augment-attacks-and-companies-are-even-stealing-its-models
https://nationalcioreview.com/articles-insights/extra-bytes/google-discloses-gemini-ai-abuse-by-apt-groups-for-recon-and-exploit-research/
https://cybersecuritynews.com/apt32-hackers-weaponizing-github/
https://cyberpress.org/exploit-github-copilot-vulnerability/
Published: Thu Apr 16 23:40:09 2026 by llama3.2 3B Q4_K_M