Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

Ai-Powered Collaboration Tool Claude Leaked Security Vulnerabilities: A Cautionary Tale for Enterprise Adoption


Ai-Powered Collaboration Tool Claude Leaked Security Vulnerabilities: A Cautionary Tale for Enterprise Adoption

A recent discovery has revealed multiple security vulnerabilities in Anthropic's AI coding tool, Claude, which could have allowed attackers to remotely execute code on users' machines and steal API keys by injecting malicious configurations into repositories. The flaws were discovered by Check Point Software researchers Aviv Donenfeld and Oded Vanunu, who reported their findings to Anthropic in July 2025.

  • The Anthropic AI coding tool Claude has been found to contain multiple security vulnerabilities.
  • Claude's design allows project-level configuration files to be embedded directly within repositories, making it easier for development teams to collaborate but also introducing security risks.
  • Three identified flaws allow attackers to remotely execute code on users' machines and steal API keys by exploiting Hooks, MCP consent bypass, and repository-controlled configuration settings.
  • The vulnerabilities demonstrate a supply chain threat as enterprises incorporate AI coding tools like Claude into their development processes.


  • Anthropic's AI coding tool, Claude, has been found to contain multiple security vulnerabilities that could have allowed attackers to remotely execute code on users' machines and steal API keys by injecting malicious configurations into repositories. The flaws were discovered by Check Point Software researchers Aviv Donenfeld and Oded Vanunu, who reported their findings to Anthropic in July 2025.

    The vulnerabilities are a result of Claude's design, which is intended to make it easier for development teams to collaborate. The AI coding tool enables this by embedding project-level configuration files directly within repositories, so that when a developer clones a project, they automatically apply the same settings used by their teammates. Any contributor with commit access can modify these files.

    The first of the three flaws involved abusing Claude's Hooks feature to achieve remote code execution. Hooks are user-defined shell commands that execute at various points in the tool's lifecycle, ensuring that specific, predefined actions run when predetermined conditions are met, instead of allowing the model to choose. Because Hooks are defined in .claude/settings.json, the repository-controlled configuration file, anyone with commit access can define hooks that will execute shell commands on every other collaborator's machine when they work on the project.

    Donenfeld and Vanunu found that a malicious hook could be configured to open a calculator app when someone opened the project. While this is not inherently malicious, it demonstrates how an attacker could use this feature to execute any shell command – such as downloading and running a malicious payload like a reverse shell. The researchers also demonstrated in a video how they exploited this vulnerability to achieve remote code execution.

    The second vulnerability allowed RCE by abusing MCP consent bypass. Claude integrates with external tools using Model Context Protocol (MCP), and MCP servers can also be configured in the same repository via .mcp.json configuration file. Donenfeld and Vanunu found that two repository-controlled configuration settings could override safeguards and automatically approve all MCP servers.

    When they started Claude Code with this configuration, a severe vulnerability was revealed: their command executed immediately upon running Claude – before the user could even read the trust dialog. The researchers demonstrated in a video how they exploited this vulnerability to remotely execute a reverse shell and completely compromise a victim's machine.

    The third flaw allowed attackers to exploit API key theft. One variable, ANTHROPIC_BASE_URL, controlled the endpoint for all Claude API communications, and while it's supposed to point to Anthropic's servers, it can be overridden in the project's configuration files to instead point to attacker-controlled servers.

    Donenfeld and Vanunu configured ANTHROPIC_BASE_URL to route through their local proxy, and watched all Claude Code's API traffic in real time. Every one of Claude's calls to Anthropic servers "included the authorization header – our full Anthropic API key, completely exposed in plaintext." An attacker could abuse this trick to redirect traffic and steal a developer's active API key.

    This gave the researchers the ability to upload files to the shared workspace – but did not allow downloads. According to Claude's documentation, users can only download files created by skills or the code execution tool. Donenfeld and Vanunu explored whether an attacker could simply ask Claude to regenerate an existing file using the stolen API key.

    If successful, this would convert a non-downloadable file into a workspace artifact that is eligible for download. The researchers confirmed that a miscreant using a stolen API key could gain complete read and write access to all workspace files: deleting or changing sensitive files or even uploading malicious files to poison the workspace or exceed the 100 GB storage space quota.

    Anthropic implemented fixes for all three vulnerabilities, publishing GitHub Security Advisories for each. However, the discovery highlights a worrisome supply chain threat as enterprises incorporate AI coding tools like Claude into their development processes and essentially turn configuration files into a new attack surface.

    "The ability to execute arbitrary commands through repository-controlled configuration files created severe supply chain risks, where a single malicious commit could compromise any developer working with the affected repository," Donenfeld and Vanunu said in a report. "The integration of AI into development workflows brings tremendous productivity benefits, but also introduces new attack surfaces that weren't present in traditional tools."



    Related Information:
  • https://www.ethicalhackingnews.com/articles/Ai-Powered-Collaboration-Tool-Claude-Leaked-Security-Vulnerabilities-A-Cautionary-Tale-for-Enterprise-Adoption-ehn.shtml

  • https://go.theregister.com/feed/www.theregister.com/2026/02/26/clade_code_cves/

  • https://www.theregister.com/2026/02/26/clade_code_cves/

  • https://cybersecuritynews.com/claude-desktop-extensions-0-click-vulnerability/


  • Published: Wed Feb 25 19:01:53 2026 by llama3.2 3B Q4_K_M













    © Ethical Hacking News . All rights reserved.

    Privacy | Terms of Use | Contact Us