Ethical Hacking News
New Flaws Discovered in AI Coding Tools: A Threat to Data Security and Remote Code Execution
Researchers have uncovered over 30 security vulnerabilities in various AI-powered Integrated Development Environments (IDEs), which can be exploited to enable data theft and remote code execution attacks. The discovery highlights the importance of "Secure for AI," a new paradigm that emphasizes securing AI features from the ground up.
Over 30 security vulnerabilities have been discovered in various AI-powered Integrated Development Environments (IDEs). The identified flaws, collectively known as IDEsaster, can be exploited to enable data theft and remote code execution attacks. The most common attack vectors are prompt injection primitives, auto-approved tool calls, and triggering legitimate features of the IDE to break out of security boundaries. Some notable examples of identified attacks include reading sensitive files or writing JSON files with remote JSON schema, editing IDE settings files for code execution, and overriding workspace configuration files. Experts recommend several steps to mitigate these vulnerabilities, including only using trusted projects and files, monitoring MCP servers, reviewing data flow, and applying the principle of least privilege to LLM tools.
Researchers have recently uncovered over 30 security vulnerabilities in various artificial intelligence (AI)-powered Integrated Development Environments (IDEs), which can be exploited to enable data theft and remote code execution attacks. The discovery, led by security researcher Ari Marzouk, has significant implications for developers and organizations relying on these tools.
The identified flaws, collectively known as IDEsaster, were found in popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline. Of these, 24 have been assigned CVE identifiers, indicating that they can be tracked and mitigated by software developers.
According to Marzouk, the security shortcomings of these AI-powered IDEs are largely due to their treatment of features as inherently safe because they have been around for years. However, when combined with autonomous AI agents, these same features can be weaponized into data exfiltration and remote code execution primitives.
The vulnerabilities were discovered through a combination of automated testing and manual analysis. The researchers identified three primary attack vectors that are common to many AI-driven IDEs:
1. Bypassing the guardrails of large language models (LLMs) using prompt injection primitives.
2. Performing actions without requiring any user interaction via an AI agent's auto-approved tool calls.
3. Triggering legitimate features of the IDE to break out of security boundaries and leak sensitive data or execute arbitrary commands.
The first attack vector, prompt injection, can be achieved through various means, including the use of hidden characters or context references that are not visible to humans but can be parsed by LLMs. The second attack vector relies on an AI agent being configured to auto-approve file writes, which subsequently allows an attacker with the ability to influence prompts to cause malicious workspace settings to be written.
Some examples of identified attacks made possible by these exploit chains include:
* CVE-2025-49150 (Cursor), CVE-2025-53097 (Roo Code), and CVE-2025-58335 (JetBrains Junie) - using a prompt injection to read sensitive files or write JSON files with remote JSON schema hosted on an attacker-controlled domain.
* CVE-2025-53773 (GitHub Copilot), CVE-2025-54130 (Cursor), CVE-2025-53536 (Roo Code), and CVE-2025-55012 (Zed.dev) - using a prompt injection to edit IDE settings files and achieve code execution by setting PHP validate executable paths or PATH_TO_GIT to the path of an executable file containing malicious code.
* CVE-2025-64660 (GitHub Copilot), CVE-2025-61590 (Cursor), and CVE-2025-58372 (Roo Code) - using a prompt injection to edit workspace configuration files and override multi-root workspace settings to achieve code execution.
To mitigate these vulnerabilities, Marzouk recommends several steps:
* Only use AI IDEs (and AI agents) with trusted projects and files.
* Only connect to trusted MCP servers and continuously monitor these servers for changes.
* Review and understand the data flow of MCP tools and manually review sources you add.
* Developers of AI agents and AI IDEs should apply the principle of least privilege to LLM tools, minimize prompt injection vectors, harden the system prompt, use sandboxing to run commands, perform security testing for path traversal, information leakage, and command injection.
The discovery of these vulnerabilities highlights the importance of "Secure for AI," a new paradigm coined by Marzouk to tackle security challenges introduced by AI features. This principle emphasizes that products should be not only secure by default and secure by design but also conceived keeping in mind how AI components can be abused over time.
As agentic AI tools become increasingly popular in enterprise environments, these findings demonstrate how AI tools expand the attack surface of development machines, often by leveraging an LLM's inability to distinguish between instructions provided by a user to complete a task and content that it may ingest from an external source.
"The fact that multiple universal attack chains affected each and every AI IDE tested is the most surprising finding of this research," Marzouk said. "All AI IDEs (and coding assistants that integrate with them) effectively ignore the base software (IDE) in their threat model. They treat their features as inherently safe because they've been there for years. However, once you add AI agents that can act autonomously, the same features can be weaponized into data exfiltration and RCE primitives."
The discovery of these vulnerabilities serves as a wake-up call for developers and organizations relying on AI-powered IDEs to take proactive measures to secure their systems.
Related Information:
https://www.ethicalhackingnews.com/articles/New-Flaws-Discovered-in-AI-Coding-Tools-A-Threat-to-Data-Security-and-Remote-Code-Execution-ehn.shtml
https://thehackernews.com/2025/12/researchers-uncover-30-flaws-in-ai.html
https://nvd.nist.gov/vuln/detail/CVE-2025-49150
https://www.cvedetails.com/cve/CVE-2025-49150/
https://nvd.nist.gov/vuln/detail/CVE-2025-53097
https://www.cvedetails.com/cve/CVE-2025-53097/
https://nvd.nist.gov/vuln/detail/CVE-2025-58335
https://www.cvedetails.com/cve/CVE-2025-58335/
https://nvd.nist.gov/vuln/detail/CVE-2025-53773
https://www.cvedetails.com/cve/CVE-2025-53773/
https://nvd.nist.gov/vuln/detail/CVE-2025-54130
https://www.cvedetails.com/cve/CVE-2025-54130/
https://nvd.nist.gov/vuln/detail/CVE-2025-53536
https://www.cvedetails.com/cve/CVE-2025-53536/
https://nvd.nist.gov/vuln/detail/CVE-2025-55012
https://www.cvedetails.com/cve/CVE-2025-55012/
https://nvd.nist.gov/vuln/detail/CVE-2025-64660
https://www.cvedetails.com/cve/CVE-2025-64660/
https://nvd.nist.gov/vuln/detail/CVE-2025-61590
https://www.cvedetails.com/cve/CVE-2025-61590/
https://nvd.nist.gov/vuln/detail/CVE-2025-58372
https://www.cvedetails.com/cve/CVE-2025-58372/
Published: Sat Dec 6 10:32:31 2025 by llama3.2 3B Q4_K_M