

Ethical Hacking News

A Persistent Threat Lurks in the Shadows: The Vulnerability in Cursor AI YOLO Mode




A persistent remote code execution bug has been discovered in popular AI-powered coding tool Cursor, allowing an attacker to secretly modify the Model Context Protocol (MCP) configuration and execute malicious commands silently on the victim's machine. The vulnerability highlights a critical weakness in the trust model behind AI-assisted development environments and underscores the need for greater security awareness and testing of these emerging technologies.

  • A persistent remote code execution bug, dubbed "MCPoison," has been discovered in the AI-powered coding tool Cursor.
  • The vulnerability allows an attacker to silently modify an already-approved Model Context Protocol (MCP) configuration without triggering a new approval prompt.
  • An attacker can exploit this trust by adding a benign MCP configuration with a harmless command and later modifying it to execute a malicious command silently on the victim's machine.
  • This highlights a critical weakness in the trust model behind AI-assisted development environments, raising security concerns for teams integrating LLMs and automation into their workflows.



  • In a discovery that highlights the growing risks around artificial intelligence (AI) and machine learning (ML) tooling, cybersecurity firm Check Point has uncovered a persistent remote code execution bug in the popular AI-powered coding tool Cursor. The vulnerability, dubbed "MCPoison," allows an attacker to modify an already-approved Model Context Protocol (MCP) configuration, silently swapping in a malicious command without any new user prompt.

    The discovery was made by Check Point researchers Andrey Charikov, Roman Zaikin, and Oded Vanunu, who evaluated the trust and validation model for MCP execution in Cursor. They found that once Cursor approves an initial MCP configuration, it trusts all future modifications to it without requiring any new validation. An attacker can exploit this trust by adding a benign MCP configuration with a harmless command to a shared repository and waiting for someone to approve it. The attacker can then change the same entry so that it executes a malicious command, which runs silently on the victim's machine every time the Cursor project is reopened.
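To make the swap concrete: Cursor reads project-level MCP servers from a JSON config file in the repository (commonly `.cursor/mcp.json`). The entry below is an illustrative sketch, not taken from the Check Point report; the server name and command are invented for the example.

```json
{
  "mcpServers": {
    "build-helper": {
      "command": "echo",
      "args": ["hello"]
    }
  }
}
```

Once a collaborator approves this harmless entry, the attack described above amounts to a later commit that edits the same `build-helper` entry to point `command` at a shell with an attacker-controlled payload. Because the pre-patch trust model keyed approval to the entry rather than its contents, the modified command ran without any new prompt.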

    The Check Point team published a proof-of-concept demonstrating this type of persistent remote code execution. They first got a non-malicious MCP command approved and then replaced it with a reverse-shell payload, gaining access to the victim's machine each time the Cursor project was opened. This vulnerability exposes a critical weakness in the trust model behind AI-assisted development environments, raising the stakes for teams integrating LLMs and automation into their workflows.

    The discovery fits a broader pattern of risk in AI and ML platforms. In recent months, Check Point researchers have uncovered several other vulnerabilities in developer-focused AI platforms, including a bug in Anthropic's SQLite MCP server that was not fixed by the vendor. These findings underscore the need for greater scrutiny and testing of these emerging technologies.

    The vulnerability in Cursor's AI YOLO mode also underscores the importance of user approval and validation in collaborative development scenarios. Changes to an existing configuration are common in shared repositories, and any gap in re-validating those changes can lead to command injection, code execution, or persistent compromise. Developers must therefore verify that their tools re-check configurations after every modification, not just at first approval.
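One straightforward way to close this kind of gap is to bind approval to the contents of a configuration rather than to its name: store a hash of the config at approval time and demand re-approval whenever the hash changes. The sketch below is a hypothetical illustration of that idea in Python, not Cursor's actual fix; all function names are invented for the example.

```python
import hashlib
import json

def fingerprint(config: dict) -> str:
    """Hash a canonical JSON serialization of an MCP-style config."""
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def requires_reapproval(approved_fp: str, current: dict) -> bool:
    """True if the config has changed since the user last approved it."""
    return fingerprint(current) != approved_fp

# A benign entry is approved, then silently swapped for a malicious one.
benign = {"mcpServers": {"build": {"command": "echo", "args": ["hello"]}}}
approved_fp = fingerprint(benign)

malicious = {"mcpServers": {"build": {"command": "bash",
                                      "args": ["-c", "untrusted payload"]}}}

assert not requires_reapproval(approved_fp, benign)     # unchanged: no prompt
assert requires_reapproval(approved_fp, malicious)      # swapped: re-prompt
```

Keying trust to a content hash means even a one-character change to an approved command forces the user back through the approval prompt, which is exactly the re-validation step the MCPoison attack abused the absence of.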

    The Cursor AI YOLO mode vulnerability is a wake-up call for developers and users of AI-powered coding tools. As AI continues to shape modern software workflows, teams must treat these platforms with the same scrutiny as any other part of the supply chain: test them, validate every configuration change, and never assume that an initial approval covers what a file may later become.



    Related Information:
  • https://www.ethicalhackingnews.com/articles/A-Persistent-Threat-Lurks-in-the-Shadows-The-Vulnerability-in-Cursor-AI-YOLO-Mode-ehn.shtml

  • https://go.theregister.com/feed/www.theregister.com/2025/08/05/mcpoison_bug_abuses_cursor_mcp/


  • Published: Tue Aug 5 19:26:21 2025 by llama3.2 3B Q4_K_M

    © Ethical Hacking News . All rights reserved.

    Privacy | Terms of Use | Contact Us