Ethical Hacking News
A previously unknown vulnerability in ChatGPT allows data to be smuggled out through DNS, raising concerns about the security of AI services deployed in regulated industries. OpenAI has since patched this issue, but it highlights the need for ongoing vigilance and stringent monitoring of AI systems.
A vulnerability was discovered in OpenAI's ChatGPT AI service that allows data to be smuggled out through the Domain Name System (DNS) side channel. The discovery was made by a team of security researchers who created proof-of-concept attacks demonstrating how this vulnerability might be exploited. The breach highlights the potential risks associated with relying on AI services in critical sectors such as healthcare and finance. OpenAI's security controls did not extend to data transmitted through DNS, making it vulnerable to exploitation. The incident emphasizes the need for thorough testing of AI services for security vulnerabilities before deployment.
In a recent revelation, researchers from Check Point have exposed a previously unknown vulnerability in OpenAI's ChatGPT AI service that allows for data to be smuggled out through the Domain Name System (DNS) side channel. This finding raises serious concerns about the security of AI services deployed in regulated industries.
The discovery was made by a team of security researchers who created three proof-of-concept attacks demonstrating how the vulnerability might be exploited. One of these attacks involved a malicious "GPT" application posing as a personal health analyst, which exfiltrated sensitive data to a remote server controlled by the attacker. This breach highlights the potential risks of relying on AI services in critical sectors such as healthcare and finance.
According to Check Point, the vulnerability existed because OpenAI's security controls did not extend to data transmitted through DNS. While the company states that ChatGPT cannot communicate with the internet without authorization, DNS name resolution falls outside those controls, so data can still be smuggled out through lookups of attacker-controlled domains.
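To make the mechanism concrete, here is a minimal sketch of how a DNS side channel typically carries data: the payload is encoded into the subdomain labels of query names under a domain the attacker controls, so merely resolving the names leaks the data to the attacker's authoritative nameserver. The domain name and chunking scheme below are illustrative assumptions, not details of Check Point's actual proof of concept; no network traffic is generated.

```python
import base64

# Hypothetical attacker-controlled domain, for illustration only.
EXFIL_DOMAIN = "attacker.example"

MAX_LABEL = 63  # DNS limit on the length of a single label


def encode_chunks(data: bytes) -> list[str]:
    """Encode data as DNS query names: base32 payload split into labels.

    Each query name looks like <chunk>.<seq>.attacker.example; resolving
    it would leak the chunk to the attacker's authoritative nameserver.
    """
    payload = base64.b32encode(data).decode().rstrip("=").lower()
    labels = [payload[i:i + MAX_LABEL] for i in range(0, len(payload), MAX_LABEL)]
    return [f"{label}.{seq}.{EXFIL_DOMAIN}" for seq, label in enumerate(labels)]


def decode_chunks(queries: list[str]) -> bytes:
    """Reassemble the payload from observed query names (attacker side)."""
    parts = sorted((int(q.split(".")[1]), q.split(".")[0]) for q in queries)
    payload = "".join(label for _, label in parts).upper()
    payload += "=" * (-len(payload) % 8)  # restore base32 padding
    return base64.b32decode(payload)
```

Because DNS queries pass through recursive resolvers that an application rarely inspects, this channel survives even when direct HTTP egress is blocked, which is exactly the gap the researchers describe.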
The researchers pointed out that OpenAI's approach to securing ChatGPT is geared toward defending against bots scraping conversations and protecting its own content. Their findings suggest, however, that a single DNS vulnerability could be used to bypass these safeguards and transmit sensitive information.
This incident serves as a stark reminder of the importance of thoroughly testing AI services for security vulnerabilities before deployment. OpenAI has since patched this issue on February 20, 2026, but it underscores the need for ongoing vigilance and stringent monitoring of AI systems in regulated industries.
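As one form the "stringent monitoring" mentioned above could take, defenders often screen outbound DNS query names for the telltale signs of exfiltration: unusually long or high-entropy labels. The thresholds below are illustrative assumptions, not values from the article, and a real deployment would tune them against baseline traffic.

```python
import math
from collections import Counter


def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def looks_like_exfil(qname: str, max_label: int = 30,
                     entropy_cutoff: float = 3.5) -> bool:
    """Flag DNS query names whose subdomain labels look like encoded data.

    Ignores the registrable domain (last two labels) and checks the rest
    for excessive length or randomness, both common in DNS tunneling.
    """
    labels = qname.rstrip(".").split(".")[:-2]
    return any(
        len(label) > max_label
        or (len(label) >= 12 and shannon_entropy(label) > entropy_cutoff)
        for label in labels
    )
```

A heuristic like this would run on resolver logs rather than inside the AI service itself, which matters here: the whole point of the flaw is that the service's own controls never see the DNS traffic.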
Related Information:
https://www.ethicalhackingnews.com/articles/Exposing-the-Dark-Side-of-AI-The-DNS-Data-Smuggling-Flaw-in-ChatGPT-ehn.shtml
https://www.theregister.com/2026/03/30/openai_chatgpt_dns_data_snuggling_flaw/
https://thehackernews.com/2026/03/openai-patches-chatgpt-data.html
Published: Mon Mar 30 17:17:56 2026 by llama3.2 3B Q4_K_M