Ethical Hacking News
OpenAI's ChatGPT has been collecting user queries without consent, raising concerns about data privacy and security in the age of AI. Can users trust AI chatbots with their personal information, or are they putting themselves at risk of data breaches and misuse?
OpenAI's ChatGPT records, and potentially stores, every question asked, and shared conversations could be indexed by search engines. OpenAI warned that sharing queries or saving them for later use would make them searchable on Google, but many users did not heed these warnings, and the company removed the discoverability option after complaints about users' questions being exposed without consent. OpenAI is also under a federal court order to preserve all user conversations from ChatGPT. Large Language Models can be used to exfiltrate data, and the more information you give an AI service, the more that information can be used against you. Meanwhile, Google's Gemini chatbot has introduced an update that automatically recalls key details from past chats, raising similar concerns about data privacy and security. All of this underscores the need for better safeguards to protect users' personal data and prevent the misuse of AI chatbots.
In a recent exposé, tech journalist Steven J. Vaughan-Nichols shed light on the dark side of AI chatbots like OpenAI's ChatGPT. The article highlights the unsettling reality that every question you ask an AI chatbot is being recorded and potentially stored for search engines to access. This raises important questions about data privacy and security in the age of AI.
According to Vaughan-Nichols, OpenAI explicitly warned users that sharing their queries with others or saving them for later use would make them searchable on Google. However, it appears that many users did not read these warnings or think through the implications. As a result, OpenAI removed the option to make searches discoverable after receiving numerous complaints about users' questions being shared without consent.
But this is not an isolated incident. The article also mentions that OpenAI is currently under a federal court order to preserve all user conversations from ChatGPT on its consumer-facing tiers. This means that even if you think you've deleted your queries, they may still be stored and potentially resurfaced in a Google or AI search.
This raises concerns about the potential for data breaches and the misuse of personal information. Large Language Models (LLMs) like OpenAI's ChatGPT can be used to exfiltrate data as effectively as a malicious company insider. The more data you give any AI service, the more of that information can potentially be used against you.
The article also touches on the issue of AI safety guidelines and their implementation. Google has begun rolling out an update to its Gemini AI chatbot, which enables it to automatically remember key details from past chats. While this feature may seem helpful, it also raises concerns about data privacy and security.
For instance, if a user asks about "dog treats" in one session but about "3D-printed guns" in another, the AI may recall and link the two queries, resurfacing an earlier question in a context where it is sensitive or embarrassing. This highlights the need for better safeguards to protect users' personal data and prevent the misuse of AI chatbots.
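The privacy risk of cross-session recall can be sketched in a few lines. The code below is a hypothetical illustration, not Gemini's or ChatGPT's actual implementation: every query is retained indefinitely, so any later session can surface an old one that shares even a single keyword.

```python
# Minimal sketch of cross-session "memory" recall (hypothetical; all names
# are invented for illustration). The point: once queries are stored, any
# future session can resurface them, whether or not the user expects it.

class ChatMemory:
    def __init__(self):
        self._history = []  # every query ever asked is retained

    def ask(self, query):
        # Recall any earlier query that shares a keyword with the new one
        new_words = set(query.lower().split())
        recalled = [q for q in self._history
                    if new_words & set(q.lower().split())]
        self._history.append(query)
        return recalled

mem = ChatMemory()
mem.ask("best dog treats")
# In a later, unrelated session, the old query still surfaces via "dog":
print(mem.ask("are dog whistles audible to humans"))  # → ['best dog treats']
```

Even this toy version shows why deletion and session isolation matter: the linkage between queries, not any single query, is what builds a profile.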
In conclusion, OpenAI's ChatGPT has exposed a darker side of AI chatbots: the collection and storage of personal data without consent. As we continue to rely on these technologies, it is essential that we prioritize data privacy and security to ensure that our information remains protected.
Related Information:
https://www.ethicalhackingnews.com/articles/OpenAIs-ChatGPT-and-the-Unsettling-Reality-of-Personal-Data-Collection-ehn.shtml
https://go.theregister.com/feed/www.theregister.com/2025/08/18/opinion_column_ai_surveillance/
Published: Mon Aug 18 06:48:25 2025 by llama3.2 3B Q4_K_M