Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

The Dark Side of AI: How Chinese Agents Exploited ChatGPT for Covert Operations



Chinese agents have been using OpenAI's ChatGPT chatbot for covert operations against opponents, including smear campaigns targeting high-profile individuals such as Japan's first female prime minister Sanae Takaichi. The malicious activities were detected in mid-October 2025 and included generating status reports on operations targeting Chinese dissidents and CCP critics.

  • Chinese agents used ChatGPT for covert operations against opponents, including smear campaigns and targeting critics of the Chinese Communist Party.
  • In mid-October 2025, a user with links to Chinese law enforcement attempted to use ChatGPT to plan a smear campaign against Takaichi.
  • ChatGPT refused to cooperate, and the user allegedly turned to other companies' models to carry out the campaign.
  • The malicious activity included generating status reports on operations targeting Chinese dissidents and CCP critics, including Takaichi.
  • The Chinese agents exerted social and psychological pressure to silence critics, such as hacking livestreams and reporting social media accounts for phony violations.
  • OpenAI has taken steps to address the malicious activities, banning suspected Chinese accounts using ChatGPT for such operations.
  • Similar campaigns, part of the long-running "Spamouflage" influence operation, have been attributed in previous years to individuals connected to Chinese law enforcement.



  • OpenAI has released a report detailing how suspected Chinese agents used its popular chatbot ChatGPT to plan covert operations against opponents, including smear campaigns targeting high-profile individuals such as Japan's first female prime minister, Sanae Takaichi. The report highlights the use of AI technology by Chinese agents to silence critics of the Chinese Communist Party (CCP) both domestically and internationally.

    The malicious activity in question began in mid-October 2025, when a user with links to Chinese law enforcement attempted to use ChatGPT to plan a smear campaign against Takaichi. The user prompted ChatGPT to draft negative comments about the prime minister for posting on social media, and to create fake email accounts purporting to belong to foreign residents in order to send complaints to other Japanese politicians. When ChatGPT refused to cooperate, the user allegedly turned to other companies' models to carry out the campaign.

    Further analysis revealed that the malicious activity also included generating status reports on operations targeting Chinese dissidents and CCP critics, among them the covert operation against Takaichi. The report on that operation followed the structure of the original draft plan to discredit the Japanese politician, focusing on five themes: negative comments, immigration, living conditions, far-right links, and tariffs.

    The malicious activity also included exerting social and psychological pressure to silence critics, such as targeting dissidents' mental health and their families, hacking their livestreams, and reporting their social media accounts for phony violations. In one notable incident, the user created a fake obituary and gravestone photos claiming that dissident Jie Lijian had died before mass posting these messages online.

    This activity echoes the earlier China-based influence operation dubbed "Spamouflage" by research teams, which Meta's August 2023 threat report attributed to individuals connected to Chinese law enforcement. The fact that the user's ChatGPT inputs included hashtags and references to fake social media accounts indicates wider cross-internet activity.

    OpenAI has taken steps to address these activities, banning the suspected Chinese accounts that used ChatGPT for such operations. However, concerns remain about the potential misuse of AI technology by hostile actors, particularly given that the same prompts can be taken to models with weaker safeguards.

    The report highlights the need for greater awareness and vigilance in the use of AI technology by governments and individuals alike. As AI becomes increasingly pervasive, understanding both its capabilities and its limitations is essential to preventing similar incidents in the future.

    In conclusion, the case of Chinese agents exploiting ChatGPT for covert operations serves as a stark reminder of the dark side of AI. While AI has the potential to bring about numerous benefits, it also poses significant risks when used maliciously by hostile actors.



    Related Information:
  • https://www.ethicalhackingnews.com/articles/The-Dark-Side-of-AI-How-Chinese-Agents-Exploited-ChatGPT-for-Covert-Operations-ehn.shtml

  • https://www.theregister.com/2026/02/25/chinese_law_enforcement_chatgpt_abuse/

  • https://www.msn.com/en-us/technology/artificial-intelligence/suspected-chinese-government-operatives-used-chatgpt-to-shape-mass-surveillance-proposals-openai-says/ar-AA1O0Euh


  • Published: Wed Feb 25 05:31:41 2026 by llama3.2 3B Q4_K_M

    © Ethical Hacking News . All rights reserved.

    Privacy | Terms of Use | Contact Us