Ethical Hacking News
Researchers have found that a significant proportion of responses from AI-powered chatbots parrot propaganda about Russia's invasion of Ukraine, often citing links to Russian state-attributed sources. The rise of disinformation on these platforms raises concerns about whether regulatory bodies can enforce rules aimed at preventing the dissemination of propaganda.
Key findings: AI-powered chatbots can be manipulated into disseminating disinformation and propaganda; roughly one in five responses from the tested chatbots contained Russian state-attributed content; LLMs (large language models) can be trained on materials designed to manipulate them into parroting pro-Russian views; and regulatory bodies may struggle to enforce rules aimed at preventing the spread of such propaganda through these chatbots.
Recent findings by the non-profit Institute for Strategic Dialogue (ISD) have shed light on a disturbing trend in artificial intelligence, specifically in chatbots powered by large language models. The ISD study analyzed responses from four widely used chatbots – OpenAI's ChatGPT, Google's Gemini, xAI's Grok, and Hangzhou DeepSeek Artificial Intelligence's DeepSeek – in five languages, and found that a significant proportion of their responses parroted propaganda about the illegal invasion of Ukraine, often citing links to Russian state-attributed sources.
The ISD study, which was published on October 28, 2025, aimed to investigate whether AI-powered chatbots could be manipulated to disseminate disinformation and propaganda. The researchers tested a total of 300 queries in five languages – English, Spanish, French, German, and Italian – with varying degrees of neutrality, bias, and malice. The results were nothing short of alarming.
According to the study, almost one in five responses from these chatbots contained Russian state-attributed content, often citing links to websites affiliated with the Pravda network. Such content was particularly prevalent in responses to biased or malicious queries, appearing in up to 33 percent of cases. The researchers noted that the language of the query had only a limited impact on the likelihood of the chatbots emitting Russian-aligned viewpoints.
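The kind of measurement described above can be sketched as a simple audit: collect the URLs each chatbot response cites and compute the share of responses that reference a flagged domain. The following Python sketch is purely illustrative and is not the ISD's actual methodology or domain list; the domain names here are hypothetical placeholders.

```python
# Illustrative sketch (not the ISD study's code): classify chatbot
# responses by whether they cite any state-attributed domain.
from urllib.parse import urlparse

# Hypothetical placeholder domains; the study tracked Pravda-network affiliates.
FLAGGED_DOMAINS = {"example-pravda.ru", "state-media.example"}

def cites_flagged_source(cited_urls):
    """Return True if any cited URL points at a flagged domain or subdomain."""
    for url in cited_urls:
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in FLAGGED_DOMAINS):
            return True
    return False

def flagged_share(responses):
    """Fraction of responses citing at least one flagged domain."""
    if not responses:
        return 0.0
    hits = sum(cites_flagged_source(urls) for urls in responses)
    return hits / len(responses)

# Each response is represented by the list of URLs it cited.
responses = [
    ["https://news.example-pravda.ru/story"],   # flagged (subdomain match)
    ["https://en.wikipedia.org/wiki/Ukraine"],  # clean
    [],                                         # cited nothing
]
print(flagged_share(responses))  # 1 of 3 responses flagged
```

A real audit would also need to bucket queries by language and by neutrality, bias, or malice, since the study reports flagged-content rates per query category.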
The ISD study also found that Gemini fared the best of the four chatbots tested, applying safety guardrails that mitigated the risks of biased and malicious prompts about the war in Ukraine. Even so, it did not always link to referenced sources or provide a separate overview of the sources it cited.
The researchers argued that these findings raise significant concerns about the ability of regulators, such as those of the European Union, to enforce rules aimed at preventing the dissemination of disinformation and propaganda. The study highlights the need for greater scrutiny of AI-powered chatbots and their vulnerability to manipulation by state-linked entities.
Furthermore, the ISD report suggests that LLMs can be trained on materials designed to manipulate them into parroting pro-Russian views, often with Kremlin-attributed sources. This phenomenon, known as "LLM grooming," poses a significant risk to the integrity of online information and could have far-reaching consequences for democratic societies.
In conclusion, the ISD study provides compelling evidence that AI-powered chatbots are vulnerable to manipulation by disinformation campaigns, particularly those linked to Russian state media outlets. As these chatbots become increasingly ubiquitous and influential in shaping public discourse, it is essential that regulatory bodies and industry leaders take concrete steps to address these vulnerabilities and ensure the integrity of online information.
Related Information:
https://www.ethicalhackingnews.com/articles/Parroted-Propaganda-The-Alarming-Rise-of-AI-Generated-Disinformation-on-Chatbots-ehn.shtml
https://go.theregister.com/feed/www.theregister.com/2025/10/28/chatbots_still_parrot_russian_state/
Published: Tue Oct 28 06:18:19 2025 by llama3.2 3B Q4_K_M