Ethical Hacking News
Sanctioned Russian propaganda is spreading through popular AI-powered chatbots in Europe, raising concerns about these platforms' ability to restrict access to sanctioned media sources. Exploitation of chatbots by malicious actors poses a significant threat to fundamental rights, public security, and well-being, and as these tools continue to grow in popularity, it is essential that their providers respond with robust safeguards and responsible design practices.
Popular chatbots are proving vulnerable to spreading sanctioned Russian propaganda, a significant concern for European regulators. Almost one-fifth of responses in a recent test cited Russian state-attributed sources, and the more biased or malicious the query, the more often such material appeared. Across all prompts and languages, roughly 18 percent of results linked to state-funded Russian media or disinformation networks.
The use of artificial intelligence (AI) chatbots has become increasingly popular among users, particularly in Europe. However, a new study by the Institute for Strategic Dialogue (ISD) has revealed that these chatbots are vulnerable to spreading sanctioned Russian propaganda, posing significant concerns for European regulators.
The ISD researchers tested four popular chatbots - ChatGPT, Gemini, DeepSeek, and Grok - on 300 neutral, biased, and "malicious" questions relating to the perception of NATO, peace talks, Ukraine's military recruitment, Ukrainian refugees, and war crimes committed during the Russian invasion of Ukraine. The results showed that almost one-fifth of responses to these queries cited Russian state-attributed sources, highlighting a significant issue with chatbots referencing sanctioned media in the EU.
Pablo Maristany de las Casas, the ISD analyst who led the research, said: "It raises questions regarding how chatbots should deal when referencing these sources, considering many of them are sanctioned in the EU."
The researchers also found that the more biased or malicious the query, the more frequently the chatbots delivered Russian state-attributed information, echoing experts' warnings that AI-powered chatbots can be exploited by malicious actors to spread false and misleading information.
One of the most concerning aspects of the study is the widespread citation of sanctioned media by these chatbots. Across all prompts, languages, and LLMs tested, the ISD found that around 18 percent of responses returned results linked to state-funded Russian media, sites "linked to" Russia's intelligence agencies, or disinformation networks.
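To make the 18 percent figure concrete, the following is a minimal sketch of how one might tally the share of chatbot responses that cite state-attributed domains across prompts, languages, and models. This is not the ISD's actual methodology; the domain list, record fields, and function names are illustrative placeholders.

```python
# Minimal sketch (not the ISD's methodology): compute the share of responses
# whose cited URLs resolve to domains on a state-attributed watchlist.
from urllib.parse import urlparse

# Placeholder watchlist for illustration only.
STATE_ATTRIBUTED_DOMAINS = {"rt.com", "sputnikglobe.com"}

def cites_state_media(cited_urls):
    """Return True if any cited URL belongs to a watchlisted domain."""
    for url in cited_urls:
        host = urlparse(url).netloc.lower().removeprefix("www.")
        if host in STATE_ATTRIBUTED_DOMAINS:
            return True
    return False

def flagged_share(responses):
    """responses: iterable of dicts such as
    {"model": "...", "language": "...", "prompt_type": "...", "cited_urls": [...]}.
    Returns the fraction of responses citing at least one watchlisted domain."""
    responses = list(responses)
    if not responses:
        return 0.0
    flagged = sum(cites_state_media(r["cited_urls"]) for r in responses)
    return flagged / len(responses)

# Toy example: two of three responses cite a watchlisted source (~67%).
sample = [
    {"model": "A", "language": "en", "prompt_type": "malicious",
     "cited_urls": ["https://rt.com/some-article"]},
    {"model": "B", "language": "de", "prompt_type": "neutral",
     "cited_urls": ["https://example.org/report"]},
    {"model": "A", "language": "fr", "prompt_type": "biased",
     "cited_urls": ["https://www.sputnikglobe.com/item"]},
]
print(f"Flagged share: {flagged_share(sample):.2%}")
```

Grouping the same tally by prompt type (neutral, biased, malicious) would surface the pattern the researchers describe, with malicious prompts drawing the highest share of state-attributed citations.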
This issue has significant implications for European regulators, who are already grappling with the challenges of regulating online content. The study suggests that chatbots may come under more pressure from EU regulators as their user bases grow, particularly if they reach the threshold of 45 million average monthly users in the EU, which triggers specific rules tackling the risk of illegal content and its impact on fundamental rights, public security, and well-being.
In response to these concerns, some experts have called for chatbots to be designed with built-in safeguards to prevent the spread of propaganda. This could include implementing filters to block access to sanctioned media sources or incorporating more robust fact-checking mechanisms into their algorithms.
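As a rough illustration of the filtering idea, the sketch below drops citations whose domains appear on a sanctioned-source blocklist before a response is shown, and sets aside anything removed for review. This is not any provider's actual safeguard; the blocklist and helper names are assumptions made for the example.

```python
# Minimal sketch, not a real provider's safeguard: screen cited sources
# against a sanctioned-domain blocklist before a response is returned.
from urllib.parse import urlparse

# Illustrative placeholder blocklist.
SANCTIONED_DOMAINS = {"rt.com", "sputnikglobe.com"}

def filter_citations(citations):
    """Split citations into (allowed, blocked) based on the blocklist,
    matching both exact domains and their subdomains."""
    allowed, blocked = [], []
    suffixes = tuple("." + d for d in SANCTIONED_DOMAINS)
    for url in citations:
        host = urlparse(url).netloc.lower().removeprefix("www.")
        if host in SANCTIONED_DOMAINS or host.endswith(suffixes):
            blocked.append(url)
        else:
            allowed.append(url)
    return allowed, blocked

allowed, blocked = filter_citations([
    "https://www.rt.com/news/story",
    "https://example.org/analysis",
])
print("kept:", allowed)
print("blocked for review:", blocked)
```

A static blocklist like this is only a first line of defence: sanctioned outlets routinely resurface under mirror domains, which is why the experts quoted here also call for fact-checking and provenance signals rather than domain matching alone.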
Another potential solution is for chatbot providers to take responsibility for ensuring that their platforms do not facilitate the spread of propaganda. OpenAI, the company behind ChatGPT, has already taken steps to prevent its platform from being used to spread false or misleading information, but experts argue that more needs to be done to address the issue comprehensively.
The study also raises questions about the role of AI in perpetuating "data voids" - areas where reliable information is scarce or hard to find. This can create an environment where malicious actors can thrive by flooding these gaps with false and misleading information.
In conclusion, the spread of sanctioned Russian propaganda through chatbots poses a significant concern for European regulators and users alike. As chatbots become increasingly popular, their providers will need to address the problem through robust safeguards and responsible design practices.
Related Information:
https://www.ethicalhackingnews.com/articles/Sanctioned-Russian-Propaganda-Spreads-Through-Chatbots-A-Growing-Concern-for-European-Regulators-ehn.shtml
https://www.wired.com/story/chatbots-are-pushing-sanctioned-russian-propaganda/
Published: Mon Oct 27 10:10:14 2025 by llama3.2 3B Q4_K_M