Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

Microsoft Warns of AI Recommendation Poisoning: A Growing Threat to Trust and Security


Microsoft warns of a growing threat to trust and security in AI systems known as "AI Recommendation Poisoning," where malicious attacks manipulate AI assistants to produce biased advice. The company's latest security warning highlights the need for better safeguards against these emerging risks.

  • MICROSOFT DETECTED A SURGE IN "AI RECOMMENDATION POISONING" ATTACKS
  • The attacks exploit vulnerabilities in AI assistants to produce manipulated recommendations.
  • These malicious instructions were embedded into "Summarize with AI" buttons and links on websites.
  • The attack vector relies on query parameters with manipulative prompt text in URLs.
  • MICROSOFT RECOMMENDS CUSTOMERS BE CAUTIOUS WITH AI-RELATED LINKS AND REVIEW AI ASSISTANT MEMORIES
  • THE DISCOVERY HIGHLIGHTS THE GROWING NEED FOR BETTER SAFEGUARDS AGAINST AI-RELATED THREATS.



  • Microsoft's latest security warning highlights a concerning trend in the rapidly evolving landscape of artificial intelligence (AI) threats. In a recent blog post, the company's Defender Security Team revealed that it has detected a surge in attacks designed to manipulate AI models with biased advice, a technique known as "AI Recommendation Poisoning." This sophisticated threat, eerily reminiscent of SEO poisoning, exploits vulnerabilities in AI assistants to produce manipulated recommendations, eroding users' trust in these services.

    The discovery was made by Microsoft's security researchers, who identified over 50 unique prompts from 31 companies across 14 industries. These malicious instructions were embedded into "Summarize with AI" buttons and links placed on websites, often through URLs that included query parameters with manipulative prompt text. The effectiveness of these techniques can vary over time as platforms alter website behavior and implement protections.

    The attack vector relies on the fact that some URLs that point to AI chatbots include a query parameter with a manipulative prompt text. For example, The Register entered a link with URL-encoded text into Firefox's omnibox that told Perplexity AI to summarize a CNBC article as if it were written by a pirate. The AI service returned a pirate-speak summary, citing the article and other sources.

    Microsoft's researchers warn that this technique can be used to manipulate AI assistants in various ways, including producing biased recommendations on critical topics such as health, finance, and security without users realizing their AI has been compromised. Users may not take the time to verify AI recommendations, and confident-sounding assertions by AI models make it more likely for them to accept manipulated advice.

    The risk of AI Recommendation Poisoning is particularly insidious because users may not even realize their AI has been compromised. Even if they suspect something is wrong, they wouldn't know how to check or fix it. The manipulation is invisible and persistent.

    To mitigate this threat, Microsoft recommends that customers be cautious with AI-related links and check where they lead. They should also review the stored memories of AI assistants, delete unfamiliar entries, clear memory periodically, and question dubious recommendations. Additionally, corporate security teams are advised to scan for AI Recommendation Poisoning attempts in tenant email and messaging applications.

    The discovery of this technique highlights the growing need for better safeguards against AI-related threats. As AI technology continues to advance at a rapid pace, it is essential that we prioritize awareness and education about these emerging risks. By taking proactive steps to protect ourselves and our organizations from AI Recommendation Poisoning, we can ensure that these powerful tools are used responsibly and securely.

    In light of this warning, Microsoft urges customers to remain vigilant when interacting with AI-powered services and to report any suspicious activity to the relevant authorities. By working together, we can create a safer digital landscape for everyone.



    Related Information:
  • https://www.ethicalhackingnews.com/articles/Microsoft-Warns-of-AI-Recommendation-Poisoning-A-Growing-Threat-to-Trust-and-Security-ehn.shtml

  • https://go.theregister.com/feed/www.theregister.com/2026/02/12/microsoft_ai_recommendation_poisoning/

  • https://www.msn.com/en-us/technology/artificial-intelligence/microsoft-warns-that-poisoned-ai-buttons-and-links-may-betray-your-trust/ar-AA1WaK0T

  • https://www.theregister.com/2026/02/12/microsoft_ai_recommendation_poisoning/


  • Published: Wed Feb 18 00:37:46 2026 by llama3.2 3B Q4_K_M













    © Ethical Hacking News . All rights reserved.

    Privacy | Terms of Use | Contact Us