

Ethical Hacking News

AI-Powered Chatbots: The Unintended Consequences of Error-Prone Recommenders


AI-powered chatbots are increasingly used to assist users, but a recent study has revealed that they can lead users astray. With GPT-4.1 family models returning the correct URL for major brands only about 66% of the time, these chatbots have become a new tool for scammers, who can register the domains the models hallucinate and use them to trick users into divulging sensitive information.

  • A recent study by Netcraft found that GPT-4.1 family models returned the correct URL for major brands only about 66% of the time when prompted with specific queries.
  • 29% of the URLs suggested by these chatbots pointed to dead or suspended sites, while a further 5% led users to legitimate sites other than the one requested.
  • The failures are attributed to how these models work: they predict plausible words and associations rather than evaluating website credibility.
  • This flaw has significant implications for phishing, as scammers can register the hallucinated domains chatbots recommend and use them to harvest credentials or distribute poisoned code.
  • Netcraft's study highlights the need for developers and researchers to address issues related to AI-powered chatbot accuracy, particularly in cybersecurity contexts.



  • Artificial intelligence (AI) has become an integral part of daily life, and AI-powered chatbots now answer users' questions through a range of interfaces, including voice assistants such as Alexa and Google Assistant. A recent study by Netcraft, a threat intelligence business, has revealed that these chatbots can sometimes lead users astray.

    The study found that when the GPT-4.1 family of models was asked for the websites of major companies such as Wells Fargo or Visa, it returned the correct URL only about 66 percent of the time. Of the suggested URLs, 29 percent pointed to dead or suspended sites, while a further five percent led users to legitimate sites that were not the ones they requested.

    This behavior stems from how the models work: they are trained to predict plausible words and associations rather than to evaluate the credibility, or even the existence, of websites. As a result, they can confidently produce incorrect URLs when asked for a specific company's official site.

    The implications of this flaw are significant, particularly for phishing. Phishers, who have long gamed search engine results to trick users into divulging sensitive information, have discovered that AI-powered chatbots can be exploited the same way: by asking a chatbot for a company's URL, spotting answers that point to unregistered domains, then buying those domains and setting up phishing sites that mimic the legitimate ones.
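    To make that audit concrete, here is a minimal defensive sketch in Python. It assumes the official openai client with an API key configured, and the brand list is a hypothetical sample rather than Netcraft's actual test set: it asks the model for each brand's login URL and flags any suggestion whose domain does not resolve in DNS, since an unregistered domain is exactly what a phisher could buy first.

        # Sketch: audit a model's URL recommendations for hallucinated domains.
        # Assumes the `openai` package is installed and OPENAI_API_KEY is set.
        import socket
        from urllib.parse import urlparse

        from openai import OpenAI

        client = OpenAI()
        BRANDS = ["Wells Fargo", "Visa"]  # hypothetical sample, not the study's list

        def suggested_login_url(brand: str) -> str:
            """Ask the model for a brand's login URL and return its raw answer."""
            resp = client.chat.completions.create(
                model="gpt-4.1",
                messages=[{
                    "role": "user",
                    "content": f"What is the official login URL for {brand}? "
                               "Reply with the URL only.",
                }],
            )
            return resp.choices[0].message.content.strip()

        def domain_resolves(url: str) -> bool:
            """Return True if the URL's hostname has a DNS record."""
            host = urlparse(url).hostname
            if not host:
                return False
            try:
                socket.gethostbyname(host)
                return True
            except socket.gaierror:
                return False

        for brand in BRANDS:
            url = suggested_login_url(brand)
            note = "resolves" if domain_resolves(url) else "UNRESOLVED: investigate"
            print(f"{brand}: {url} ({note})")

    A domain that fails to resolve is not proof of a hallucination, and a domain that resolves is not proof of legitimacy, but the check is a cheap first filter for the unregistered recommendations Netcraft describes.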

    Netcraft's researchers have spotted this kind of attack in the wild, including a campaign built around the Solana blockchain API. Scammers created fake interfaces and seeded tutorials and repositories with poisoned code, carefully crafted so that it would surface in chatbot-generated recommendations and tempt developers into using it. The study argues that growing reliance on AI-powered chatbots is opening a new front for phishing.

    Rob Duncan, Netcraft's lead of threat research, noted that "it's actually quite similar to some of the supply chain attacks we've seen before... It's a little bit different, because you're trying to trick somebody who's doing some vibe coding into using the wrong API." In other words, developers who lean on AI-generated code can be steered toward an attacker-controlled API without ever visiting a phishing page.
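    On the developer side, one practical mitigation is to pin API hosts to an allowlist taken from official documentation rather than trusting endpoints found in generated code or tutorials. The Python sketch below illustrates the idea; the allowlisted host is an assumption for illustration and should be verified against the provider's own documentation.

        # Sketch: refuse API calls to hosts that are not on a vetted allowlist.
        from urllib.parse import urlparse

        # Assumed-official endpoint for illustration; verify independently.
        ALLOWED_API_HOSTS = {"api.mainnet-beta.solana.com"}

        def checked_endpoint(url: str) -> str:
            """Return the URL unchanged if its host is allowlisted, else raise."""
            host = urlparse(url).hostname or ""
            if host not in ALLOWED_API_HOSTS:
                raise ValueError(f"Refusing unlisted API host: {host!r}")
            return url

        # A lookalike host copied from a poisoned tutorial is rejected outright.
        try:
            checked_endpoint("https://solana-api-mainnet.example/rpc")
        except ValueError as err:
            print(err)

    The design choice is deliberate: a hard failure on an unknown host turns a silent supply chain compromise into a visible error during development.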

    The study serves as a reminder that while AI-powered chatbots can be incredibly useful tools, their limitations should not be underestimated. As these technologies continue to advance, developers and researchers will need to address the accuracy of the URLs and code these systems recommend.

    In conclusion, Netcraft's findings highlight the unintended consequences of relying on error-prone AI recommenders. Those errors have serious implications for cybersecurity, and for phishing in particular, underscoring the need for solutions that prioritize accuracy over speed and novelty.



    Related Information:
  • https://www.ethicalhackingnews.com/articles/AI-Powered-Chatbots-The-Unintended-Consequences-of-Error-Prone-Recommenders-ehn.shtml

  • https://www.theregister.com/2025/07/03/ai_phishing_websites/

  • https://www.msn.com/en-us/technology/artificial-intelligence/chatgpt-creates-phisher-s-paradise-by-recommending-the-wrong-urls-for-major-companies/ar-AA1HSnZ6


  • Published: Thu Jul 3 02:20:28 2025 by llama3.2 3B Q4_K_M

    © Ethical Hacking News. All rights reserved.

    Privacy | Terms of Use | Contact Us