
Ethical Hacking News

The Dark Side of AI Code Generation: How LLMs are Sabotaging Software Supply Chains


AI code generation tools are hallucinating software packages that don't exist, and attackers can register those names and seed them with malware, putting entire systems at risk. As Large Language Models (LLMs) become more widely used, it's essential to address these risks head-on and ensure developers are aware of the pitfalls of relying on AI-powered code generation.

  • LLMs can hallucinate package names, suggesting software dependencies that don't exist; attackers squatting on those phantom names is a practice known as "slopsquatting".
  • The problem has significant consequences for the software supply chain, including failed installations, malware infections, and security breaches.
  • About 5.2% of package suggestions from commercial models didn't exist, compared with 21.7% of suggestions from open-source models.
  • Re-running the same hallucination-triggering prompt ten times resulted in 43% of hallucinated packages being repeated every time.
  • The issue is exacerbated by developers who copy and paste code suggestions without verifying their accuracy.
  • There's a need for greater accountability and transparency in AI-powered code generation, as well as education for developers on the potential pitfalls of relying on these tools.


  • The rise of Large Language Models (LLMs) has brought about a significant shift in how developers write software, but it also introduces new risks to the software supply chain. AI-powered code generation tools have become an indispensable aid for many developers, making it faster to produce working code. However, researchers have found that LLMs can hallucinate package names, suggesting software dependencies that don't exist. The practice of squatting on these hallucinated names has been dubbed "slopsquatting" by Seth Michael Larson, security developer-in-residence at the Python Software Foundation.

    The problem of LLM hallucinations has significant consequences for the software supply chain. When a developer tries to install a non-existent package, the result can range from a failed installation to a malware infection. The worst-case scenario is that a malicious actor spots a commonly hallucinated name, publishes a fake package under it with a convincing README and GitHub repository, and waits for an AI code assistant to re-hallucinate the same name. The result can be the installation of malware that compromises the security of the entire system.
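    A minimal defensive habit, sketched below in Python, is to check every suggested dependency against PyPI's public JSON API before running pip install. The package names and output format here are illustrative assumptions rather than a vetted tool, and existence alone is not proof of safety, since a slop-squatted package exists by design.

        # Minimal sketch: vet LLM-suggested package names against PyPI's JSON API
        # before installing. The suggested names are hypothetical stand-ins for an
        # assistant's output; the third-party "requests" library must be installed.
        import requests

        SUGGESTED = ["requests", "definitely-not-a-real-package-1234"]

        def pypi_metadata(name: str) -> dict | None:
            """Return PyPI metadata for a package, or None if it isn't published."""
            resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
            return resp.json() if resp.status_code == 200 else None

        for name in SUGGESTED:
            meta = pypi_metadata(name)
            if meta is None:
                print(f"[!] {name}: not on PyPI -- possible hallucination, do not install")
                continue
            # Existence alone proves little; also review release history, maintainers,
            # and the project's age before trusting the package.
            print(f"[+] {name}: {len(meta.get('releases', {}))} release(s), "
                  f"summary: {meta['info'].get('summary', '')!r}")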

    A recent study by security firm Socket found that about 5.2% of package suggestions from commercial models didn't exist, compared to 21.7% from open-source models. Another study discovered that re-running the same hallucination-triggering prompt ten times resulted in 43% of hallucinated packages being repeated every time, while 39% never reappeared. This bimodal pattern suggests that certain prompts reliably produce phantom packages.
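    To make the repeatability figure concrete, the short calculation below (with invented data) shows the bookkeeping such a measurement implies: collect the packages suggested across repeated runs of one prompt, subtract the names that really exist, and count how often each phantom name recurs. It mirrors the shape of the result, not the researchers' actual methodology.

        # Illustrative only: measure which hallucinated names recur in every re-run
        # of a prompt and which never come back. All suggestion data is made up.
        from collections import Counter

        runs = [
            {"fastjson-helpers", "requests", "numpy"},   # run 1 (invented output)
            {"fastjson-helpers", "numpy"},               # run 2
            {"fastjson-helpers", "requests"},            # run 3
            # ...the remaining runs would be collected the same way
        ]

        known_real = {"requests", "numpy"}               # stand-in for a real PyPI lookup
        hallucinated = set().union(*runs) - known_real

        appearances = Counter(n for run in runs for n in run if n in hallucinated)
        repeated_every_time = [n for n, c in appearances.items() if c == len(runs)]
        seen_only_once = [n for n, c in appearances.items() if c == 1]
        print("repeated in every run:", repeated_every_time)
        print("appeared only once:", seen_only_once)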

    Feross Aboukhadijeh, CEO of security firm Socket, notes that the problem is exacerbated by developers who copy and paste code suggestions without verifying their accuracy. "These code suggestions often include hallucinated package names that sound real but don't exist," he said. "I've seen this firsthand. You paste it into your terminal and the install fails – or worse, it doesn’t fail, because someone has slop-squatted that exact package name."

    Aboukhadijeh emphasizes the importance of users double-checking LLM outputs against reality before putting any information into operation. "We're in the very early days looking at this problem from an ecosystem level," he said. "It's difficult, and likely impossible, to quantify how many attempted installs are happening because of LLM hallucinations without more transparency from LLM providers."

    The issue is not limited to individual developers; it also has broader implications for the software industry as a whole. The rise of "vibe coding" – where developers prompt an AI tool, copy its suggestion, and move on without verifying its accuracy – highlights the need for greater accountability and transparency in AI-powered code generation.

    Seth Michael Larson notes that there are many reasons a developer might attempt to install a package that doesn't exist: mistyping the package name, attempting to install internal packages that aren't published to public indexes, mismatches between a distribution's install name and the module name it provides, and more.
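    The mismatch between install names and import names is easy to demonstrate. The sketch below assumes Python 3.10+ and uses arbitrary example modules; it maps import names back to the distributions that actually provide them, which is one way to look up an install name instead of guessing it (or letting an LLM guess it).

        # Minimal sketch of the install-name vs import-name mismatch (Python 3.10+).
        # The module names checked here are arbitrary examples.
        from importlib.metadata import packages_distributions

        mapping = packages_distributions()  # {import name: [distribution names]}

        for module in ("bs4", "PIL", "yaml"):
            dists = mapping.get(module)
            if dists:
                print(f"import {module!r} is provided by distribution(s) {dists}")
            else:
                print(f"{module!r} is not installed here -- look the name up on PyPI "
                      f"instead of trusting an LLM's guess at the install name")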

    As LLMs continue to shape the way developers write software, it's essential to address these risks head-on. This involves not only improving the accuracy of AI code generation tools but also educating developers about the potential pitfalls of relying on these tools.

    In conclusion, the phenomenon of LLM hallucinations poses significant challenges for the software supply chain. By understanding the causes and consequences of this issue, we can work towards creating a more resilient ecosystem that balances the benefits of AI-powered code generation with the need for human oversight and accountability.

    Related Information:
  • https://www.ethicalhackingnews.com/articles/The-Dark-Side-of-AI-Code-Generation-How-LLMs-are-Sabotaging-Software-Supply-Chains-ehn.shtml

  • https://go.theregister.com/feed/www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/


  • Published: Sat Apr 12 14:47:54 2025 by llama3.2 3B Q4_K_M
    © Ethical Hacking News. All rights reserved.