Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

Faking It: How a Malicious Hugging Face Repository Impersonated OpenAI's "Privacy Filter" to Deliver Information-Stealing Malware




A fake OpenAI repository on Hugging Face used a bogus "Privacy Filter" project to deliver information-stealing malware to Windows users. The malicious repository briefly topped Hugging Face's trending list and accumulated over 244,000 downloads before being removed by the platform. Experts warn that such attacks are becoming increasingly common, highlighting the need for increased vigilance and security measures to protect AI model supply chains.

  • Malicious actors impersonated OpenAI on Hugging Face, creating a fake "Privacy Filter" repository.
  • The fake AI model's files included a loader script that executed infostealer malware designed to steal sensitive data.
  • The malware was able to evade detection systems due to anti-analysis features and targeted sensitive data from various platforms.
  • The incident highlights the risks of relying on open platforms for AI model sharing and underscores the need for increased vigilance and security measures.
  • Users who downloaded files from the malicious repository are advised to take immediate action to protect themselves.



  • The world of artificial intelligence (AI) and machine learning (ML) has become increasingly important in our daily lives. Platforms like Hugging Face provide developers and researchers with a space to share AI models, datasets, and tools. However, this openness also creates opportunities for malicious actors to abuse these platforms and spread malware. Recently, a fake OpenAI repository on Hugging Face made headlines after it was discovered impersonating OpenAI with a bogus "Privacy Filter" project.

    According to reports, researchers at HiddenLayer, a company focused on safeguarding AI and ML models against attacks, discovered the malicious repository named Open-OSS/privacy-filter on May 7. Upon closer inspection, they found that the repository's model card had been copied nearly verbatim and was padded with fake AI-related code designed to appear harmless. In the background, however, the loader.py Python script executed infostealer malware on Windows machines.
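Incidents like this argue for auditing a downloaded repository's Python files before importing or running anything from them. The sketch below is a hypothetical static check, not HiddenLayer's tooling; the pattern list is an illustrative assumption covering constructs that loaders like this commonly abuse (dynamic execution, process spawning, encoded payloads, unexpected network calls).

```python
# Hypothetical sketch: statically flag suspicious constructs in a downloaded
# model repo's Python files before executing anything from it.
# The pattern list is illustrative, not exhaustive.
import re
from pathlib import Path

SUSPICIOUS = [
    r"\bexec\s*\(", r"\beval\s*\(",       # dynamic code execution
    r"\bsubprocess\b", r"os\.system",     # spawning external processes
    r"base64\.b64decode",                 # often used to hide payloads
    r"urllib\.request|requests\.get",     # unexpected network access
]

def audit_repo(repo_dir: str) -> dict[str, list[str]]:
    """Return {file: [matched patterns]} for every .py file in the repo."""
    findings: dict[str, list[str]] = {}
    for py in Path(repo_dir).rglob("*.py"):
        text = py.read_text(errors="ignore")
        hits = [p for p in SUSPICIOUS if re.search(p, text)]
        if hits:
            findings[str(py)] = hits
    return findings
```

A hit is not proof of malice, only a signal that the file deserves manual review before the model is loaded.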

    The malicious loader.py file was designed with anti-analysis features such as checks for virtual machines, sandboxes, debuggers, and analysis tools, all aimed at evading detection systems. The repository briefly reached #1 on Hugging Face and accumulated 244,000 downloads before the platform responded to reports and removed it.

    The malware itself is a Rust-based infostealer designed to target sensitive data including browser data from Chromium- and Gecko-based browsers, Discord tokens, local databases, master keys, cryptocurrency wallets, SSH, FTP, VPN credentials, system information, multi-monitor screenshots, and more. The stolen data was compressed and exfiltrated to a command-and-control (C2) server located at recargapopular[.]com.

    The incident highlights the risks associated with relying on open platforms for AI model sharing. While such platforms are crucial for advancing AI research, they also create vulnerabilities that can be exploited by malicious actors. The discovery of this fake repository on Hugging Face underscores the need for increased vigilance and security measures to protect against such attacks.
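One practical mitigation is to pin artifacts by content rather than trusting a repository name or download count: record a known-good SHA-256 for each file and refuse to use anything that does not match. The snippet below is a minimal stdlib sketch; where the trusted digest comes from (a project's release notes, an internal allowlist) is an assumption left to the reader's environment.

```python
# Minimal sketch: verify a downloaded artifact against a pinned SHA-256
# before use. The source of the trusted digest is an assumption.
import hashlib

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file incrementally so large model files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Refuse to use the file unless its digest matches the pinned value."""
    return sha256_file(path) == expected_sha256.lower()
```

Content pinning would not have stopped a first download of this repository, but it does stop a once-vetted dependency from being silently swapped for a malicious one.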

    The vast majority of the 667 accounts that liked the malicious repository appear to be auto-generated, suggesting an organized campaign by threat actors. By examining repositories associated with those accounts, researchers also uncovered other campaigns using the same loader infrastructure. Overlaps with npm typosquatting campaigns distributing the Winos 4.0 implant further suggest a coordinated operation spanning multiple software ecosystems.
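The typosquatting angle lends itself to a simple defensive check: compare a package or repository name against a list of well-known names and flag near-misses. The sketch below uses a stdlib similarity ratio; the popular-name list and the 0.85 threshold are illustrative assumptions, not values from any registry's actual tooling.

```python
# Hypothetical typosquat check: flag names that closely resemble, but do not
# exactly match, a popular package name. List and threshold are illustrative.
from difflib import SequenceMatcher

POPULAR = {"requests", "numpy", "transformers", "openai"}

def likely_typosquat(name: str, threshold: float = 0.85) -> "str | None":
    """Return the popular name `name` most resembles, or None if it's clean."""
    name = name.lower()
    if name in POPULAR:
        return None  # an exact match is the real package, not a squat
    for known in POPULAR:
        if SequenceMatcher(None, name, known).ratio() >= threshold:
            return known
    return None
```

A real deployment would also check for character substitutions (rn vs m, 0 vs o) that a plain similarity ratio can miss.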

    Users who downloaded files from the malicious repository are advised to take immediate action, including reimaging their machines, rotating all stored credentials, replacing cryptocurrency wallets and seed phrases, invalidating browser sessions and tokens, and more.

    The discovery of this malicious Hugging Face repository serves as a stark reminder of the importance of security awareness and staying vigilant in today's digital landscape. As AI continues to advance at breakneck speed, so too must our defenses against malware and other cyber threats.



    Related Information:
  • https://www.ethicalhackingnews.com/articles/Faking-It-How-a-Malicious-Hugging-Face-Repository-Impersonated-OpenAIs-Privacy-Filter-to-Deliver-Information-Stealing-Malware-ehn.shtml

  • https://www.bleepingcomputer.com/news/security/fake-openai-repository-on-hugging-face-pushes-infostealer-malware/

  • https://www.hiddenlayer.com/insight/malware-found-in-trending-hugging-face-repository-open-oss-privacy-filter

  • https://www.breachsense.com/blog/infostealer-malware/

  • https://www.infosecurityeurope.com/en-gb/blog/threat-vectors/guide-infostealer-malware.html

  • https://apt.etda.or.th/cgi-bin/listgroups.cgi?t=Winos

  • https://research.checkpoint.com/2025/cracking-valleyrat-from-builder-secrets-to-kernel-rootkits/


  • Published: Sat May 9 10:29:11 2026 by llama3.2 3B Q4_K_M













    © Ethical Hacking News. All rights reserved.
