Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

Vulnerabilities in Popular AI/ML Python Libraries: A Growing Concern for Data Security


Researchers have identified vulnerabilities in popular AI and ML Python libraries used in Hugging Face models, allowing remote attackers to execute arbitrary code. The affected libraries, including NeMo, Uni2TS, and FlexTok, use Hydra, a Python library maintained by Meta, which is vulnerable to remote code execution due to its instantiate() function.

  • Vulnerabilities in popular AI and ML libraries allow maliciously crafted model metadata to trigger arbitrary code execution on remote systems.
  • The affected libraries include NeMo, Uni2TS, and FlexTok, all of which use Hydra, a Python library maintained by Meta.
  • The vulnerability is due to Hydra's instantiate() function accepting callable objects as arguments, allowing attackers to execute built-in Python functions like eval() and os.system().
  • Nvidia has fixed the bug in NeMo version 2.3.2, but other libraries remain vulnerable.
  • The risk of exploitation is high due to inadequate safeguards against malicious metadata, allowing users to inadvertently load poisoned models.
  • The libraries are often integrated with popular AI frameworks and tools, increasing the attack surface.
  • Users must prioritize data security to prevent potential breaches as more individuals and organizations adopt these technologies.



  • Vulnerabilities have been identified in several popular Artificial Intelligence (AI) and Machine Learning (ML) libraries that allow malicious actors to plant poisoned model metadata and execute arbitrary code on remote systems. The flaws, discovered by Palo Alto Networks' Unit 42 security researchers, affect multiple libraries used in Hugging Face models with tens of millions of downloads.

    The affected libraries include NeMo, Uni2TS, and FlexTok, all of which use Hydra, a Python library maintained by Meta. Hydra's instantiate() function is particularly susceptible to remote code execution (RCE) because it resolves an attacker-controllable target path in the config to any importable callable and invokes it. This allows an attacker to execute built-in Python functions like eval() and os.system(), potentially leading to the compromise of sensitive data.
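    The underlying pattern is easy to see in a simplified sketch. The following is a stdlib-only re-implementation of the general config-driven instantiation idiom, not Hydra's actual code; the function name naive_instantiate() is hypothetical:

```python
import importlib

def naive_instantiate(cfg: dict):
    """Simplified sketch of config-driven instantiation (the pattern Hydra's
    instantiate() follows). It resolves a dotted path taken from untrusted
    metadata and calls it -- which is why attacker-controlled configs can
    reach functions like eval() or os.system()."""
    target = cfg["_target_"]                      # e.g. "builtins.eval"
    module_path, _, attr = target.rpartition(".")
    fn = getattr(importlib.import_module(module_path), attr)
    return fn(*cfg.get("_args_", []))

# A benign config resolves and calls builtins.len:
print(naive_instantiate({"_target_": "builtins.len", "_args_": [[1, 2, 3]]}))

# A poisoned model config could instead name builtins.eval or os.system:
print(naive_instantiate({"_target_": "builtins.eval", "_args_": ["6 * 7"]}))
```

    Nothing in the sketch restricts which callable the config may name, so any code path that feeds untrusted model metadata into such a function becomes an RCE primitive.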

    Nvidia issued CVE-2025-23304 to track a high-severity bug in NeMo, which has since been fixed in version 2.3.2. However, the vulnerability remains present in other libraries, including Uni2TS and FlexTok. The risk of exploitation is further exacerbated by the fact that Hugging Face does not provide adequate safeguards against malicious metadata, allowing users to inadvertently load poisoned models.

    Salesforce models using Uni2TS have hundreds of thousands of downloads on Hugging Face, while more than 700 NeMo-based models have been published there by various developers. Furthermore, these libraries are often integrated with popular AI frameworks and tools, increasing the attack surface.

    The implications of this vulnerability are significant, as it highlights the need for greater awareness and vigilance among users of AI/ML libraries. As more individuals and organizations adopt these technologies, they must prioritize data security to prevent potential breaches.

    In response to this vulnerability, Meta has updated Hydra's documentation with a warning about RCE vulnerabilities and advised users to implement a block-list mechanism to detect potentially malicious metadata. However, the availability of such mechanisms in production releases remains limited.
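    A block-list check of this kind can be sketched as follows. This is an illustrative example only, not Meta's actual mechanism; the DENYLIST contents and the function name is_config_safe() are assumptions:

```python
# Hypothetical deny-list of dangerous instantiation targets (illustrative).
DENYLIST = {
    "builtins.eval", "builtins.exec", "builtins.compile",
    "os.system", "subprocess.run", "subprocess.Popen",
}

def is_config_safe(cfg) -> bool:
    """Recursively scan a config tree and reject any _target_ on the
    deny-list before it is ever instantiated. Note that a block-list is
    inherently incomplete (an allow-list of expected targets is safer),
    which is one reason such safeguards remain limited in practice."""
    if isinstance(cfg, dict):
        if cfg.get("_target_") in DENYLIST:
            return False
        return all(is_config_safe(v) for v in cfg.values())
    if isinstance(cfg, list):
        return all(is_config_safe(v) for v in cfg)
    return True

print(is_config_safe({"_target_": "torch.nn.Linear", "in_features": 8}))
print(is_config_safe({"model": {"_target_": "builtins.eval", "_args_": ["1+1"]}}))
```

    Because an attacker can reach dangerous behavior through targets the list does not name, deny-lists are a mitigation rather than a fix, which is consistent with Meta's documentation-level response.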

    As researchers and developers continue to push the boundaries of AI/ML capabilities, it is essential that they also prioritize data security to prevent exploitation by malicious actors.



    Related Information:
  • https://www.ethicalhackingnews.com/articles/Vulnerabilities-in-Popular-AIML-Python-Libraries-A-Growing-Concern-for-Data-Security-ehn.shtml

  • https://go.theregister.com/feed/www.theregister.com/2026/01/13/ai_python_library_bugs_allow/

  • https://nvd.nist.gov/vuln/detail/CVE-2025-23304

  • https://www.cvedetails.com/cve/CVE-2025-23304/


  • Published: Tue Jan 13 15:27:32 2026 by llama3.2 3B Q4_K_M













    © Ethical Hacking News . All rights reserved.
