Ethical Hacking News
Picklescan, an open-source utility designed to parse Python pickle files and detect suspicious imports or function calls before they are executed, has been found to be vulnerable to critical security flaws. The three identified vulnerabilities, labeled as CVE-2025-10155, CVE-2025-10156, and CVE-2025-10157, respectively, can potentially allow malicious actors to execute arbitrary code by loading untrusted PyTorch models, effectively bypassing the tool's protections.
The emergence of these security flaws highlights a critical issue in machine learning security. Frameworks such as PyTorch have become integral to many applications and systems, and their use has driven an increased reliance on serialization formats like pickle for saving and loading models. Because loading a pickle file can automatically trigger the execution of arbitrary Python code, users and organizations should load only models from trusted sources or obtain model weights through safer alternative formats.
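The risk described above is built into the pickle protocol itself: an object's `__reduce__` method can name any callable, and `pickle.loads` invokes it during deserialization. A minimal, benign sketch (using `eval` on a harmless expression where a real payload would put something like `os.system`):

```python
import pickle

class Payload:
    """Benign demonstration: pickle executes the callable returned by
    __reduce__ at load time. A malicious model swaps in a dangerous
    callable instead of eval on a harmless expression."""
    def __reduce__(self):
        return (eval, ("2 + 2",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # eval("2 + 2") runs during deserialization
print(result)
```

No code in `Payload` is ever "called" by the loader in the usual sense; the bytes alone instruct the unpickler to look up and invoke the named callable, which is why scanning the file before loading it matters.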
The Picklescan vulnerabilities were discovered by JFrog and subsequently disclosed as part of a responsible disclosure process. According to David Cohen, a security researcher who contributed to the discovery, each of the identified vulnerabilities enables attackers to evade Picklescan's malware detection and potentially execute a large-scale supply chain attack by distributing malicious ML models that conceal undetectable malicious code.
The first vulnerability, CVE-2025-10155, is a file extension bypass: an attacker can evade Picklescan's detection simply by giving a standard pickle file a PyTorch-related extension such as .bin or .pt. The second, CVE-2025-10156, lets attackers disable ZIP archive scanning by introducing a Cyclic Redundancy Check (CRC) error into the archive. The third, CVE-2025-10157, allows malicious actors to sidestep Picklescan's unsafe globals check, achieving arbitrary code execution by getting around its blocklist of dangerous imports.
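The blocklist bypass in CVE-2025-10157 illustrates a general weakness of deny-lists: any dangerous import the list omits sails through. The inverse, allowlist approach described in the Python documentation fails closed instead. A minimal sketch (the `ALLOWED` set here is illustrative, not Picklescan's actual logic):

```python
import io
import pickle

class AllowlistUnpickler(pickle.Unpickler):
    """Allowlist the few globals a payload may reference; everything
    else, known-dangerous or not, is rejected. This is the inverse of
    a blocklist, so unknown imports fail closed."""
    ALLOWED = {("builtins", "list"), ("builtins", "dict")}  # illustrative

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

With this pattern, plain containers round-trip normally, while any pickle that references a callable outside the allowlist raises `UnpicklingError` before the callable is ever resolved.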
The successful exploitation of these vulnerabilities could potentially allow attackers to conceal malicious pickle payloads within files using common PyTorch extensions or introduce CRC errors into ZIP archives containing malicious models. Furthermore, attackers can craft malicious PyTorch models with embedded pickle payloads to bypass Picklescan's protections.
In response to the discovered vulnerabilities, Matthieu Maitre, the developer and maintainer of Picklescan, has released version 0.0.31, which addresses all three security flaws. This patch highlights the importance of responsible disclosure processes in identifying and addressing critical security vulnerabilities before they can be exploited by malicious actors.
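In practical terms, remediation is an upgrade plus a rescan of previously downloaded models. A hedged sketch of the commands (the model path is illustrative; consult the project's README for the exact CLI flags):

```shell
# Upgrade Picklescan to the patched release
pip install --upgrade "picklescan>=0.0.31"

# Rescan previously downloaded model files (path is illustrative)
picklescan --path ./models/pytorch_model.bin
```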
The emergence of these vulnerabilities underscores a broader issue in the realm of machine learning security, highlighting the need for continuous adaptation and innovation in security scanning tools and frameworks designed to detect and prevent adversarial attacks. Security researcher David Cohen emphasizes that "AI libraries like PyTorch grow more complex by the day, introducing new features, model formats, and execution pathways faster than security scanning tools can adapt." This widening gap between innovation and protection leaves organizations exposed to emerging threats that conventional tools simply weren't designed to anticipate.
Closing this gap requires a research-backed security proxy for AI models, continuously informed by experts who think like both attackers and defenders. By actively analyzing new models, tracking library updates, and uncovering novel exploitation techniques, this approach delivers adaptive, intelligence-driven protection against the vulnerabilities that matter most.
In conclusion, the discovery of Picklescan-specific security flaws highlights a critical need for ongoing vigilance and adaptation in machine learning security frameworks. As the landscape of machine learning continues to evolve at an unprecedented pace, it is imperative that developers, researchers, and organizations prioritize the development and deployment of robust and adaptable security solutions designed to detect and prevent adversarial attacks.
Related Information:
https://www.ethicalhackingnews.com/articles/Pickle-Specific-Security-Flaws-Exposed-A-Glimpse-into-the-Unseen-Risks-of-Machine-Learning-ehn.shtml
https://thehackernews.com/2025/12/picklescan-bugs-allow-malicious-pytorch.html
https://www.infosecurity-magazine.com/news/picklescan-flaws-expose-ai-supply/
https://nvd.nist.gov/vuln/detail/CVE-2025-10155
https://www.cvedetails.com/cve/CVE-2025-10155/
https://nvd.nist.gov/vuln/detail/CVE-2025-10156
https://www.cvedetails.com/cve/CVE-2025-10156/
https://nvd.nist.gov/vuln/detail/CVE-2025-10157
https://www.cvedetails.com/cve/CVE-2025-10157/
Published: Wed Dec 3 03:49:50 2025 by llama3.2 3B Q4_K_M