Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

Critical LangChain Core Vulnerability Exposes Secrets via Serialization Injection


Critical LangChain Core Vulnerability Exposes Secrets via Serialization Injection. A critical security flaw in the package's serialization injection mechanism could be exploited by an attacker to steal sensitive secrets and influence LLM responses through prompt injection, carrying a CVSS score of 9.3 out of 10.0.

  • The LangChain Core package has a critical security vulnerability (CVE-2025-68664) that allows attackers to steal secrets and influence LLM responses through prompt injection.
  • The vulnerability is due to the improper escaping of user-controlled dictionaries during serialization and deserialization.
  • The issue, dubbed "LangGrinch," carries a CVSS score of 9.3 out of 10.0 and affects versions >= 1.0.0, < 1.2.5, and < 0.3.81.
  • Upgrading to a patched version is recommended to ensure optimal protection against this vulnerability.


  • Critical LangChain Core vulnerability exposes secrets via serialization injection.

    A critical security flaw has been disclosed in LangChain Core that could be exploited by an attacker to steal sensitive secrets and even influence large language model (LLM) responses through prompt injection. The vulnerability, tracked as CVE-2025-68664, carries a CVSS score of 9.3 out of 10.0.

    The LangChain ecosystem is a core Python package that provides the core interfaces and model-agnostic abstractions for building applications powered by LLMs. The core LangChain Core (i.e., langchain-core) is part of this ecosystem, providing essential functionalities that support the integration of language models with other AI technologies.

    However, in December 2025, a critical security vulnerability was identified in the LangChain Core package. This vulnerability allows an attacker to exploit the serialization injection mechanism in the package's dumps() and dumpd() functions, leading to potential exploitation of secrets stored within environment variables when deserialization is performed with "secrets_from_env=True".

    The key issue lies in the way LangChain handles user-controlled dictionaries containing the 'lc' key structure. In this context, the 'lc' marker represents LangChain objects in the framework's internal serialization format. Unfortunately, the LangChain Core package does not properly escape these dictionaries during serialization and deserialization, which allows an attacker to inject arbitrary data into the system.

    This critical vulnerability has been dubbed "LangGrinch" by security researchers and carries a CVSS score of 9.3 out of 10.0, indicating its high potential impact on system stability and data integrity.

    According to Yarden Porat, a security researcher who discovered this issue, the crux of the problem lies in the failure of the LangChain Core functions to properly escape user-controlled dictionaries with 'lc' keys during serialization and deserialization. This results in the treatment of these dictionaries as legitimate LangChain objects rather than plain user data.

    This vulnerability could have various outcomes for an attacker seeking to exploit it, including:

    - The extraction of secrets stored within environment variables through the exploitation of the "secrets_from_env=True" option.
    - The instantiation of arbitrary classes within pre-approved trusted namespaces, such as langchain_core, langchain, and langchain_community.
    - Potentially leading to arbitrary code execution via Jinja2 templates.

    To mitigate this vulnerability, LangChain has released a patch that introduces new restrictive defaults in the load() and loads() functions by means of an allowlist parameter called "allowed_objects". This feature enables users to specify which classes can be serialized/deserialized, thus limiting potential exploitation.

    Furthermore, Jinja2 templates are now blocked by default, and the "secrets_from_env" option is set to "False" to disable automatic secret loading from the environment. These measures help minimize the risk of exploitation but underscore the importance of keeping LangChain Core updated with the latest patches as soon as they become available.

    The vulnerability affects the following versions of langchain-core:

    - >= 1.0.0, < 1.2.5 (Fixed in 1.2.5)
    - < 0.3.81 (Fixed in 0.3.81)

    It is recommended that users update to a patched version as soon as possible to ensure optimal protection against this critical vulnerability.

    In light of the potential impact of LangChain Core's serialization injection vulnerability, it highlights the complex and ever-evolving nature of modern software security threats. It also underscores the importance of staying informed about emerging vulnerabilities in the technologies used within our applications and ensuring that all necessary updates are applied promptly to mitigate these risks.

    Related Information:
  • https://www.ethicalhackingnews.com/articles/Critical-LangChain-Core-Vulnerability-Exposes-Secrets-via-Serialization-Injection-ehn.shtml

  • https://thehackernews.com/2025/12/critical-langchain-core-vulnerability.html

  • https://nvd.nist.gov/vuln/detail/CVE-2025-68664

  • https://www.cvedetails.com/cve/CVE-2025-68664/


  • Published: Fri Dec 26 07:07:45 2025 by llama3.2 3B Q4_K_M













    © Ethical Hacking News . All rights reserved.

    Privacy | Terms of Use | Contact Us