Ethical Hacking News
The Endpoints Enigma: Understanding the Risks of Exposed API Keys and NHIs in AI Infrastructure
Summary:
The rise of Large Language Models (LLMs) has brought about a new era of automation and efficiency, but it also poses significant security concerns. API keys and Non-Human Identities (NHIs) exposed across configuration files and pipelines can lead to identity sprawl and static credentials that remain usable for extended periods. To mitigate these risks, organizations must prioritize endpoint privilege management and zero-trust security principles. By taking a proactive approach to securing LLM endpoints, they can reduce the risk of exposure and keep their AI infrastructure secure against cyber-attacks.
The rise of Large Language Models (LLMs) has brought about a new era of automation and efficiency in various industries, including healthcare, finance, and customer service. However, as with any powerful technology, LLMs also come with their own set of security concerns. One area that requires particular attention is the exposure of API keys and Non-Human Identities (NHIs) across configuration files and pipelines.
API keys and NHIs are often used to authenticate and authorize access to cloud services and internal tools. When these credentials are scattered across multiple configuration files and pipelines, however, they become increasingly difficult to track and secure, a condition known as "secrets sprawl." As a result, organizations that rely heavily on LLMs may be leaving themselves vulnerable to cyber-attacks.
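One practical response to secrets sprawl is to scan repositories and pipeline configuration for strings that look like hardcoded credentials. The sketch below is a minimal illustration of that idea; the regex patterns are simplified examples, and production scanners such as gitleaks or TruffleHog ship far more extensive rule sets.

```python
import re
from pathlib import Path

# Simplified example patterns; real secret scanners use hundreds of rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}"
    ),
}

def scan_for_secrets(root: str) -> list[tuple[str, int, str]]:
    """Walk a directory tree and flag lines that resemble hardcoded secrets."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file (permissions, broken symlink, etc.)
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings
```

Running such a scan in CI catches credentials before they land in a shared repository, which is far cheaper than rotating a key after exposure.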
Another issue that arises from the exposure of API keys and NHIs is the risk of static credentials. Many NHIs, such as service accounts, hold long-lived credentials that are rarely rotated. This means that if an organization's infrastructure is breached, the compromised credentials can remain usable for extended periods, allowing attackers to access sensitive data and perform malicious activities.
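A simple way to surface stale static credentials is to audit their age against a rotation policy. The sketch below assumes a hypothetical inventory mapping credential names to creation timestamps; in practice that data would come from a secrets manager or a cloud IAM API, and the 90-day window is an illustrative policy, not a standard.

```python
from datetime import datetime, timedelta, timezone

# Example rotation window; an assumption for illustration, not a standard.
MAX_AGE = timedelta(days=90)

def stale_credentials(inventory: dict[str, datetime]) -> list[str]:
    """Return the names of credentials older than the rotation window."""
    now = datetime.now(timezone.utc)
    return [
        name
        for name, created in inventory.items()
        if now - created > MAX_AGE
    ]
```

Feeding such a report into an automated rotation job removes exactly the long-lived credentials that attackers depend on after a breach.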
Furthermore, NHIs pose a significant security risk in LLM environments because models rely on them continuously. When broad permissions are granted to NHIs without being regularly reviewed or updated, the result is "identity sprawl": the number of NHIs across different environments grows rapidly, making it increasingly difficult for security teams to monitor and manage access controls.
To understand the risks associated with exposed endpoints in LLM infrastructure, it's essential to consider the following factors:
- Publicly accessible APIs without authentication
- Weak or static tokens
- The assumption that internal means safe
- Temporary test endpoints that become permanent
- Cloud misconfigurations that expose services
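The first two factors above, publicly accessible APIs and weak tokens, can be checked directly: probe the endpoint with a request that carries no credentials and see whether it answers. The sketch below is a minimal, stdlib-only check under that assumption; a 401/403 suggests authentication is enforced, while a 2xx response to an anonymous request is a red flag worth investigating.

```python
import urllib.error
import urllib.request

def is_publicly_accessible(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint serves a request with no credentials."""
    req = urllib.request.Request(url, method="GET")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            # Any 2xx answer to an anonymous request means no auth gate.
            return 200 <= resp.status < 300
    except urllib.error.HTTPError as exc:
        # 401/403 indicate authentication is being enforced.
        return exc.code not in (401, 403)
    except (urllib.error.URLError, OSError):
        # Unreachable host: not evidence of public exposure.
        return False
```

Sweeping known internal hosts with a check like this, on a schedule, is one way to catch "temporary" test endpoints before they quietly become permanent.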
When cybercriminals compromise a single LLM endpoint, they can often gain access to much more than the model itself. Unlike traditional APIs that perform one function, LLM endpoints are commonly integrated with databases, internal tools or cloud services to support automated workflows. This means that one compromised endpoint can allow cybercriminals to move quickly and laterally across systems that already trust the LLM by default.
Exposed endpoints can jeopardize LLM environments through prompt-driven data exfiltration, abuse of tool-calling permissions, and indirect prompt injection.
To mitigate these risks, organizations must prioritize endpoint privilege management and zero-trust security principles. This includes enforcing least-privilege access for human and machine users, using Just-in-Time (JIT) access, monitoring and recording privileged sessions, rotating secrets automatically, removing long-lived credentials where possible, and implementing robust authentication for publicly accessible APIs.
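The JIT-access idea above can be sketched as minting short-lived, signed grants instead of handing out static keys. The following is a minimal illustration using HMAC-signed tokens with an expiry; the function names and token format are assumptions for this sketch, and a real deployment would use a secrets manager or an OAuth2/OIDC provider rather than hand-rolled tokens.

```python
import base64
import binascii
import hashlib
import hmac
import json
import time

def mint_token(secret: bytes, subject: str, ttl_seconds: int = 300) -> str:
    """Issue a short-lived grant: payload.signature, expiring on its own."""
    payload = json.dumps({"sub": subject, "exp": int(time.time()) + ttl_seconds})
    sig = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload.encode()).decode() + "." + sig

def verify_token(secret: bytes, token: str) -> bool:
    """Reject tokens with a bad signature or a past expiry."""
    try:
        payload_b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(payload_b64.encode())
    except (ValueError, binascii.Error):
        return False
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False
    return json.loads(payload)["exp"] > time.time()
```

Because each grant dies on its own, a leaked token is only useful for minutes, not months, which is exactly the property static service-account keys lack.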
By taking a proactive approach to securing LLM endpoints, organizations can reduce the risk of exposure and ensure that their AI infrastructure remains secure against cyber-attacks.
Related Information:
https://www.ethicalhackingnews.com/articles/The-Endpoints-Enigma-Understanding-the-Risks-of-Exposed-API-Keys-and-NHIs-in-AI-Infrastructure-ehn.shtml
https://thehackernews.com/2026/02/how-exposed-endpoints-increase-risk.html
https://blogs.cisco.com/security/detecting-exposed-llm-servers-shodan-case-study-on-ollama
Published: Mon Feb 23 07:07:18 2026 by llama3.2 3B Q4_K_M