Ethical Hacking News
Thousands of public Google Cloud API keys have been found exposed with Gemini access after API enablement, raising concerns about unauthorized access to sensitive data and abuse of AI capabilities. The discovery highlights the importance of continuous security testing and behavior profiling to identify anomalies and actively block malicious activity.
In a disturbing revelation, thousands of public Google Cloud API keys have been found to be exposed with Gemini access after API enablement. This discovery has sent shockwaves through the cybersecurity community, as researchers have identified nearly 3,000 Google API keys (identified by the prefix "AIza") embedded in client-side code to provide Google-related services like embedded maps on websites.
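Secret scanners typically find these keys by matching the "AIza" prefix in client-side code. A minimal sketch of such a scan, assuming the commonly observed 39-character key format (the literal prefix plus 35 URL-safe characters):

```python
import re

# Pattern commonly used by secret scanners for Google API keys:
# the literal prefix "AIza" followed by 35 URL-safe characters.
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z\-_]{35}")

def find_google_api_keys(text: str) -> list[str]:
    """Return all candidate Google API keys embedded in the given text."""
    return GOOGLE_API_KEY_RE.findall(text)

# Example: a key left in embedded-maps JavaScript (dummy value).
page = '<script>initMap({key: "AIza' + "A" * 35 + '"});</script>'
print(find_google_api_keys(page))
```

Matches are only candidates; confirming that a string is a live key still requires testing it against an API you are authorized to probe.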
The issue stems from the fact that when users enable the Gemini API on a Google Cloud project, existing API keys in that project gain surreptitious access to Gemini endpoints without any warning or notice. This allows attackers who scrape websites to get hold of such API keys and use them for nefarious purposes, including accessing sensitive files via the /files and /cachedContents endpoints, as well as making Gemini API calls, racking up huge bills for the victims.
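Whether a given key has quietly gained Gemini access can be checked against the public Generative Language endpoints the article names. A hedged auditing sketch that only constructs the probe URLs (the base URL and endpoint paths are assumptions drawn from the article; fetch them only for keys you own):

```python
from urllib.parse import urlencode

# Base of the public Gemini (Generative Language) API.
GEMINI_BASE = "https://generativelanguage.googleapis.com/v1beta"

def probe_urls(api_key: str) -> dict[str, str]:
    """Build the read-only URLs an auditor would request to check whether
    a key has Gemini access: model list, uploaded files, cached content."""
    qs = urlencode({"key": api_key})
    return {
        "models": f"{GEMINI_BASE}/models?{qs}",
        "files": f"{GEMINI_BASE}/files?{qs}",
        "cachedContents": f"{GEMINI_BASE}/cachedContents?{qs}",
    }

# A 200 response on any of these confirms the key is Gemini-enabled;
# the actual HTTP fetch is left out so the sketch stays side-effect free.
for name, url in probe_urls("AIza" + "A" * 35).items():
    print(name, url)
```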
Truffle Security, a company that conducted the research, found that creating a new API key in Google Cloud defaults to "Unrestricted," meaning it's applicable for every enabled API in the project, including Gemini. This means that thousands of API keys were deployed as benign billing tokens but are now live Gemini credentials sitting on the public internet.
The vulnerability was first discovered by Truffle Security researcher Joe Leon, who stated that with a valid key, an attacker can access uploaded files, cached data, and charge LLM usage to your account. Additionally, security researcher Tim Erlin from Wallarm noted that APIs in particular are tricky because changes in their operations or the data they can access aren't necessarily vulnerabilities, but they can directly increase risk.
The discovery highlights the importance of continuous security testing, vulnerability scanning, and behavior profiling for identifying anomalies and actively blocking malicious activity. It also underscores the need for organizations to profile how AI-enabled endpoints interact with prompts, generated content, and connected cloud services, since those interactions can expand the blast radius of a compromised key.
Google has since addressed the problem by implementing proactive measures to detect and block leaked API keys that attempt to access the Gemini API. However, it remains unclear if this issue was ever exploited in the wild.
Meanwhile, users who have set up Google Cloud projects are advised to check their enabled APIs and services and verify whether artificial intelligence (AI)-related APIs are enabled. If such APIs are enabled and any keys are publicly accessible (either in client-side JavaScript or checked into a public repository), those keys should be rotated.
As Truffle Security's Joe Leon noted, starting with the oldest keys first is a good approach: those are the keys most likely to have been deployed publicly under the old guidance that API keys are safe to share, and to have retroactively gained Gemini privileges when someone on the team enabled the API.
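Oldest-first rotation can be scripted from the JSON that `gcloud services api-keys list --format=json` emits, which includes a `createTime` field per key. A small sketch assuming that output shape (field names trimmed to the two used here):

```python
import json

# Trimmed sample of the JSON emitted by:
#   gcloud services api-keys list --format=json
# createTime is RFC 3339 in UTC, so string sort equals chronological sort.
keys_json = """[
  {"displayName": "maps-embed",  "createTime": "2019-03-01T10:00:00Z"},
  {"displayName": "new-backend", "createTime": "2024-07-12T08:30:00Z"},
  {"displayName": "legacy-site", "createTime": "2017-11-20T16:45:00Z"}
]"""

keys = json.loads(keys_json)
# Oldest keys first: most likely published under the old guidance.
for k in sorted(keys, key=lambda k: k["createTime"]):
    print(k["createTime"], k["displayName"])
```

Working through the resulting list top to bottom gives a rotation order that retires the riskiest keys first.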
The discovery of this vulnerability serves as a stark reminder of the ever-evolving nature of cybersecurity threats, where even seemingly benign systems can become vectors for malicious activity. As AI continues to transform industries and organizations, it is crucial that security protocols are continuously updated and refined to address these emerging risks.
Related Information:
https://www.ethicalhackingnews.com/articles/Thousands-of-Public-Google-Cloud-API-Keys-Exposed-with-Gemini-Access-After-API-Enablement-ehn.shtml
Published: Sat Feb 28 07:12:39 2026 by llama3.2 3B Q4_K_M