Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

Google Cloud's Vertex AI Vulnerability Exposes Sensitive Data and Private Artifacts


Google Cloud's Vertex AI Vulnerability Exposes Sensitive Data and Private Artifacts, highlighting the importance of proper configuration and access control when using cloud-based AI services. Organizations must take a proactive approach to securing their cloud environments and ensuring that their AI agents are properly configured to minimize the risk of unauthorized access.

  • The recent discovery of a security vulnerability in Google Cloud's Vertex AI platform highlights the importance of proper configuration and access control when using cloud-based AI services.
  • The vulnerability allows attackers to gain unauthorized access to sensitive data, compromise organizations' cloud environments, and create backdoors into critical systems by exploiting excessive permission scoping by default.
  • Google has updated its documentation and recommended customers use Bring Your Own Service Account (BYOSA) to replace the default service agent and enforce the principle of least privilege.
  • Organizations should take a proactive approach to securing their cloud environments, validating permission boundaries, restricting OAuth scopes, reviewing source integrity, and conducting controlled security testing before production rollout.



  • The recent disclosure of a security vulnerability in Google Cloud's Vertex AI platform has left many experts and users concerned about the potential risks associated with this popular artificial intelligence (AI) service. The issue, which was discovered by cybersecurity researchers at Palo Alto Networks Unit 42, highlights the importance of proper configuration and access control when using cloud-based AI services.

    According to a report shared with The Hacker News, Vertex AI's permission model can be misused by attackers who exploit the service agent's excessive permission scoping by default. This allows an attacker to gain unauthorized access to sensitive data, compromise the organization's cloud environment, and create backdoors into critical systems.

    The vulnerability was discovered when researchers found that the Per-Project, Per-Product Service Agent (P4SA) associated with a deployed AI agent built using Vertex AI's Agent Development Kit (ADK) had excessive permissions granted by default. This opened the door to a scenario where the P4SA's default permissions could be used to extract the credentials of a service agent and conduct actions on its behalf.

    After deploying the Vertex agent via Agent Engine, any call to the agent invokes Google's metadata service and exposes the credentials of the service agent, along with the Google Cloud Platform (GCP) project that hosts the AI agent, the identity of the AI agent, and the scopes of the machine that hosts the AI agent. Using this information, attackers were able to jump from the AI agent's execution context into the customer project, effectively undermining isolation guarantees and permitting unrestricted read access to all Google Cloud Storage buckets' data within that project.

    "This level of access constitutes a significant security risk, transforming the AI agent from a helpful tool into a potential insider threat," said Ofir Shaty, Unit 42 researcher. "Granting agents broad permissions by default violates the principle of least privilege and is a dangerous security flaw by design."

    In response to the vulnerability, Google has updated its official documentation to clearly spell out how Vertex AI uses resources, accounts, and agents. The tech giant has also recommended that customers use Bring Your Own Service Account (BYOSA) to replace the default service agent and enforce the principle of least privilege (PoLP) to ensure that the agent has only the permissions it needs to perform the task at hand.

    "Organizations should treat AI agent deployment with the same rigor as new production code. Validate permission boundaries, restrict OAuth scopes to least privilege, review source integrity and conduct controlled security testing before production rollout," Shaty added.

    The discovery of this vulnerability serves as a reminder of the importance of proper configuration and access control when using cloud-based AI services. It also highlights the need for organizations to take a more proactive approach to securing their cloud environments and ensuring that their AI agents are properly configured to minimize the risk of unauthorized access.

    Furthermore, the incident demonstrates the potential risks associated with misconfigured or compromised service agents in cloud-based AI platforms. In this scenario, the attacker was able to exploit the excessive permission scoping by default to extract credentials and gain unauthorized access to sensitive data and critical systems.

    The vulnerability also raises questions about the security of private artifacts stored within Google Cloud Storage buckets. The fact that attackers were able to download images from restricted repositories highlights the potential risks associated with storing sensitive data in cloud-based storage solutions.

    In light of this discovery, it is essential for organizations to take a more proactive approach to securing their cloud environments and ensuring that their AI agents are properly configured to minimize the risk of unauthorized access. This includes validating permission boundaries, restricting OAuth scopes to least privilege, reviewing source integrity, and conducting controlled security testing before production rollout.

    Additionally, organizations should also consider using Bring Your Own Service Account (BYOSA) to replace the default service agent and enforce the principle of least privilege (PoLP). This approach can help minimize the risk of unauthorized access and ensure that AI agents have only the permissions they need to perform their tasks.

    The incident also highlights the importance of proper configuration and access control when using cloud-based AI services. It serves as a reminder that even seemingly secure solutions can be vulnerable to exploitation if not properly configured or managed.

    In conclusion, the discovery of this vulnerability in Google Cloud's Vertex AI platform is a concerning incident that highlights the potential risks associated with cloud-based AI services. It emphasizes the importance of proper configuration and access control when using these services and serves as a reminder of the need for organizations to take a more proactive approach to securing their cloud environments.



    Related Information:
  • https://www.ethicalhackingnews.com/articles/Google-Clouds-Vertex-AI-Vulnerability-Exposes-Sensitive-Data-and-Private-Artifacts-ehn.shtml

  • https://thehackernews.com/2026/03/vertex-ai-vulnerability-exposes-google.html

  • https://cyberwebspider.com/the-hacker-news/vertex-ai-security-flaw-google-cloud/


  • Published: Tue Mar 31 10:29:39 2026 by llama3.2 3B Q4_K_M













    © Ethical Hacking News . All rights reserved.

    Privacy | Terms of Use | Contact Us