Thursday, April 3, 2025

Researchers Sound Alarm on Privilege Escalation Risks Within Google’s Vertex AI Machine Learning Ecosystem

Researchers have identified two vulnerabilities in Google's Vertex AI machine learning platform that, if successfully exploited, could allow attackers to escalate privileges and exfiltrate trained models from cloud storage.

Researchers at Palo Alto Networks' Unit 42 found that by abusing custom job permissions, they were able to escalate privileges and gain unauthorized access to all other services in the project.


They also found that deploying a poisoned model in Vertex AI could lead to the exfiltration of every other fine-tuned model in the project, posing a serious risk to proprietary and sensitive intellectual property.

Vertex AI is Google's platform for developing, training, and deploying custom machine learning models and AI applications at scale. The product was first introduced in May 2021.

Exploiting the privilege escalation flaw hinges on a feature called Vertex AI Pipelines, which lets users automate and monitor machine learning operations (MLOps) workflows by running custom jobs.

According to Unit 42's analysis, abusing the custom job pipeline lets an attacker escalate privileges and access data and systems that would otherwise be off limits. The custom job runs a specially crafted container image whose entrypoint spawns a reverse shell, giving the attacker backdoor access to the environment.
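For illustration, the payload described above would be carried in an ordinary-looking custom job specification. The sketch below is hypothetical: the field names loosely mirror the Vertex AI CustomJob REST shape, and the image URI is a placeholder standing in for the researchers' crafted container whose entrypoint opens the reverse shell.

```python
# Hypothetical sketch of a Vertex AI CustomJob payload like the one described.
# Field names loosely follow the Vertex AI REST API; values are placeholders.
def make_custom_job(image_uri: str) -> dict:
    return {
        "displayName": "innocuous-training-job",
        "jobSpec": {
            "workerPoolSpecs": [
                {
                    "machineSpec": {"machineType": "n1-standard-4"},
                    "replicaCount": 1,
                    # The attacker controls this image; its entrypoint would
                    # spawn the reverse shell once the job is scheduled.
                    "containerSpec": {"imageUri": image_uri},
                }
            ]
        },
    }
```

The key point the researchers make is that nothing in the job spec itself looks malicious: the danger lies entirely in what the referenced container image does once it runs with the service agent's permissions.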

The custom job, the security vendor said, runs in a tenant project with the permissions of a service agent account. That account can list all service accounts, manage storage buckets, and access BigQuery tables, access that could in turn be abused to reach internal Google Cloud repositories and exfiltrate sensitive data and images.

The second vulnerability involves deploying a malicious model in a tenant environment. When the model is deployed to an endpoint, it establishes a reverse shell and abuses the read-only permissions of the "custom-online-prediction" service account to enumerate Kubernetes clusters, fetch their credentials, and run arbitrary kubectl commands.

This step let the researchers move laterally from the GCP realm into Kubernetes. The lateral movement was possible because permissions between Google Cloud Platform (GCP) and Google Kubernetes Engine (GKE) are linked through IAM role bindings.

The researchers then used this access to view the newly created container image inside the Kubernetes cluster, obtain its image digest, and pull the image out of the cluster using the authentication token associated with the "custom-online-prediction" service account.

On top of that, the malicious model could also be weaponized to view, download, or export all large language models (LLMs) and their fine-tuned adapters.

This could have severe consequences if a developer unknowingly deploys a trojanized model from a public repository, allowing threat actors to exfiltrate proprietary machine learning (ML) and large language models (LLMs). Google has since addressed both shortcomings following responsible disclosure.

The research shows how the deployment of a single malicious model can put an entire AI environment at risk, the researchers noted: an attacker could use even one unverified model deployed on a production system to exfiltrate sensitive data, leading to severe model exfiltration attacks.

To guard against such risks, organizations should implement strict controls on model deployments and audit the permissions required to deploy a model in their tenant projects.
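One concrete form such an audit could take is scanning the project's IAM policy for service accounts holding overly broad roles that a hijacked custom job could abuse. The sketch below is illustrative, not an official baseline: it assumes a policy exported with `gcloud projects get-iam-policy PROJECT --format=json`, and the list of "broad" roles is our own assumption.

```python
# Hypothetical audit helper: flag service accounts that hold broad roles
# in an exported IAM policy. The role set below is an illustrative
# assumption, not an official Google-recommended baseline.
BROAD_ROLES = {
    "roles/owner",
    "roles/editor",
    "roles/storage.admin",
    "roles/bigquery.admin",
}

def overprivileged_service_accounts(policy: dict) -> set[str]:
    flagged = set()
    for binding in policy.get("bindings", []):
        if binding.get("role") in BROAD_ROLES:
            flagged.update(
                member for member in binding.get("members", [])
                if member.startswith("serviceAccount:")
            )
    return flagged
```

Running a check like this regularly would surface exactly the kind of over-permissioned service agent that made the custom-job escalation path valuable to the researchers.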

The disclosure comes as Mozilla's 0Day Investigative Network (0DIN) revealed that it's possible to interact with OpenAI ChatGPT's underlying sandbox environment ("/home/sandbox/.openai_internal/") through prompts, gaining the ability to upload and execute Python scripts, move files, and even download the LLM's playbook.

That said, OpenAI regards such interactions as intentional or expected behavior, given that code execution stays within the confines of the sandbox and is unlikely to break out of it.

Security researcher Marco Figueroa notes that for anyone exploring OpenAI's ChatGPT sandbox, it's important to understand that most activities within this containerized environment are intended features rather than security vulnerabilities.

Within this controlled environment, activities such as extracting data, uploading files, and running Python commands are fair game, so long as they stay within the boundaries set by the sandbox.

