New “Prompt Poaching” Attack Steals Users’ AI Conversations via Malicious Browser Extensions

Original news source: cybersecuritynews.com

For many users, engaging with an AI assistant requires opening a dedicated browser tab, which inherently isolates the AI from other browsing activities. While this separation improves privacy, it reduces usefulness and context.

To bridge this gap, AI-powered browser extensions have surged in popularity, allowing AI agents to seamlessly interact with emails, corporate portals, and personal documents across multiple tabs.

However, this convenience introduces a dangerous trade-off. Expel uncovered a new threat dubbed “prompt poaching,” in which malicious browser extensions silently monitor, copy, and exfiltrate sensitive AI conversations without user consent.

Prompt Poaching Attack

Security researchers have recently responded to dozens of incidents involving Chrome extensions secretly harvesting user interactions with AI assistants.

The mechanics of prompt poaching are straightforward but highly effective. Once installed, these rogue extensions actively monitor open browser tabs.

When they detect a loaded AI client, they use API interception or DOM scraping to capture both the user’s inputs and the AI’s responses.

The extension then packages this collected data and quietly transmits it to external command-and-control servers operated by the developers.
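The capture step follows a generic wrapping pattern: the extension replaces the routine that ships a prompt to the AI backend with a wrapper that copies the traffic before forwarding it, so the user notices nothing. A minimal Python sketch of that pattern follows; every name here is a hypothetical stand-in, and the "exfiltration" is an in-memory list rather than a real command-and-control server:

```python
exfiltrated = []  # stand-in for the attacker's command-and-control server

def send_chat_request(prompt):
    """Stand-in for the AI client's real API call to its backend."""
    return f"assistant reply to: {prompt!r}"

_original_send = send_chat_request  # keep a reference to the genuine call

def poached_send(prompt):
    response = _original_send(prompt)
    # Silently copy both sides of the conversation before returning control.
    exfiltrated.append({"prompt": prompt, "response": response})
    return response  # the user sees a normal, unmodified reply

send_chat_request = poached_send  # the interception: swap in the wrapper
```

After the swap, `send_chat_request("draft the Q3 board email")` still returns the expected reply while a copy of the full exchange lands in `exfiltrated`. In a real extension, the same swap is applied to `window.fetch` (or the data is read straight out of the page via DOM scraping), and the list is a remote server.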

Threat actors deploy these malicious capabilities through two primary vectors. The first method involves cloning popular, legitimate extensions and injecting them with data-stealing code.

For example, attackers have successfully distributed several malicious clones of tools originally developed by AITOPIA.

Observed clones include “Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI” (extension ID fnmihdojmnkclgjpcoonokmkhjpjechg), “AI Sidebar with Deepseek, ChatGPT, Claude, and more” (ID inhcgfpbfdjbjogdfjbclgolkmhnooop), and “Talk to ChatGPT” (ID hoinfgbmegalflaolhknkdaajeafpilo).

The second method involves compromising an established tool with a wide user base.

A notable example is Urban VPN Proxy, tracked under the extension ID eppiocemhmnlbhjplcgkofciiegomcon, which operated as a legitimate service for some time.

According to Expel research, once a large enough audience was established, the developers silently introduced prompt poaching capabilities in a subsequent update, immediately exposing all existing users to data exfiltration.

Organizational Risks and Impact

The exfiltration of AI prompts presents severe risks to corporate security and personal privacy.

Employees frequently rely on AI assistants to draft strategic emails, summarize proprietary documents, or debug internal code, inadvertently feeding highly sensitive data directly into these tools.

When prompt poaching occurs, it exposes intellectual property, confidential customer data, and proprietary business logic.

This stolen information can easily fuel targeted phishing campaigns, facilitate identity theft, or end up brokered on underground hacker forums.

To combat the threat of prompt poaching, organizations must adopt strict browser management policies rather than relying on user discretion.

Security teams should proactively restrict unapproved plugins using Group Policy and centralized browser management consoles.
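For Chrome and other Chromium-based browsers, one way to enforce this is through the enterprise policies `ExtensionInstallBlocklist` and `ExtensionInstallAllowlist`, deliverable via Group Policy on Windows or a managed-policy JSON file elsewhere (e.g. `/etc/opt/chrome/policies/managed/` on Linux). A sketch of such a policy file blocking the extension IDs named in this article; a stricter posture blocks `"*"` and allowlists only approved extensions:

```json
{
  "ExtensionInstallBlocklist": [
    "fnmihdojmnkclgjpcoonokmkhjpjechg",
    "inhcgfpbfdjbjogdfjbclgolkmhnooop",
    "hoinfgbmegalflaolhknkdaajeafpilo",
    "eppiocemhmnlbhjplcgkofciiegomcon"
  ]
}
```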

Furthermore, organizations should address internal productivity gaps by steering employees toward official desktop clients or first-party extensions developed directly by trusted AI vendors.

Finally, conducting periodic audits of installed extensions and monitoring network traffic for anomalous outbound connections can help identify and neutralize these stealthy threats before significant data loss occurs.
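An extension audit can be as simple as comparing the IDs installed in each profile against a blocklist of known-bad identifiers. The sketch below uses the IDs named in this article; the directory layout reflects how Chrome stores each extension under `Extensions/<32-char id>/` inside the profile, and the example inventory is synthetic:

```python
from pathlib import Path

# Extension IDs called out in this article (AITOPIA clones plus Urban VPN Proxy).
KNOWN_BAD_IDS = {
    "fnmihdojmnkclgjpcoonokmkhjpjechg",
    "inhcgfpbfdjbjogdfjbclgolkmhnooop",
    "hoinfgbmegalflaolhknkdaajeafpilo",
    "eppiocemhmnlbhjplcgkofciiegomcon",
}

def installed_extension_ids(profile_dir):
    """List extension IDs found under a Chrome profile's Extensions/ directory."""
    ext_root = Path(profile_dir) / "Extensions"
    if not ext_root.is_dir():
        return []
    return [p.name for p in ext_root.iterdir() if p.is_dir()]

def flag_installed(installed_ids):
    """Return the sorted subset of installed IDs that match the known-bad list."""
    return sorted(KNOWN_BAD_IDS.intersection(installed_ids))

if __name__ == "__main__":
    # Audit a synthetic inventory rather than a live profile:
    inventory = [
        "aapbdbdomjkkjkaonfhkkikfgjllcleb",  # an unrelated, benign extension ID
        "eppiocemhmnlbhjplcgkofciiegomcon",  # Urban VPN Proxy, named above
    ]
    print(flag_installed(inventory))  # → ['eppiocemhmnlbhjplcgkofciiegomcon']
```

In practice the same comparison would be run across every user profile via endpoint management tooling, and paired with network monitoring for unexpected outbound connections from browser processes.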
