Agentic LLM Browsers Expose New Attack Surface for Prompt Injection and Data Theft

In Cybersecurity News - Original News Source is cybersecuritynews.com by Blog Writer

Artificial intelligence is changing how people browse the internet. AI-powered browsers no longer just show web pages — they read content, take actions, and complete tasks for the user.

These tools, called agentic LLM browsers, let users give simple commands like “book a meeting” or “summarize my emails,” and the browser handles the rest. While this sounds useful, it brings a serious security cost that is only now coming into view.

Agentic LLM browsers work by connecting an AI model directly to the browser’s internal systems, giving the AI the ability to click buttons, fill forms, and interact with files without asking the user to approve each step.

Well-known examples include Comet by Perplexity, Atlas by OpenAI, Microsoft Edge Copilot, and Brave Leo AI.

Each product is built differently, but they all share the same problem: to function properly, they must break through the security walls that traditional browsers spent decades building.

Varonis Threat Labs researchers identified architectural vulnerabilities across these agentic browsers. Their research found that the same design choices making these tools powerful also make them easy to exploit.

By linking the AI model to local browser processes through privileged extensions and internal channels, these browsers create a control path that security frameworks were never designed to handle.

The attack surface exposed is broad. A web vulnerability like Cross-Site Scripting (XSS), which in a standard browser typically affects one website, can now hand an attacker complete control over the entire browsing session.

Using a method called indirect prompt injection, a malicious webpage embeds hidden instructions into the AI’s view — ones the user never sees, but the AI follows without question.

These commands can force the agent to read private files, send emails as the user, navigate to phishing pages, or silently download malware onto the device, far exceeding the damage of any traditional browser attack.

These attacks are hard to detect since the agent acts using real user credentials, malicious activity looks identical to normal browser behavior, giving attackers time to operate undetected.

How the Communication Bridge Becomes a Weapon

The most dangerous element in agentic LLM browsers is the trusted communication channel between the AI backend and the browser’s internal components.

Comet uses a feature called externally_connectable, allowing approved domains such as perplexity.ai to send commands directly to a powerful background extension.

That extension carries the debugger permission, which grants full programmatic control over the browser — including the ability to click, scroll, type, and read content across any open tab.

The Comet Agent Extension Permissions (Source – Varonis)

This extension runs quietly and cannot be turned off through standard browser settings. If an attacker executes malicious JavaScript on any approved domain, they can use that trusted origin to push unauthorized commands through the same channel.

Varonis Threat Labs confirmed during testing that XSS on a trusted domain could allow an attacker to invoke the GetContent tool and pull local files from the user’s machine.

Using the Agent Extension’s GetContent Tool to Read a Local File from the OS Computer (Source – Varonis)

Microsoft Edge Copilot faces the same risk, as the researchers called the Edge.Context.GetDocumentBody tool in a continuous loop, capturing live page data and forwarding it to an external server, turning a basic reading tool into a live surveillance instrument.

Exfiltrating Content from a Private GitHub Repository (Source – Varonis)

Security teams should monitor browser processes for unexpected file reads, unusual outbound connections, or browser actions that carry user-level authority without clear user instruction.

Developers should enforce least-privilege policies for all extensions with elevated permissions and rigorously validate any external data the AI processes.

Individual users should keep browsers updated at all times, as Varonis confirmed that a prompt injection vulnerability discovered through embedded page titles was patched during the research period.

Organizations are encouraged to deploy data-aware detection tools that can identify browser activity appearing legitimate on the surface but lacking genuine user intent.

Follow us on Google News, LinkedIn, and X to Get More Instant UpdatesSet CSN as a Preferred Source in Google.