Critical Anthropic’s MCP Vulnerability Enables Remote Code Execution Attacks

In Cybersecurity News - Original News Source is cybersecuritynews.com by Blog Writer

A critical flaw in Anthropic’s Model Context Protocol (MCP) exposes over 150 million downloads to potential compromise. The vulnerability could enable full system takeover across up to 200,000 servers.

The OX Security Research team identified the flaw as a fundamental design decision embedded in Anthropic’s official MCP SDKs across every supported programming language, including Python, TypeScript, Java, and Rust.

Unlike a traditional coding bug, this vulnerability is architectural, meaning any developer building on Anthropic’s MCP foundation unknowingly inherits the exposure from the ground up.

The flaw enables Arbitrary Command Execution (RCE) on any system running a vulnerable MCP implementation.

Successful exploitation grants attackers direct access to sensitive user data, internal databases, API keys, and chat histories, effectively handing over complete control of the affected environment.

Researchers identified four distinct exploitation families:

  • Unauthenticated UI Injection targeting popular AI frameworks.
  • Hardening Bypasses in supposedly protected environments like Flowise.
  • Zero-Click Prompt Injection in AI IDEs, including Windsurf and Cursor.
  • Malicious Marketplace Distribution, with 9 out of 11 MCP registries successfully poisoned with a malicious test payload.

OX Security confirmed successful command execution on six live production platforms, including critical vulnerabilities in LiteLLM, LangChain, and IBM’s LangFlow.

The research produced at least 10 CVEs spanning multiple high-profile projects. Several critical flaws have been patched, including CVE-2026-30623 in LiteLLM and CVE-2026-33224 in Bisheng.

MCP Disclosure Timeline (Source: OX Security)

In contrast, others remain unpatched and in a “reported” state, covering tools like GPT Researcher, Agent Zero, Windsurf, and DocsGPT.

OX Security repeatedly recommended to Anthropic a protocol-level patch that would have immediately protected millions of downstream users.

Anthropic declined, describing the behavior as “expected.” The company did not object when researchers notified them of their intent to publish.

This response comes just days after Anthropic unveiled Claude Mythos, positioned as a tool to help secure the world’s software, a contrast researchers describe as a call to action for Anthropic to apply “Secure by Design” principles to its own infrastructure first.

How to Protect Your Environment

  • Block public internet access to AI services connected to sensitive APIs or databases.
  • Treat all external MCP configuration input as untrusted; block or restrict user-controlled inputs to STDIO parameters.
  • Install MCP servers only from verified sources such as the official GitHub MCP Registry.
  • Run MCP-enabled services inside sandboxes with restricted permissions.
  • Monitor all tool invocations for unexpected background activity or data exfiltration attempts.
  • Update all affected services to their latest patched versions immediately.

OX Security has shipped platform-level detections to identify unsafe STDIO MCP configurations in customer codebases and AI-generated code.

Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.