Hackers Exploit GitHub Copilot Flaw to Exfiltrate Sensitive Data

In Cybersecurity News - Original News Source is cybersecuritynews.com by Blog Writer

A recently disclosed high-severity vulnerability in GitHub Copilot Chat allowed attackers to silently siphon sensitive data from private repositories.

Tracked as CVE-2025-59145 with a near-perfect CVSS score of 9.6, the flaw enabled the theft of source code, API keys, and cloud secrets without requiring the execution of any malicious code.

Dubbed “CamoLeak,” this exploit highlights a growing threat in AI-assisted development.

A security researcher publicly disclosed the vulnerability in October 2025, shortly after GitHub patched the issue in August 2025 by disabling image rendering in Copilot Chat.

The CamoLeak Attack Chain

GitHub Copilot Chat reviews pull requests by reading descriptions, code, and repo files using the developer’s access permissions.

CamoLeak weaponized this trusted access by hiding malicious instructions inside GitHub’s invisible markdown comment syntax.

Because these comments do not render in the standard web interface, human reviewers saw nothing suspicious.

However, Copilot ingested the raw text and treated the hidden prompt as a legitimate command.

The attack unfolded in four distinct phases:

  • The attacker submitted a PR containing hidden prompt injection instructions in the description.
  • A developer with private repository access asked Copilot to review the PR, unknowingly feeding the hidden instructions to the AI.
  • The injected prompt directed Copilot to search the codebase for sensitive data, such as AWS keys, and encode the findings in base16.
  • Copilot embedded the encoded data into pre-signed image addresses, sending requests to the attacker’s server to reconstruct the stolen data character by character as the victim’s browser rendered the response.

The most sophisticated aspect of CamoLeak was its ability to bypass GitHub’s Content Security Policy (CSP).

Normally, a CSP blocks images from loading from untrusted external hosts to prevent exactly this kind of data leakage.

To evade this, attackers pre-computed a dictionary of valid, signed addresses for GitHub’s Camo image proxy.

Each address pointed to a transparent 1×1 pixel on the attacker’s server and represented a single encoded character.

Because the outbound traffic routed through GitHub’s own trusted infrastructure, it looked like normal image loading and bypassed standard network egress controls.

While CamoLeak was specific to GitHub, the underlying threat applies to any AI assistant with deep system access, such as Microsoft 365 Copilot or Google Gemini.

Whenever untrusted content can influence an AI’s instruction stream, it creates a covert data exfiltration pathway.

As traditional monitoring misses data exfiltration via trusted channels, security providers stress evolving defenses and stopping attacks at the endpoint to break the kill chain.

Solutions like BlackFog’s ADX platform focus on monitoring device outbound traffic, blocking sensitive information from leaving regardless of whether the transfer is initiated by an attacker or an exploited AI proxy.

Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.