EmailGPT Vulnerability Let Attackers Access Sensitive Data

In Cybersecurity News - Original News Source is cybersecuritynews.com by Blog Writer

Post Sharing

A new prompt injection vulnerability has been discovered in the EmailGPT service. This API service and Google Chrome plugin help users write emails in Gmail using OpenAI’s GPT model.

The prompt injection vulnerability arises when an attacker manipulates a large language model (LLM) using manipulated inputs, allowing the LLM to execute the attacker’s intentions deliberately. 

With a CVSS base score of 6.5, this vulnerability—CVE-2024-5184—indicates a medium severity level.

Analyze any MaliciousURL, Files & Emails & Configuration With ANY RUN Start your Analysis

“Exploitation of this vulnerability would lead to intellectual property leakage, denial-of-service, and direct financial loss through an attacker making repeated requests to the AI provider’s API which are pay-per-use”, Synopsys Cybersecurity Research Center (CyRC) shared with Cyber Security News.

Prompt Injection in EmailGPT Service

A large language model (LLM) is vulnerable to prompt injection when an attacker manipulates it with specially constructed inputs, leading the LLM to carry out the attacker’s plans unintentionally. 

This can be accomplished either directly—by “jailbreaking” the system prompt—or indirectly—by manipulating external inputs, which could result in social engineering, data exfiltration, and other issues.

Researchers identified a prompt injection vulnerability in the EmailGPT service.

A malicious user can inject a direct prompt and take control of the service logic since the service uses an API. 

Attackers can take advantage of this vulnerability by forcing the AI service to execute unwanted prompts or leak the usual hard-coded system prompts.

When a malicious prompt is submitted to EmailGPT, the system will react by giving the request for harmful information.

Anyone with access to the service can take advantage of this vulnerability.

The main EmailGPT software branch is impacted. Repeatedly requesting unapproved APIs poses serious threats, such as theft of intellectual property, denial-of-service attacks, and financial damage. 

Recommendation

To reduce any possible threats, CyRC therefore recommended users remove EmailGPT applications from their networks right away.

Looking for Full Data Breach Protection? Try Cynet's All-in-One Cybersecurity Platform for MSPs: Try Free Demo