Risks of Sharing Sensitive Corporate Data with ChatGPT

ChatGPT is a recent development in commercial AI technology from OpenAI; it was launched in November 2022.

Since its launch, the tool has gained over 67 million users and averages 21.1 million monthly users.

ChatGPT's Inroads into the Workplace

Early on, people used ChatGPT to write poems, school essays, and song lyrics. Later it moved into the workplace, helping employees be more productive.

According to a report from Cyberhaven, over 5.6% of employees use ChatGPT in the workplace, and they feel it makes them 10 times more productive.

On the other hand, problems with ChatGPT are also on the rise as employees paste sensitive company data into it.

Cyberhaven found that over 4.9% of employees have pasted their company's sensitive data into ChatGPT. Because the tool uses content provided by users as training data to improve, this may pose serious risks.

“On March 1, our product detected a record 3,381 attempts to paste corporate data into ChatGPT per 100,000 employees, defined as “data egress” events in the chart below.”
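Cyberhaven has not published the internals of its detection, but as a rough, hypothetical sketch of how an egress rule might flag sensitive content before it is pasted into ChatGPT, a pattern-based check could look something like this (the patterns and function names are illustrative, not Cyberhaven's product):

```python
import re

# Hypothetical patterns an egress rule might scan for; a real DLP
# product uses far richer, context-aware classification than regexes.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_marker": re.compile(r"(?i)\b(?:confidential|internal use only)\b"),
}

def flag_egress(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in text that
    is about to leave the corporate boundary (e.g., a ChatGPT paste)."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    paste = "Debug this auth call, key is sk-abc123def456ghi789jkl012"
    matches = flag_egress(paste)
    if matches:
        print(f"Egress event flagged: {matches}")  # ['api_key']
```

A real product would intercept the paste or upload event at the endpoint rather than rely on simple patterns, but the basic idea of classifying outbound text is the same.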

Several people have already used the ChatGPT API to create impressive open-source analysis tools that can make the jobs of cybersecurity researchers much easier.
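To illustrate what building on that API involves, here is a minimal, hypothetical sketch using OpenAI's official Python client (openai>=1.0); the security-analysis use case and prompt are assumptions for illustration, not a specific tool from the article:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def explain_log_line(log_line: str) -> str:
    """Ask the model to explain a suspicious log entry -- the kind of
    helper an open-source analysis tool might wrap in a CLI."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model choice is an assumption
        messages=[
            {"role": "system",
             "content": "You are a security analyst. Explain log entries concisely."},
            {"role": "user", "content": log_line},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(explain_log_line(
        "Failed password for root from 203.0.113.7 port 54022 ssh2"))
```

Note that anything sent through such a tool leaves the corporate boundary, which is exactly the egress risk the Cyberhaven figures describe.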

Leak of Sensitive Data to ChatGPT

Usage of ChatGPT is growing exponentially every day; in an average week, over 100,000 employees added confidential documents, source code, and client data to the tool.

“At the average company, just 0.9% of employees are responsible for 80% of egress events — incidents of pasting company data into the site.”

ChatGPT has also been banned by companies like JPMorgan and Verizon, and by education institutions like the New York City Department of Education.

ChatGPT is also being widely used by cybercriminals as part of new techniques they have been experimenting with.
