77% of Employees Share Company Secrets on ChatGPT, Compromising Enterprise Policies

Cybersecurity News | Original source: cybersecuritynews.com

Corporate data security faces an unprecedented crisis as new research reveals widespread employee misuse of generative AI platforms.

A comprehensive study examining enterprise browsing behavior has uncovered alarming patterns of sensitive data exposure across organizations worldwide.

The research, based on real-world telemetry from enterprise browsers, demonstrates that artificial intelligence tools have become the primary vector for unauthorized data transfer from corporate environments.

The study exposes how rapidly generative AI has integrated into workplace routines, with 45% of enterprise users now actively engaging with AI platforms.

ChatGPT dominates this landscape, capturing 43% of overall employee usage and representing 92% of all generative AI activity within organizations.

This remarkable adoption rate places AI tools alongside established enterprise categories like email and file sharing in terms of daily utilization.

Most concerning is the scale of sensitive information exposure through these platforms. The research reveals that 77% of employees regularly paste data into generative AI tools, with 82% of this activity occurring through unmanaged personal accounts that bypass corporate oversight.

This behavior has positioned generative AI as the leading channel for corporate-to-personal data exfiltration, accounting for 32% of all unauthorized data movement outside sanctioned environments.

LayerX Security analysts identified these patterns through comprehensive monitoring of enterprise browser activity, providing unprecedented visibility into employee interactions with AI platforms.

Their research methodology involved deploying security solutions directly within user browsers across multiple large-scale enterprises, capturing complete visibility into data flows between corporate systems and external AI services.

The financial and compliance implications are staggering, with 40% of files uploaded to generative AI platforms containing personally identifiable information (PII) or payment card industry (PCI) data.

Similarly, 22% of data pasted into these tools includes sensitive regulatory information. This exposure creates substantial risks for organizations subject to data protection regulations like GDPR, HIPAA, or SOX compliance requirements.
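Classifying text as PII or PCI data is typically done with pattern matching plus a checksum to weed out false positives. As a minimal sketch of the idea (the patterns and function names here are illustrative assumptions, not LayerX's actual detection rules), candidate card numbers can be validated with the Luhn checksum before being flagged:

```python
import re

# Hypothetical patterns; production DLP engines use far richer rule sets.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")          # US SSN-style PII
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")        # candidate card numbers

def luhn_valid(number: str) -> bool:
    """Luhn checksum used to validate candidate payment card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def contains_sensitive(text: str) -> bool:
    """Return True if the text appears to contain PII or PCI data."""
    if SSN_RE.search(text):
        return True
    for match in CARD_RE.finditer(text):
        if luhn_valid(match.group()):
            return True
    return False
```

Checks like these are what make the 40% (files) and 22% (pastes) figures measurable at all: the classifier runs on content as it crosses the browser boundary, not after the fact.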

The research reveals a critical identity management crisis within enterprise environments, where traditional access controls have failed to contain employee behavior.

Personal account usage dominates high-risk categories, with 67% of generative AI access occurring through unmanaged accounts that exist outside corporate identity systems.

This pattern extends beyond AI tools, affecting business-critical applications including Salesforce (77% non-corporate access), Microsoft Online (68% non-corporate), and Zoom (64% non-corporate).

Even when employees use corporate credentials, authentication weaknesses persist across enterprise systems. The study found that 83% of ERP logins and 71% of CRM access occur without single sign-on (SSO) federation, effectively treating corporate accounts like personal ones.

This creates massive visibility gaps where sensitive business workflows operate outside IT oversight and security controls.

Copy-paste is the most dangerous data transfer method because it bypasses traditional data loss prevention (DLP) systems entirely: pastes happen inside an already-authorized browser session rather than as the discrete file transfers or email attachments that network and endpoint DLP tools typically inspect.

Employees average 46 paste operations daily, with personal accounts generating an average of 15 pastes per day, including at least 4 containing sensitive data.

Popular destinations include ChatGPT, Google services, Databricks, LinkedIn, Snowflake, and Slack, demonstrating how corporate information flows into diverse external platforms through routine productivity activities.
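Per-user averages and destination rankings like those above would be rolled up from individual browser paste events. A minimal sketch of that aggregation, assuming a hypothetical event schema (the field names and values here are not taken from the report):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class PasteEvent:
    user: str
    destination: str       # e.g. "chatgpt.com", "slack.com" (illustrative)
    corporate_account: bool
    sensitive: bool        # set by an upstream content classifier

def summarize(events):
    """Roll per-event browser telemetry up into the kinds of daily
    figures the study cites: pastes per user, sensitive pastes routed
    through personal (non-corporate) accounts, and top destinations."""
    pastes_per_user = Counter(e.user for e in events)
    sensitive_personal = sum(
        1 for e in events if e.sensitive and not e.corporate_account
    )
    top_destinations = Counter(e.destination for e in events)
    return pastes_per_user, sensitive_personal, top_destinations
```

The key design point is that the events are captured in the browser itself, which is the only vantage point that sees both the pasted content and whether the destination account is corporate or personal.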

Chat and instant messaging applications compound these risks, with 87% of activity occurring through unmanaged accounts while 62% of users paste PII/PCI data into these platforms.

This combination of high personal account usage and frequent sensitive data exposure makes messaging apps among the most dangerous channels for unauthorized information transfer.
