Italy Blocks ChatGPT Temporarily Over Privacy Concerns

In Cybersecurity News - Original News Source is cybersecuritynews.com by Blog Writer


According to the country’s privacy regulator, Italian authorities have placed a temporary hold on ChatGPT over data privacy concerns.

Amid the recent surge of artificial intelligence chatbots, Italy is the first Western country to take such action against one of these bots, ChatGPT.

As a result of the restriction, the web version of ChatGPT, one of the most popular writing assistants, can no longer be accessed in Italy.

Italy Temporarily Blocks ChatGPT

On March 20, 2023, ChatGPT suffered a data breach that exposed user conversations and the payment information of paying subscribers.

The breach raised privacy concerns, and as a result, the Italian government decided to block ChatGPT over the issue.

The Privacy Guarantor pointed out that OpenAI fails to inform users and other parties whose data is collected. In short, its provision states that OpenAI lacks a legal basis for the mass collection and storage of personal data used to train the platform’s algorithms.

The checks also found that the information ChatGPT provides does not always correspond to actual data, compromising the accuracy of its personal data processing.

Why Did Italy Block ChatGPT?

Despite the service’s minimum age requirement of 13, the regulator notes that the absence of age verification filters exposes minors to responses that are inappropriate for their level of development and self-awareness.

Until OpenAI takes the corrective action described by the Italian Data Protection Authority, a temporary limit is placed on the company’s processing and storage of data belonging to Italian users.

The Italian watchdog, Garante, has ordered OpenAI to disclose within 20 days the measures it has implemented to safeguard the privacy of users’ data.

Failure to comply could lead to a penalty of up to 20 million euros (almost $22 million) or 4% of the company’s yearly global revenue.

A representative from OpenAI responded to the criticism by saying:

“Our AI systems are trained using less personal information, so we don’t want our AI learning about individuals but about the world as a whole.”

In addition, OpenAI believes that AI regulation is necessary to ensure a safe future, and the company affirmed that it looks forward to working closely with Garante to explain how its systems are built and used, and how people can benefit from them.

