OpenAI's Chatbot Breaks EU Privacy Law, Italian Agency Says

(January 29, 2024, 10:57 PM EST) -- Italy's data protection regulator revealed Monday that its investigation into OpenAI has found that the artificial intelligence research company's popular text generator ChatGPT isn't compliant with the European Union's stringent data protection rules. 

The agency's findings come less than a year after the regulator moved last March to temporarily suspend the ChatGPT service in Italy due to concerns that the company's data processing practices ran afoul of the EU's General Data Protection Regulation. 

The emergency interim ban was lifted less than a month later, after OpenAI addressed the issues raised by the Italian data protection authority, also known as the Garante, within the time period allotted by the regulator. 

But despite lifting the suspension, the Garante continued its investigation into the way that Microsoft-backed OpenAI handles the consumer data used to fuel its ChatGPT tool, leading to the announcement Monday that it had notified OpenAI that "the available evidence pointed to the existence of breaches of the provisions contained in the EU GDPR."

The Italian regulator gave OpenAI 30 days to submit its counterclaims concerning the alleged data privacy law breaches, although it didn't specify in its announcement how OpenAI purportedly violated the bloc's data protection rules. 

When it first launched its investigation into OpenAI last March, the Garante offered more details about its reservations concerning OpenAI's chatbot. 

In imposing a temporary ban on OpenAI's processing of Italian users' data, the regulator at the time took specific issue with the company's data collection practices and its alleged failure to implement an age verification system to ensure children were being adequately protected. 

The Garante alleged that OpenAI provides "no information" to users whose data is being collected and that the company didn't appear to have a valid legal basis to support its "massive collection and processing of personal data in order to 'train' the algorithms on which the platform relies." 

Additionally, the information being processed by ChatGPT wasn't always accurate, and the platform lacked an age verification mechanism that would prevent children from receiving responses that were "absolutely inappropriate to their age and awareness," the Italian regulator found in concluding that there was "no way" for ChatGPT to continue processing data lawfully. 

OpenAI was given 20 days to notify the Italian regulator of the measures it had implemented to address these issues, with the Garante noting that failure to do so could result in a fine of up to €20 million or 4% of the company's global annual revenue. 

While OpenAI's response was enough to have ChatGPT come back online in Italy and avoid an immediate penalty, the Garante forged ahead with its investigation into the service, leading to Monday's announcement. 

OpenAI said in a statement provided to Law360 that it plans to "continue to work constructively with the Garante." The company added that it believes that its practices "align with GDPR and other privacy laws, and we take additional steps to protect people's data and privacy."

"We want our AI to learn about the world, not about private individuals," the company said. "We actively work to reduce personal data in training our systems like ChatGPT, which also rejects requests for private or sensitive information about people."

The Italian regulator's action marks the latest move by policymakers around the globe to promote the safe and reliable deployment of emerging AI technologies since OpenAI released its ChatGPT tool in late 2022. 

In the EU, policymakers last month inched closer to enacting the world's first comprehensive law to tackle the potential risks posed by AI systems such as ChatGPT after they reached an agreement on a common text for the landmark EU AI Act.

The groundbreaking law proposes a tiered, risk-based approach to regulation that would place certain requirements on the use of systems identified as "high risk" and ban outright certain applications of the technology, including some uses of biometric systems, that pose an "unacceptable risk" to EU residents' fundamental rights.

Policymakers in the U.S. have also been focused on the topic. The Federal Trade Commission revealed in July that it had opened an investigation into whether OpenAI has mishandled personal data or engaged in other consumer protection violations, and the agency moved in November to approve a new process that will streamline its staff's ability to use civil investigative demands in investigations of AI-related products and services.

The White House has also taken several significant steps aimed at managing the risks and harnessing the rewards of AI, including issuing a comprehensive executive order in October that requires federal agencies to establish new standards for AI safety, security, privacy and innovation across a range of industries.

And at the state level, California is again among the leaders on this topic, with the state's first-of-its-kind Privacy Protection Agency in November proposing draft rules that would give consumers the ability to opt out of providing companies with personal information that could be used to fuel artificial intelligence, as well as the right to obtain more information about businesses' use of the technology. 

--Editing by Emily Kokoll.

Update: This article has been updated to add comment from OpenAI. 

For a reprint of this article, please contact reprints@law360.com.
