On Friday, Italy’s privacy regulator announced it had imposed a temporary ban on the contentious AI application ChatGPT, citing concerns over user data protection and the inability to verify user age.
The Italian Data Protection Authority’s decision temporarily restricts the processing of Italian users’ data by OpenAI, the Microsoft-backed American company that developed ChatGPT. The agency has also opened an investigation.
Launched in November, ChatGPT is capable of answering complex questions, writing code, crafting sonnets, and composing essays, even aiding students in passing difficult exams. However, its introduction has sparked controversy, with educators worried about cheating and policymakers expressing concerns about the dissemination of false information.
The privacy regulator disclosed that on March 20, ChatGPT experienced a data breach involving user conversations and payment details. The authority stated there was no legal foundation for “the widespread gathering and storage of personal data for the purpose of ‘training’ the algorithms that underpin the platform’s functioning.”
Furthermore, because the app cannot verify users’ ages, the authority said, it risks exposing minors to responses unsuited to their degree of development and self-awareness.
The authority has given OpenAI 20 days to address the concerns raised, under threat of a fine of up to €20 million ($21.7 million) or 4% of the company’s annual worldwide turnover.
This action against ChatGPT in Italy follows Europol’s recent warning about criminals potentially exploiting the app for fraudulent activities and other cybercrimes, including phishing and malware distribution.