Austria’s data rights group Noyb has lodged a privacy complaint against OpenAI, alleging that its chatbot ChatGPT provides misinformation about individuals, a potential breach of European Union privacy regulations.
Prominent artificial intelligence (AI) developer OpenAI is at the center of a new privacy complaint initiated by the Austrian data rights advocacy group.
OpenAI Faces Privacy Complaint
On April 29, Noyb filed a complaint alleging that OpenAI has failed to correct inaccurate information provided by its generative AI chatbot, ChatGPT. The group argues that this failure could constitute a breach of privacy laws within the European Union.
According to the group, the complainant, an anonymous public figure, requested information about themselves from OpenAI’s chatbot and received consistently inaccurate information.
OpenAI reportedly denied the requests to correct or delete the data, claiming that doing so is impossible, and also declined to disclose information about its training data sources.
Maartje de Graaf, Noyb’s data protection lawyer, commented on the case in a statement:
“If a system cannot provide correct and transparent outcomes, it should not be used for profiling individuals. Technology must adhere to legal requirements, not the other way around.”
Noyb has submitted its complaint to Austria’s Data Protection Authority, requesting an investigation into OpenAI’s data processing and how it ensures the accuracy of personal data processed by its large language models.
De Graaf added, “It is clear that companies currently fall short of implementing European Union law regarding chatbots like ChatGPT.”
Noyb, also known as the European Center for Digital Rights, operates from Vienna, Austria, aiming to initiate “strategic litigation and media campaigns” in support of the European Union’s General Data Protection Regulation (GDPR).
This is not the first time chatbots have come under scrutiny from researchers or advocacy groups in Europe.
In December 2023, research by two European non-profit organizations found that Microsoft’s Bing AI chatbot, since rebranded as Copilot, was disseminating misleading or false information about local elections in Germany and Switzerland.
The chatbot was reportedly providing incorrect references and misinformation about candidates, election dates, scandals, and voting.
Another example, although not specifically in the EU, involved Google’s Gemini AI chatbot, whose image generator produced inaccurate and misleading imagery.
Google apologized for the incident and announced plans to update its model.