OpenAI recently faced a privacy complaint in Norway over false information generated by ChatGPT. The case was brought with the support of the privacy advocacy group Noyb on behalf of the complainant, Arve Hjalmar Holmen, who was shocked and angry to discover that ChatGPT had falsely claimed he had been convicted of murdering two of his children and attempting to kill a third.
Past privacy complaints about ChatGPT have mainly involved basic personal data errors, such as an inaccurate date of birth or biographical details. A key problem is that OpenAI does not provide an effective way for individuals to correct false information the AI generates about them; typically, it offers only to block responses to prompts concerning the person. Under the EU General Data Protection Regulation (GDPR), however, Europeans have a range of data rights, including the right to have inaccurate personal data corrected.
Noyb pointed out that the GDPR requires personal data to be accurate and gives users the right to request correction of inaccurate information. Noyb's lawyer Joakim Söderberg said that simply adding a small disclaimer at the bottom stating that "ChatGPT may make mistakes" is not enough. Under the GDPR, it is the responsibility of AI developers to ensure that the information their systems generate does not spread seriously false content.
GDPR violations can result in fines of up to 4% of global annual turnover. In the spring of 2023, Italy's data protection regulator temporarily blocked access to ChatGPT, a move that prompted OpenAI to adjust its user information disclosures. In recent years, however, European privacy regulators have taken a more cautious approach to generative AI while searching for suitable regulatory solutions.
Noyb's new complaint is intended to draw regulators' attention to the potential dangers of AI-generated false information. The group shared a screenshot of an interaction with ChatGPT in which the AI produced a completely false and disturbing history in response to a question about Holmen. The incident is not isolated: Noyb also points to cases in which other users have suffered similar harm from misinformation.
Although ChatGPT stopped making the false allegations about Holmen after a model update, Noyb and Holmen remain concerned that the erroneous information may still be retained within the AI model. Noyb has filed a complaint with the Norwegian Data Protection Authority (Datatilsynet) and hopes the regulator will investigate the matter.
Key points:
Noyb is supporting a Norwegian individual's privacy complaint against OpenAI, accusing ChatGPT of generating false information about him.
Under the GDPR, personal data must be accurate, and OpenAI failed to meet this requirement.
Noyb hopes the complaint will draw regulators' attention to the problem of AI-generated false information.