A current controversy has once again brought the topic of artificial intelligence (AI) and data protection to the forefront of public discussion. A complaint by the Austrian data protection organization Noyb against OpenAI, the company behind the well-known ChatGPT, is causing a stir.
The complaint alleges that the AI system misrepresents personal data and thus violates data protection law, as the “FAZ” reports. The bone of contention is ChatGPT’s incorrect rendering of a public figure’s date of birth. The error appears symptomatic of a fundamental weakness of modern language models, which tend to generate untrue or misleading information, a phenomenon known as “hallucination.”
The European General Data Protection Regulation (GDPR) gives data subjects the right to have incorrect information about them rectified or deleted. According to OpenAI, however, it is currently not possible to adjust the system so that it no longer outputs the incorrect data. This admission raises questions about accountability and about how such errors can be remedied.
The problem is not just theoretical. A lawyer in the US was fined for citing fictitious precedents that an AI had generated, and the airline Air Canada had to honor a discount that its AI chatbot had falsely promised a customer. Such incidents show the real consequences that mistakes by artificial intelligence can have.
Language models are built on statistical relationships in language; when they reproduce facts correctly, that is a side effect of those statistics, not evidence of real knowledge. An accurate answer is often a matter of chance rather than the result of any understanding of the facts, which is how misinformation arises.
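To make this concrete, here is a minimal sketch, not any real production model: a toy bigram predictor that chooses the next word purely from co-occurrence counts in its training text. The training sentences and function name are invented for illustration.

```python
# A toy "language model" that picks the next word purely from statistics.
# It has no concept of truth; it only reproduces what it has counted.
from collections import Counter, defaultdict

training_text = (
    "the capital of france is paris . "
    "the capital of france is lyon . "   # one false sentence in the data
    "the capital of france is paris ."
).split()

# Count which word follows which word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    bigrams[prev][nxt] += 1

def predict(prev_word):
    """Return the statistically most frequent successor, right or wrong."""
    return bigrams[prev_word].most_common(1)[0][0]

print(predict("is"))  # "paris", but only because it dominates the counts
```

If the false sentence occurred more often in the training text, the same code would just as confidently output “lyon”; nothing in the mechanism distinguishes true from false statements.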
Researchers are looking for ways to tackle the problem of AI hallucinations. One approach is to train the models on higher-quality data. Another is to “clean” a language model of false information by subtracting the parameters of a bad model from those of a good one. It has also been proposed to give AI models internet access so that they can look up current facts. Despite all efforts to improve the models, however, hallucinations will continue to occur.
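One plausible reading of the parameter-subtraction idea, resembling what the research literature calls “task arithmetic,” is sketched below. Everything here is an assumption for illustration: the toy weight dictionaries, the function name subtract_bad_behavior, and the scaling factor alpha stand in for real model checkpoints and a real recipe.

```python
import numpy as np

# Illustrative sketch only: remove a "bad" model's influence by subtracting
# its parameter difference from a shared base model out of a good model.
def subtract_bad_behavior(good, bad, base, alpha=1.0):
    """Subtract the direction (bad - base) from the good model's weights."""
    return {name: good[name] - alpha * (bad[name] - base[name]) for name in good}

# Toy weight dictionaries standing in for real checkpoints.
rng = np.random.default_rng(0)
base = {"layer1": rng.normal(size=(4, 4))}
bad = {name: w + 0.5 for name, w in base.items()}   # drifted toward falsehoods
good = {name: w + rng.normal(scale=0.1, size=w.shape) for name, w in base.items()}

cleaned = subtract_bad_behavior(good, bad, base)
print(cleaned["layer1"].shape)  # (4, 4): same architecture, adjusted weights
```

In practice this would be applied to full checkpoints with billions of parameters, and choosing alpha is a trade-off: subtract too little and the false associations remain, subtract too much and useful knowledge is damaged along with them.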