
Ever since AI chatbots like ChatGPT became popular, people have been curious to see what these tools might say about them. While many received harmless or factual answers, others encountered alarming inaccuracies. One such case involved Arve Hjalmar Holmen, a Norwegian man whom ChatGPT falsely accused of murdering his own children, according to The Verge.

ChatGPT’s False Story About Holmen

Holmen, like many, simply wanted to see what ChatGPT would say about him. However, instead of a neutral or accurate response, the AI fabricated a disturbing lie. It claimed that Holmen had murdered two of his children, attempted to kill a third, and was serving a 21-year prison sentence.

What made this even more concerning was that ChatGPT mixed in real details, such as his hometown and the correct number and gender of his children. This false yet specific claim left Holmen in shock.

Legal Action Against OpenAI

Realizing the potential consequences of AI-generated misinformation, Holmen sought help from Noyb, an Austrian privacy rights group. Noyb then filed a formal complaint against OpenAI with Datatilsynet, Norway’s data protection authority.

The complaint argues that OpenAI violated privacy laws under the General Data Protection Regulation (GDPR), which requires companies to ensure that personal data is accurate and can be corrected if wrong.

Joakim Söderberg, a lawyer from Noyb, criticized OpenAI’s disclaimer, stating, "You can't just spread false information and then hide behind a tiny disclaimer saying it might not be true."

How Did This Happen?

Unlike search engines like Google, ChatGPT does not retrieve information from verified sources. Instead, it generates responses based on patterns in the data it has been trained on. This can lead to AI hallucinations, where the chatbot confidently presents incorrect or entirely fabricated information as fact.
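To make the mechanism concrete, here is a minimal sketch of how a language model produces text. It uses the small, open GPT-2 model through Hugging Face's transformers library purely for illustration; it is not OpenAI's production system, and the prompt is just an example. The point it demonstrates is that the model continues a prompt one token at a time from learned statistical patterns, with no retrieval or fact-checking step anywhere in the loop.

```python
# Illustrative sketch: autoregressive text generation with an open model.
# This is NOT OpenAI's system; it only shows the general pattern-based
# generation process that underlies chatbots like ChatGPT.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Who is Arve Hjalmar Holmen?"  # example prompt, not a real lookup
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling extends the prompt with statistically plausible tokens;
# nothing here checks the output against any trusted source.
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Whatever such a model prints is a plausible-sounding continuation rather than a verified fact, which is exactly how a fabricated but specific-sounding biography can emerge.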

While OpenAI warns users that ChatGPT can make mistakes, critics argue that such disclaimers are insufficient—especially when the misinformation is as serious as a false murder accusation.

OpenAI’s Response and the Bigger Issue

ChatGPT no longer falsely claims that Holmen is a murderer; when asked about him, it now references news coverage of the legal complaint instead. This suggests that OpenAI has taken steps to block the specific response.

This is not the first complaint Noyb has filed against OpenAI. The group previously raised concerns over ChatGPT listing incorrect birth dates for public figures. While that mistake was minor in comparison, it highlights the same issue: what happens when AI gets facts wrong, and who is responsible for correcting them?

The Future of AI Accountability

Holmen’s case raises serious concerns about AI's potential to spread damaging falsehoods. As AI continues to evolve, the key question remains: How can we ensure AI-generated content does not destroy reputations and lives?
