2025.06.26

AI AND HALLUCINATION
The development of artificial intelligence has brought fascinating opportunities, but it also carries serious societal risks. A striking example is the case of Norwegian citizen Arve Hjalmar Holmen, about whom a language model falsely claimed that he had killed his two children.
Holmen was merely curious to see what AI would say about him and asked ChatGPT: “Who is Arve Hjalmar Holmen?” The response was shocking: the system provided a detailed but entirely fabricated story stating that Holmen had murdered his two children and was serving a 21-year prison sentence. None of this is true.

This phenomenon, known in technical terms as a “hallucination,” refers to situations where AI generates content that may sound convincing and even logical, but has no basis in reality.

This was the topic of a conversation between Áron Kovács-Nagy and Levente Szabados, Associate Professor at the Frankfurt School of Finance & Management and Senior Consultant at Neuron Solutions.

AI systems were not originally designed to tell the truth, but rather to produce clever, linguistically persuasive responses.

In fact, they are like extremely talented “language actors” who skillfully play the roles assigned to them — even if those roles have nothing to do with reality.
This can be a great advantage when it comes to processing fact-based, verifiable information. But when we let the system run without factual oversight, it will generate catchy but incorrect content based solely on linguistic coherence. Anyone who accepts such output uncritically can be seriously misled.
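To make the idea of "factual oversight" concrete, here is a minimal Python sketch of one possible guardrail pattern: a draft answer is released only if it can be matched against a trusted source, and the system declines otherwise. The generate_answer stub and the TRUSTED_FACTS store are illustrative stand-ins, not part of ChatGPT or any real product.

```python
# A stand-in for a curated, verifiable knowledge base.
TRUSTED_FACTS = {
    "arve hjalmar holmen": "a private Norwegian citizen with no criminal record",
}

def generate_answer(question: str) -> str:
    # Stand-in for a language model call: real models optimize for
    # plausible-sounding text, so without grounding they can fabricate
    # details, as in the case described above.
    return "Arve Hjalmar Holmen murdered his two children."  # entirely fabricated

def grounded_answer(question: str, subject: str) -> str:
    draft = generate_answer(question)
    evidence = TRUSTED_FACTS.get(subject.lower())
    # Deliberately strict rule for illustration: release the draft only
    # if it repeats a vetted statement; otherwise decline rather than guess.
    if evidence is not None and evidence in draft:
        return draft
    return "I cannot verify an answer to that question."

print(grounded_answer("Who is Arve Hjalmar Holmen?", "Arve Hjalmar Holmen"))
# Prints: I cannot verify an answer to that question.
```

Real systems use softer checks, such as retrieval-augmented generation or citation verification, but the trade-off is the same one discussed below: decline too often and the user experience suffers; too rarely and fabrications slip through.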

Challenges for AI Developers
At the same time, AI developers face the serious challenge of building systems that are informative, useful, and responsible. On the one hand, no one wants the answer to every question to be "I cannot answer that"; that would make for a poor user experience. On the other hand, it is equally unacceptable for an AI system to make factually false and offensive claims about someone, even if those claims sound linguistically coherent.

Service providers must therefore strike a balance: deciding when to allow AI systems to operate more freely, and when to impose constraints.
As users, we must develop our critical thinking and engage with these technologies mindfully.

There is always a risk that these systems might falsely accuse someone without any basis.
That is why it is crucial for users to have access to some form of remedy — just like in journalism when incorrect information is published.

You can listen to the full conversation by clicking here.
