What is an AI hallucination?
An AI hallucination occurs when a generative large language model (LLM) produces false information or facts that do not correspond to reality. Such hallucinations often appear plausible, at least at first glance, because the model generates fluent, coherent text. It is important to emphasize, however, that LLMs do not lie intentionally; they simply have no understanding of the texts they generate.

How do I recognize AI hallucinations?
The most reliable way to recognize or expose an AI hallucination is to carefully check the generated information against trustworthy sources. As a user of generative AI, you should therefore always bear in mind that it can make mistakes, and proceed according to the “four-eyes principle”: the AI generates, a human verifies.
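The same principle can also be supported in software, for example by automatically comparing AI-generated statements against a trusted reference text and flagging anything unsupported for human review. The following Python sketch illustrates the idea; the reference text, the example claims and the similarity threshold are purely illustrative assumptions, not part of any specific tool or TÜVIT service.

# Minimal sketch of a programmatic "four-eyes" check:
# each AI-generated claim is compared against a trusted reference text,
# and anything without clear support is flagged for human review.

from difflib import SequenceMatcher

TRUSTED_REFERENCE = (
    "The Eiffel Tower was completed in 1889 and is located in Paris. "
    "It is approximately 330 metres tall."
)

def is_supported(claim: str, reference: str, threshold: float = 0.6) -> bool:
    """Return True if the claim roughly matches a sentence in the reference."""
    sentences = [s.strip() for s in reference.split(".") if s.strip()]
    return any(
        SequenceMatcher(None, claim.lower(), s.lower()).ratio() >= threshold
        for s in sentences
    )

ai_claims = [
    "The Eiffel Tower was completed in 1889 and is located in Paris",
    "The Eiffel Tower was designed by Leonardo da Vinci",  # likely hallucination
]

for claim in ai_claims:
    status = "supported" if is_supported(claim, TRUSTED_REFERENCE) else "NEEDS HUMAN REVIEW"
    print(f"{status}: {claim}")

A simple string-similarity check like this only catches claims that contradict or go beyond the reference material; it does not replace the human verification step, but helps prioritize which statements a person should examine first.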
How can AI hallucinations be prevented?
To prevent AI hallucinations and other weaknesses of AI systems, testing by independent third parties is recommended. Ideally, vulnerabilities can then be identified and minimized before applications are officially deployed.
“LLMs are powerful tools, but they also bring challenges such as the phenomenon of AI hallucination. Through comprehensive testing, we support AI developers in identifying and minimizing existing risks as effectively as possible and in further strengthening trust in the technology.”
Vasilios Danos,
Head of AI Security and Trustworthiness for Artificial Intelligence at TÜVIT