Hallucination

Definition

When an AI model generates plausible-sounding but factually incorrect, fabricated, or unsupported information, presenting it with the same confidence as accurate content.

Hallucination is one of the most significant challenges facing deployed AI systems. Language models can confidently cite nonexistent research papers, invent statistics, describe events that never happened, or attribute quotes to people who never said them. This occurs because LLMs generate text based on statistical patterns rather than grounded knowledge — they produce the most likely next tokens without verifying factual accuracy. Mitigation strategies include Retrieval-Augmented Generation (grounding responses in retrieved documents), fine-tuning for factual accuracy, prompting models to express uncertainty rather than guess, and post-hoc verification that checks generated claims against trusted sources. Despite improvements, hallucination remains an unsolved problem and a major barrier to deploying AI in high-stakes domains like healthcare, law, and finance where factual accuracy is critical.
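
A minimal sketch of the retrieval-augmented approach, assuming a toy word-overlap retriever and a hypothetical `call_llm` model API (a real system would use a vector store and an actual model client). The grounding comes from the prompt contract: the model is instructed to answer only from the retrieved passages and to admit when they do not contain the answer.

```python
# Minimal RAG-style grounding sketch to reduce hallucination.
# `call_llm` is a hypothetical placeholder for a real model API.

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Toy retriever: rank passages by word overlap with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(query_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Constrain the model to retrieved evidence and permit 'I don't know'."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the passages below. "
        "If they do not contain the answer, reply 'I don't know.'\n\n"
        f"Passages:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

corpus = [
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is 8,849 meters tall.",
]
query = "When was the Eiffel Tower completed?"
prompt = build_grounded_prompt(query, retrieve(query, corpus))
print(prompt)
# answer = call_llm(prompt)  # hypothetical model call
```

Note that retrieval only supplies the evidence; the instruction to refuse unsupported answers is what discourages fabrication, and stronger systems add a verification pass that checks each generated claim back against the retrieved text.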
