Hallucination
Definition
When an AI model generates plausible-sounding but factually incorrect, fabricated, or unsupported information, presenting it with the same confidence as accurate content.
Hallucination is one of the most significant challenges facing deployed AI systems. Language models can confidently cite nonexistent research papers, invent statistics, describe events that never happened, or attribute quotes to people who never said them. This occurs because LLMs generate text based on statistical patterns rather than grounded knowledge — they produce the most likely next tokens without verifying factual accuracy. Mitigation strategies include Retrieval-Augmented Generation (grounding responses in retrieved documents), fine-tuning for factual accuracy, asking models to express uncertainty, and building verification systems. Despite improvements, hallucination remains an unsolved problem and a major barrier to deploying AI in high-stakes domains like healthcare, law, and finance where factual accuracy is critical.
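The grounding idea behind Retrieval-Augmented Generation can be sketched in a few lines. This is a toy illustration, not a production system: the corpus, the keyword-overlap scorer, and the prompt wording are all assumptions for demonstration; real RAG pipelines use vector embeddings for retrieval and send the grounded prompt to an actual LLM.

```python
# Toy sketch of the retrieval-and-grounding step in RAG.
# Assumptions: a tiny in-memory corpus and naive keyword-overlap scoring
# stand in for a real vector database and embedding model.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = {w.strip(".,?!").lower() for w in query.split()}

    def score(doc: str) -> int:
        d_words = {w.strip(".,?!").lower() for w in doc.split()}
        return len(q_words & d_words)

    return sorted(corpus, key=score, reverse=True)[:k]

def build_grounded_prompt(query: str, docs: list[str]) -> str:
    """Instruct the model to answer only from retrieved context,
    and to admit ignorance rather than fabricate."""
    context = "\n".join(f"- {d}" for d in docs)
    return (
        "Answer using ONLY the context below. "
        'If the context is insufficient, reply "I don\'t know."\n\n'
        f"Context:\n{context}\n\nQuestion: {query}"
    )

corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "Paris is the capital of France.",
    "The Great Wall of China is over 21,000 km long.",
]
query = "How tall is the Eiffel Tower?"
prompt = build_grounded_prompt(query, retrieve(query, corpus))
```

Because the prompt both supplies verifiable context and gives the model an explicit "I don't know" escape hatch, it attacks hallucination from two sides: grounding and permission to express uncertainty.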
Related Terms
AI Alignment
The research field focused on ensuring AI systems behave in accordance with human values, intentions, and goals.
Large Language Model
A neural network with billions of parameters trained on massive text datasets, capable of understanding and generating human-like text.
Guardrails
Safety mechanisms and filters built around AI systems to prevent harmful, inappropriate, or off-topic outputs.
RAG (Retrieval-Augmented Generation)
A technique that enhances LLM responses by first retrieving relevant documents from an external knowledge base and grounding generation in them.