Hallucination
Also known as: AI Hallucination, Confabulation
When an AI model generates false or fabricated information and presents it as fact; a key reliability challenge in language models.
In AI, hallucination refers to a model generating information that sounds plausible but is factually incorrect, fabricated, or nonsensical. The term borrows from psychology but describes a distinct technical phenomenon.
Why Hallucinations Happen
- LLMs predict statistically likely next tokens, not verified facts (see the sketch after this list)
- Training data contains errors and inconsistencies
- Models lack access to real-time information
- Pressure to provide an answer even when uncertain
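To make the first point concrete, here is a minimal sketch of greedy next-token selection, not any real model: candidates are ranked purely by probability, and nothing in the loop checks facts. The prompt and probability table are invented for illustration.

```python
# Toy illustration of next-token prediction: candidates are ranked by
# likelihood alone, and nothing in this code verifies facts.
# The prompt and probability table below are invented for illustration.

next_token_probs = {
    "The capital of Australia is": {
        "Sydney": 0.46,     # fluent and common in training data, but wrong
        "Canberra": 0.41,   # correct
        "Melbourne": 0.13,
    },
}

def greedy_next_token(prompt: str) -> str:
    """Return the single most likely continuation; there is no fact-checking step."""
    candidates = next_token_probs[prompt]
    return max(candidates, key=candidates.get)

if __name__ == "__main__":
    prompt = "The capital of Australia is"
    print(prompt, greedy_next_token(prompt))  # -> "Sydney": confident and wrong
```

The same mechanic that produces fluent prose also produces fluent errors, because the objective rewards likelihood, not truth.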
Examples
- Citing nonexistent academic papers with plausible-sounding titles
- Inventing quotes from real people
- Creating fictional case law or legal precedents
- Generating false historical events or dates
Mitigation Strategies
- Retrieval-augmented generation (RAG), which grounds answers in retrieved documents (see the sketch after this list)
- Chain-of-thought prompting
- Uncertainty quantification
- Human verification workflows
- Training models to say “I don’t know”
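As a rough illustration of the first and last items, the sketch below combines naive retrieval with an "I don't know" fallback. The corpus, the keyword-overlap scoring, and the generate() stub are assumptions for illustration only; a real system would prompt an LLM and instruct it to answer solely from the retrieved context.

```python
# Minimal retrieval-augmented generation (RAG) scaffold with an
# "I don't know" fallback. The corpus, scoring, and generate() stub
# are illustrative assumptions, not any specific library's API.

CORPUS = [
    "Canberra is the capital city of Australia.",
    "The Great Barrier Reef lies off the coast of Queensland.",
]

def retrieve(question: str, k: int = 1, min_overlap: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(doc.lower().split())), doc) for doc in CORPUS]
    scored = [(score, doc) for score, doc in scored if score >= min_overlap]
    return [doc for _, doc in sorted(scored, reverse=True)[:k]]

def generate(question: str, context: list[str]) -> str:
    """Stand-in for an LLM call; the retrieved context would be placed in the prompt."""
    if not context:
        return "I don't know."  # refuse rather than guess
    return f"Based on: {context[0]}"

if __name__ == "__main__":
    q1 = "What is the capital of Australia?"
    print(generate(q1, retrieve(q1)))   # grounded answer from the corpus
    q2 = "Who won the 2031 World Cup?"
    print(generate(q2, retrieve(q2)))   # nothing relevant retrieved -> refusal
```

The design point is that grounding and refusal work together: retrieval supplies checkable evidence, and the fallback keeps the model from inventing an answer when no evidence is found.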