
Hallucination

Also known as: AI Hallucination, Confabulation

The generation of false or fabricated information presented as fact; a key reliability challenge in language models.

Hallucination in AI occurs when a model generates information that sounds plausible but is factually incorrect, fabricated, or nonsensical. The term borrows from psychology, but it describes a distinct technical phenomenon: the model is not misperceiving anything, it is producing fluent text that is never checked against ground truth.

Why Hallucinations Happen

  • LLMs predict statistically likely next tokens, not verified facts (see the sketch after this list)
  • Training data contains errors and inconsistencies
  • Models lack access to real-time information
  • Pressure to provide an answer even when uncertain
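
The first point above is the core mechanism, and a toy sketch makes it concrete. Everything in the snippet below is invented for illustration (the prompt, the candidate tokens, and their probabilities); real models score tens of thousands of possible tokens, but the selection step works the same way: the most probable continuation wins whether or not it is true.

    # Toy illustration: next-token prediction optimizes for plausibility, not truth.
    # The prompt, candidate tokens, and probabilities below are invented for this sketch.

    candidate_next_tokens = {
        # After the prompt "The paper was published in", a model scores continuations
        # by how likely they look given its training data, not by consulting a database.
        "2019": 0.41,   # plausible, and happens to be correct in this made-up example
        "2018": 0.35,   # plausible but wrong -- still receives substantial probability
        "Nature": 0.15,
        "the": 0.09,
    }

    def greedy_next_token(distribution: dict[str, float]) -> str:
        """Pick the highest-probability token, as greedy decoding does."""
        return max(distribution, key=distribution.get)

    if __name__ == "__main__":
        prompt = "The paper was published in"
        token = greedy_next_token(candidate_next_tokens)
        # Nothing in this selection step verifies the claim; if "2018" had scored
        # slightly higher, the model would state it with the same fluency.
        print(f"{prompt} {token}")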

Examples

  • Citing nonexistent academic papers with plausible-sounding titles
  • Inventing quotes from real people
  • Creating fictional case law or legal precedents
  • Generating false historical events or dates

Mitigation Strategies

  • Retrieval-augmented generation (RAG), which grounds answers in retrieved source text (see the sketch after this list)
  • Chain-of-thought prompting
  • Uncertainty quantification
  • Human verification workflows
  • Training models to say “I don’t know”
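
As a rough illustration of the RAG idea, the sketch below retrieves text relevant to a question and prepends it to the prompt so the model has sources to ground its answer in. The knowledge base, the keyword-overlap retriever, and the prompt template are all stand-ins invented for this example; a real system would use vector search and an actual model API, neither of which is assumed here.

    # Minimal RAG sketch: ground the prompt in retrieved text before generation.
    # KNOWLEDGE_BASE, the keyword-overlap retriever, and the prompt template are
    # placeholders invented for this example, not any specific library's API.

    KNOWLEDGE_BASE = [
        "The Transformer architecture was introduced in the 2017 paper 'Attention Is All You Need'.",
        "Retrieval-augmented generation supplies source documents to the model at inference time.",
    ]

    def retrieve(question: str, documents: list[str], top_k: int = 1) -> list[str]:
        """Rank documents by naive keyword overlap (a stand-in for real vector search)."""
        q_words = set(question.lower().split())
        scored = sorted(
            documents,
            key=lambda doc: len(q_words & set(doc.lower().split())),
            reverse=True,
        )
        return scored[:top_k]

    def build_grounded_prompt(question: str) -> str:
        """Prepend retrieved context and instruct the model to rely on it."""
        context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
        return (
            "Answer using only the context below. If the answer is not in the "
            "context, say you don't know.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        )

    if __name__ == "__main__":
        # A real system would send this prompt to an LLM; here we just print it.
        print(build_grounded_prompt("When was the Transformer architecture introduced?"))

Grounding the prompt this way narrows the model's task from recalling facts to summarizing supplied text, which is why RAG is one of the most widely used hallucination mitigations.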
