
Hallucination

LLM & Language Models

When an AI confidently generates information that is factually incorrect, fabricated, or nonsensical — presenting it as truth.

Hallucination is arguably the biggest practical problem with current AI. Language models don't 'know' things the way humans do — they predict the most likely next token based on patterns. Sometimes those patterns lead to plausible-sounding but completely false statements.
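The "predict the most likely next token" behavior above can be sketched in a few lines. This is an illustrative toy, not a real model: the candidate tokens and logit scores are invented for the example, and a real LLM would score tens of thousands of tokens at each step.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution.
    Lower temperature sharpens it; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next tokens
# after the prompt "The capital of Australia is" -- numbers invented
# for illustration, not real model output.
tokens = ["Canberra", "Sydney", "Melbourne"]
logits = [2.0, 1.5, 0.5]

probs = softmax(logits)
# The model samples by probability, not by truth: a frequent-but-wrong
# completion like "Sydney" still gets picked some fraction of the time.
choice = random.choices(tokens, weights=probs, k=1)[0]
```

Nothing in this loop consults a fact store; a pattern that was common in training data wins even when it is false, which is exactly why hallucinations sound fluent.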

Common hallucinations include: inventing fake citations (with realistic-sounding author names, journals, and dates), fabricating statistics, creating non-existent products or features, and confidently stating incorrect historical facts. The AI isn't lying — it genuinely can't distinguish its trained patterns from verified facts.

Mitigation strategies include: RAG (grounding responses in retrieved documents), web search integration, chain-of-thought reasoning, and citation requirements. Perplexity's approach of always citing sources is a direct response to the hallucination problem. As a user, always verify critical facts from AI output.
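The RAG strategy above can be sketched minimally: retrieve relevant passages, then build a prompt that confines the model to those sources and demands citations. This is a hedged illustration, not a production design; real systems retrieve with embedding search rather than the naive word-overlap ranking used here.

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive word overlap with the query.
    (Real RAG systems use vector/embedding search instead.)"""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query, passages):
    """Embed retrieved passages and require inline citations,
    with an explicit escape hatch instead of guessing."""
    context = "\n".join(f"[{p['id']}] {p['text']}" for p in passages)
    return (
        "Answer using ONLY the sources below. Cite the source id "
        "after each claim. If the sources do not contain the answer, "
        "say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

# Tiny invented corpus for demonstration.
docs = [
    {"id": "doc1", "text": "RAG grounds model output in retrieved documents."},
    {"id": "doc2", "text": "Temperature controls sampling randomness."},
]
query = "How does RAG reduce hallucination?"
prompt = build_grounded_prompt(query, retrieve(query, docs))
```

The two key moves are visible in the prompt itself: the model is told to cite a source id for every claim (so answers are checkable) and given permission to say "I don't know" (so it isn't pushed to fabricate).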

Real-World Example

If you ask an AI to cite research papers, it might generate a perfect-looking citation — real-sounding journal, plausible author names — for a paper that doesn't exist. Always verify.



FAQ

What is Hallucination?

When an AI confidently generates information that is factually incorrect, fabricated, or nonsensical — presenting it as truth.

How is Hallucination used in practice?

If you ask an AI to cite research papers, it might generate a perfect-looking citation — real-sounding journal, plausible author names — for a paper that doesn't exist. Always verify.

What concepts are related to Hallucination?

Key related concepts include RAG (Retrieval-Augmented Generation), Grounding, LLM (Large Language Model), and Temperature. Understanding these together gives a more complete picture of how Hallucination fits into the AI landscape.