What is Hallucination?
Hallucination: the tendency of an AI model to generate false, nonsensical, or unverified information and present it as fact.
Hallucinations occur because LLMs generate text by predicting statistically likely token sequences, not by consulting factual knowledge. They are most common when models are asked about niche topics, recent events, or precise numerical data. Retrieval-augmented generation (RAG) and other grounding techniques are the primary defenses.
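The sketch below shows the basic shape of grounding: retrieve relevant snippets and constrain the model to answer only from them. It is a minimal, self-contained illustration; the corpus, the keyword-overlap retriever, and the function names are assumptions for demonstration, not a specific library's API. Real systems typically retrieve with embeddings or a search index rather than word overlap.

```python
# Minimal RAG-style grounding sketch (illustrative names, toy keyword retriever).
def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus snippets by how many question words they contain."""
    q_words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda doc: -len(q_words & set(doc.lower().split())))
    return ranked[:k]

def build_grounded_prompt(question: str, snippets: list[str]) -> str:
    """Assemble a prompt that ties the answer to the retrieved context."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

corpus = [
    "The Q3 report lists revenue of $4.2M.",
    "The support portal was migrated in June.",
    "Grounded answers cite the source documents they rely on.",
]
question = "What was Q3 revenue?"
print(build_grounded_prompt(question, retrieve(question, corpus)))
```

The key design choice is the instruction to refuse when the context is insufficient: without it, the model will still improvise an answer when retrieval comes back empty.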
Frequently Asked Questions
Can hallucinations be completely eliminated?
Not entirely with current technology. They can be significantly reduced through RAG, grounding, temperature adjustments, and human-in-the-loop validation.
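Of these, temperature is the simplest lever to show in code. The sketch below assumes the OpenAI Python SDK with an illustrative model name and an API key in the environment; the same parameter exists in most LLM APIs. A low temperature makes outputs more deterministic but does not by itself prevent hallucination.

```python
# Hedged sketch: lowering sampling temperature with the OpenAI Python SDK.
# Model name and prompts are illustrative; OPENAI_API_KEY must be set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",   # illustrative model name
    temperature=0.0,       # favor high-probability tokens over creative ones
    messages=[
        {"role": "system", "content": "Answer only from well-established facts."},
        {"role": "user", "content": "When was the company founded?"},
    ],
)
print(response.choices[0].message.content)
```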
How do I detect hallucinations?
Cross-reference AI outputs against source documents. Automated fact-checking pipelines and confidence scoring can flag likely hallucinations before they reach end users.
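As a rough illustration of cross-referencing, the sketch below flags output sentences whose content words have little overlap with the source documents. Production pipelines usually use an NLI model or embedding similarity instead; the function names, the toy data, and the 0.6 threshold are all assumptions.

```python
# Illustrative cross-referencing check: flag low-support sentences by keyword overlap.
import re

def support_score(sentence: str, sources: list[str]) -> float:
    """Fraction of the sentence's tokens that also appear in the source documents."""
    words = set(re.findall(r"[a-z0-9$]+", sentence.lower()))
    if not words:
        return 1.0
    source_words = set(re.findall(r"[a-z0-9$]+", " ".join(sources).lower()))
    return len(words & source_words) / len(words)

def flag_hallucinations(answer: str, sources: list[str], threshold: float = 0.6) -> list[str]:
    """Return sentences whose support score falls below the threshold."""
    sentences = re.split(r"(?<=[.!?])\s+", answer.strip())
    return [s for s in sentences if support_score(s, sources) < threshold]

sources = ["The Q3 report lists revenue of $4.2M and 312 new customers."]
answer = "Q3 revenue was $4.2M. The company also opened an office in Berlin."
print(flag_hallucinations(answer, sources))  # the unsupported Berlin claim is flagged
```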
Are some models more prone to hallucination?
Yes. Larger models, and systems that pair a model with RAG, tend to hallucinate less, while any model asked about topics outside its training data hallucinates more frequently.