Definition
What is an AI Hallucination?
A confidently stated but factually incorrect output produced by a language model when it lacks grounded context for a claim.
Hallucinations occur when a model fills gaps in its knowledge with plausible-sounding text instead of deferring or retrieving. The most effective mitigation is context engineering: giving the model the specific, typed, current information it needs to answer, and a clear way to say 'I don't know.' Wire reduces hallucinations by exposing structured containers as MCP tools, so agents retrieve rather than guess.
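Below is a minimal sketch of the retrieve-rather-than-guess pattern using the TypeScript MCP SDK. The import paths and the server.tool registration follow @modelcontextprotocol/sdk as commonly documented, but your SDK version may differ; the container contents, the lookup_fact tool name, and the key format are hypothetical illustrations, not Wire's actual API.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical grounded store standing in for a structured context container.
const container = new Map<string, string>([
  ["pricing.plan.team", "Team plan is $49 per seat per month (placeholder example)."],
]);

const server = new McpServer({ name: "context-container", version: "1.0.0" });

// Expose the container as a tool so the agent retrieves facts instead of guessing.
server.tool(
  "lookup_fact",
  { key: z.string().describe("Container key to look up") },
  async ({ key }) => {
    const fact = container.get(key);
    return {
      content: [
        {
          type: "text",
          // An explicit miss gives the model a grounded way to say "I don't know."
          text: fact ?? `No entry for "${key}" in this container; answer that you don't know.`,
        },
      ],
    };
  }
);

// Serve over stdio so any MCP-capable agent can connect.
await server.connect(new StdioServerTransport());
```

Because the tool returns an explicit miss rather than silence, the agent's instructions can tell it to defer whenever the container has no entry, which is the "clear way to say 'I don't know'" described above.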
Further reading
Articles about AI Hallucination
GPT-5.4-pro hallucinates more than GPT-5.4-nano
Vectara's 2026 benchmark shows OpenAI's flagship GPT-5.4-pro hallucinates at 8.3% while its nano variant stays at 3.1%. The reasoning-model tradeoff, explained.
How context engineering reduces AI hallucinations
Most AI inaccuracies in production are context quality failures, not model fabrications. Here's the research on what context engineering actually changes.
Why AI Hallucinations Are a Context Problem
GPT-5.2 hallucinates at 10.8%, o3-pro at 23.3%. The fix has less to do with better models and more to do with context engineering. Here's the research.
Put context into practice
Create your first context container and connect it to your AI tools in minutes.
Create Your First Container