What is an AI Hallucination?

A confidently stated but factually incorrect output produced by a language model when it lacks grounded context for a claim.

Hallucinations occur when a model fills gaps in its knowledge with plausible-sounding text instead of deferring or retrieving. The most effective mitigation is context engineering: giving the model the specific, typed, current information it needs to answer, and a clear way to say "I don't know." Wire reduces hallucinations by exposing structured containers as MCP tools, so agents retrieve rather than guess.
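As a concrete illustration, here is a minimal sketch of the retrieve-rather-than-guess pattern using the official MCP TypeScript SDK (`@modelcontextprotocol/sdk`). The server name, the `lookup_pricing` tool, and the `pricingFacts` store are hypothetical stand-ins for this example, not Wire's actual API; the point is that the tool returns typed, current facts when it has them and an explicit "not found" when it doesn't.

```typescript
// Minimal sketch: exposing a structured fact store as an MCP tool.
// Names and data below are illustrative assumptions, not Wire's API.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Stand-in for a structured container: typed, current facts the agent can query.
const pricingFacts: Record<string, string> = {
  starter: "Starter plan: $0/month, 1 container",
  team: "Team plan: $29/month, 10 containers",
};

const server = new McpServer({ name: "pricing-context", version: "1.0.0" });

// Expose the container as a tool so the agent retrieves instead of guessing.
server.tool(
  "lookup_pricing",
  "Look up current pricing for a plan. Reports a miss when the plan is unknown.",
  { plan: z.string().describe("Plan name, e.g. 'starter'") },
  async ({ plan }) => {
    const fact = pricingFacts[plan.toLowerCase()];
    return {
      content: [
        {
          type: "text",
          // An explicit miss gives the model a grounded way to say
          // "I don't know" instead of inventing a plausible answer.
          text: fact ?? `No pricing found for plan "${plan}".`,
        },
      ],
    };
  }
);

await server.connect(new StdioServerTransport());
```

Because the tool reports misses explicitly, an agent connected to this server can ground its answer in the returned text, or surface "I don't know" rather than fabricating a price.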

Put context into practice

Create your first context container and connect it to your AI tools in minutes.

Create Your First Container