Definition
What is Context Poisoning?
An attack in which malicious or false information is planted into an AI agent's memory, RAG index, or tool outputs so the model treats it as ground truth.
Unlike prompt injection, which ends when a session closes, context poisoning persists: the payload is written into sources the agent reads on every future run, such as a vector index, long-term memory store, or shared multi-agent workspace. It is classified as ASI06 in the OWASP Top 10 for Agentic Applications 2026. The root cause is a context engineering failure, since most agent pipelines ingest, store, and retrieve content without provenance tracking, source isolation, or trust boundaries.
The practice of deliberately designing, structuring, and managing the information provided to AI models to improve output quality and relevance.
A technique that retrieves relevant documents or data at inference time and injects them into the model's context window before generating a response.
An autonomous software program that uses a large language model to plan and execute multi-step tasks.
An architecture where multiple AI agents collaborate on a task, each with its own context window, tools, and responsibilities.
A portable, shareable unit of organized context (documents, data, and structured information) made accessible to AI agents through MCP tools.
All terms
View full glossaryPut context into practice
Create your first context container and connect it to your AI tools in minutes.
Create Your First Container