Definition
What is MCP (Model Context Protocol)?
An open protocol, developed by Anthropic, that standardizes how AI applications provide context and tools to language models.
MCP defines a common interface for AI tools like Claude Desktop or Cursor to connect to external data sources, APIs, and capabilities. Instead of every AI tool building custom integrations with every backend, MCP gives both sides a shared protocol. The trade-off: MCP standardizes the transport, but what flows through it is left to whoever implements the server.
- Open protocol from Anthropic for connecting LLM applications to external tools and data.
- Standardizes how clients discover tools, call them, and receive structured responses.
- Adopted across Claude Desktop, Cursor, Cline, Zed, and a growing list of AI tools.
- Does not define what good context looks like; server implementers decide what the model sees.
- Most production MCP failures are context engineering problems, not protocol problems.
How MCP works
MCP has three parts: a host (the AI application, like Claude Desktop), one or more clients inside the host (each one a connection to a single MCP server), and the servers themselves (the processes exposing tools or data). The host spawns a client per server it wants to talk to; each client speaks the MCP protocol to its server over a single transport.
Each server declares:
- Tools it exposes, with names, descriptions, and JSON-schema argument definitions.
- Resources the model can read (files, database rows, API responses).
- Prompts the host can surface to the user as templated starting points.
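The tool declarations above are plain JSON. As a sketch, here is what a single entry in a server's tool listing looks like: a name, a model-facing description, and a JSON Schema for the arguments. The field names follow the MCP specification's tool shape; the weather tool itself is hypothetical.

```python
import json

# A hypothetical "get_weather" tool, declared the way an MCP server
# lists it: the description is what the model reads when deciding
# whether to call it, and inputSchema constrains the arguments.
get_weather_tool = {
    "name": "get_weather",
    "description": "Get current weather for a city. Returns temperature and conditions.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Berlin'"},
            "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

print(json.dumps(get_weather_tool, indent=2))
```

Note that the description does double duty: it is documentation for the model, and it directly drives tool selection.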
At runtime, the client lists available tools to the host, the model decides which to call, the client forwards the call to the server, the server executes, and the result returns to the model’s context window. The protocol is transport-agnostic: stdio, SSE, and streamable HTTP bindings all work.
The design goal was to stop every AI app from reinventing its own plugin system. Build an MCP server once, and it works with every MCP-compatible client.
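The runtime loop above can be reduced to its wire messages. MCP speaks JSON-RPC 2.0; the dicts below mirror the request and result shapes from the spec, reusing a hypothetical `get_weather` tool.

```python
import json

# The model decided to call a tool; the client sends a JSON-RPC
# tools/call request to the server on the model's behalf.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}

# The server executes and returns structured content blocks, which the
# client feeds back into the model's context window.
call_result = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "content": [{"type": "text", "text": "Berlin: 14°C, overcast"}],
        "isError": False,
    },
}

# Responses are matched to requests by JSON-RPC id.
assert call_result["id"] == call_request["id"]
print(json.dumps(call_request))
```

Because the messages are transport-agnostic JSON-RPC, the same exchange works identically over stdio or streamable HTTP.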
Why MCP matters
Before MCP, connecting AI tools to external systems meant per-vendor integrations: one plugin format for ChatGPT, another for Claude, custom work for every IDE. Each integration was throwaway work that rarely transferred.
MCP collapses this into one protocol. A database server, a filesystem server, or an internal knowledge-base server can be written once and used across tools. That matters for three reasons:
- Context portability: the same context follows the user across AI tools instead of being trapped in one vendor’s walled garden.
- Ecosystem leverage: a community server for a SaaS product benefits every MCP-compatible client simultaneously.
- Separation of concerns: the AI product focuses on reasoning; the server focuses on domain data and tool surface.
Common misconceptions about MCP
- “MCP fixes context.” It standardizes the transport. What comes through still needs to be designed: how big are responses, are they structured, is scope enforced per user? The most common MCP production failures are context engineering problems.
- “MCP is a security layer.” Authentication scopes who can call a tool. It does not validate what the tool returns. Tool descriptions and external content both enter the context window as trusted input unless you add your own perimeter.
- “Bigger tools are better tools.” An MCP tool that returns a 50KB raw API payload will blow out the context window on a single call. Shaping outputs, paginating, and summarizing at the server is usually the difference between a useful tool and an expensive one.
- “MCP replaces OpenAPI.” It doesn’t. MCP is how AI clients call tools. OpenAPI is how humans document REST APIs. Many MCP servers wrap OpenAPI endpoints, but MCP adds the affordances LLMs need (descriptions optimized for tool selection, typed responses, discovery).
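The "bigger tools" point above is easiest to see in code. The sketch below is a hypothetical server-side helper (not part of any MCP SDK): instead of returning a raw API payload, it projects each row down to a few fields and paginates, so one tool call cannot flood the context window.

```python
import json

def shape_response(rows, page=0, page_size=10, fields=("id", "name", "status")):
    """Project each row to a few fields and return one page plus a cursor."""
    start = page * page_size
    chunk = [{f: row.get(f) for f in fields} for row in rows[start:start + page_size]]
    return {
        "items": chunk,
        "total": len(rows),
        # A cursor the model can pass back to fetch the next page.
        "next_page": page + 1 if start + page_size < len(rows) else None,
    }

# Simulate a raw API payload where each row carries a large blob.
rows = [{"id": i, "name": f"item-{i}", "status": "ok", "blob": "x" * 5000}
        for i in range(35)]
shaped = shape_response(rows, page=0)
print(len(json.dumps(shaped)), "bytes instead of", len(json.dumps(rows)))
```

Doing this at the server, rather than hoping the model ignores the noise, is the difference between a useful tool and an expensive one.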
MCP and Wire
Every Wire container ships with an MCP server. You get five built-in tools (wire_explore, wire_search, wire_navigate, wire_write, wire_delete) plus any custom tools generated from your container’s content. Connect any MCP client (Claude Desktop, Cursor, Cline) by pointing it at {org-slug}.mcp.usewire.io/container/{id}/mcp. The server handles auth via OAuth 2.1 with RFC 8707 resource indicators, enforces per-tool visibility, and shapes responses to keep context compact. You author context; Wire exposes it as MCP.
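As one illustration of connecting a client, a Claude Desktop-style configuration might look like the sketch below. The `mcpServers` key and the `mcp-remote` bridge are assumptions about one client's setup, not part of MCP itself; check your client's documentation for its remote-server syntax, and substitute your own org slug and container id for the placeholders.

```json
{
  "mcpServers": {
    "wire": {
      "command": "npx",
      "args": ["mcp-remote", "https://{org-slug}.mcp.usewire.io/container/{id}/mcp"]
    }
  }
}
```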
FAQ
Frequently asked questions
Common questions about MCP (Model Context Protocol).
What does MCP actually standardize?
Who uses MCP today?
Why do MCP integrations fail in production?
Is MCP a security boundary?
How is MCP different from a plugin system or a function-calling API?
Further reading
Articles about MCP (Model Context Protocol)
Tool poisoning: how MCP tool descriptions hijack agents
Tool poisoning hides instructions inside MCP tool descriptions the agent reads as trusted context. The MCPTox benchmark recorded a 72.8% attack success rate.
Tool-based agent memory: why 2026 benchmarks favor it
Tool-based agent memory exposes store, retrieve, and navigate as callable MCP tools. 2026 benchmarks from Mem0, Memanto, and Wire show why the pattern wins.
Provenance is a context engineering primitive, not a trust score
Retrieval provenance for AI agents isn't an audit log or a trust verdict. It's structural metadata (source, position, time, edges) agents use to plan.
One job per tool: why adding wire_navigate cut agent calls 24%
We restructured Wire's MCP surface from 2 overloaded tools to 3 single-purpose ones. The counterintuitive result: adding a tool cut total calls 24%.
Put context into practice
Create your first context container and connect it to your AI tools in minutes.
Create Your First Container