Definition

What is MCP (Model Context Protocol)?

Last updated

An open protocol, developed by Anthropic, that standardizes how AI applications provide context and tools to language models.

MCP defines a common interface for AI tools like Claude Desktop or Cursor to connect to external data sources, APIs, and capabilities. Instead of every AI tool building custom integrations with every backend, MCP gives both sides a shared protocol. The trade-off: MCP standardizes the transport but leaves the context flowing through it up to whoever implements the server.

  • Open protocol from Anthropic for connecting LLM applications to external tools and data.
  • Standardizes how clients discover tools, call them, and receive structured responses.
  • Adopted across Claude Desktop, Cursor, Cline, Zed, and a growing list of AI tools.
  • Does not define what good context looks like; server implementers decide what the model sees.
  • Most production MCP failures are context engineering problems, not protocol problems.

How MCP works

MCP has three parts: a host (the AI application, like Claude Desktop), one or more clients inside the host (each one a connection to a single MCP server), and the servers themselves (the things exposing tools or data). The host spawns a client per server it wants to talk to; each client speaks the MCP protocol to its server over a single transport.

Each server declares:

  • Tools it exposes, with names, descriptions, and JSON-schema argument definitions.
  • Resources the model can read (files, database rows, API responses).
  • Prompts the host can surface to the user as templated starting points.

At runtime, the client lists available tools to the host, the model decides which to call, the client forwards the call to the server, the server executes, and the result returns to the model’s context window. The protocol is transport-agnostic: stdio, SSE, and streamable HTTP bindings all work.

The design goal was to stop every AI app from reinventing its own plugin system. Build an MCP server once, and it works with every MCP-compatible client.

Why MCP matters

Before MCP, connecting AI tools to external systems meant per-vendor integrations: one plugin format for ChatGPT, another for Claude, custom work for every IDE. Each integration was throwaway work that rarely transferred.

MCP collapses this into one protocol. A database server, a filesystem server, or an internal knowledge-base server can be written once and used across tools. That matters for three reasons:

  • Context portability: the same context follows the user across AI tools instead of being trapped in one vendor’s walled garden.
  • Ecosystem leverage: a community server for a SaaS product benefits every MCP-compatible client simultaneously.
  • Separation of concerns: the AI product focuses on reasoning; the server focuses on domain data and tool surface.

Common misconceptions about MCP

  • “MCP fixes context.” It standardizes the transport. What comes through still needs to be designed: how big are responses, are they structured, is scope enforced per user? The most common MCP production failures are context engineering problems.
  • “MCP is a security layer.” Authentication scopes who can call a tool. It does not validate what the tool returns. Tool descriptions and external content both enter the context window as trusted input unless you add your own perimeter.
  • “Bigger tools are better tools.” An MCP tool that returns a 50KB raw API payload will blow out the context window on a single call. Shaping outputs, paginating, and summarizing at the server is usually the difference between a useful tool and an expensive one.
  • “MCP replaces OpenAPI.” It doesn’t. MCP is how AI clients call tools. OpenAPI is how humans document REST APIs. Many MCP servers wrap OpenAPI endpoints, but MCP adds the affordances LLMs need (descriptions optimized for tool selection, typed responses, discovery).

MCP and Wire

Every Wire container ships with an MCP server. You get five built-in tools (wire_explore, wire_search, wire_navigate, wire_write, wire_delete) plus any custom tools generated from your container’s content. Connect any MCP client (Claude Desktop, Cursor, Cline) by pointing it at {org-slug}.mcp.usewire.io/container/{id}/mcp. The server handles auth via OAuth 2.1 with RFC 8707 resource indicators, enforces per-tool visibility, and shapes responses to keep context compact. You author context; Wire exposes it as MCP.

FAQ

Frequently asked questions

Common questions about MCP (Model Context Protocol).

What does MCP actually standardize?
The transport and handshake. Clients discover which tools a server exposes, call them with structured arguments, and receive structured responses. MCP also defines prompt templates and resource exposure. What it does not define is how to shape tool responses, how to scope access, or how to keep outputs compact enough for the model to reason over.
Who uses MCP today?
Anthropic's Claude Desktop was the first major host. Cursor, Cline, Zed, Goose, and a growing number of IDEs and agent frameworks now support MCP as hosts. Thousands of community servers exist for databases, filesystems, SaaS tools, knowledge bases, and internal systems.
Why do MCP integrations fail in production?
A March 2026 study of 3,282 MCP bug reports found 88% of faults fall into server/tool configuration, server/host configuration, and server setting categories. The most common issue reported by practitioners (66.67%) is tool response handling: the tool runs, returns something, and the model can't work with it. This is a context engineering failure, not a protocol failure.
Is MCP a security boundary?
No. MCP authenticates connections and can scope what tools a client sees, but anything a tool returns enters the model's context window without a trust perimeter. Instructions embedded in third-party data arrive alongside your own system prompts, which is why prompt injection and context poisoning remain open problems for MCP-based agents.
How is MCP different from a plugin system or a function-calling API?
Plugin systems and provider-specific function calling are usually tied to one AI product. MCP is model- and vendor-neutral: the same server can serve Claude, Cursor, or any other MCP client. Anthropic designed it as an open standard with that portability in mind.

Put context into practice

Create your first context container and connect it to your AI tools in minutes.

Create Your First Container