# What MCP's 2026 roadmap means for context delivery

## Key takeaway
MCP resources are read-only, URI-addressable content that MCP servers expose for application-controlled loading, separate from tools, which the model invokes autonomously and which cost tokens on every turn. The protocol ships three primitives (tools, resources, and prompts), but the ecosystem has converged on tools-only servers, and that convergence is the actual root cause of the context-bloat problem harnesses keep patching at their layer. Resources let you expose reference data, schemas, docs, and structured context without paying for them on every model turn, and they shift context delivery from the model deciding when to fetch to the application deciding what is loaded.
Every piece of MCP coverage in 2026 has been about tools. Tool design, tool security, tool sprawl, tool selection, tool poisoning. The protocol’s other primitives, resources and prompts, almost never come up. That is a problem, because the tool-bloat complaint that drives most of the criticism is not a protocol failure. It is a server-design failure that the protocol already shipped a fix for.
MCP resources are read-only, URI-addressable content that an MCP server exposes for application-controlled loading. The host or user decides when a resource is fetched, not the model. That single contract change is what makes resources the right primitive for most of what people currently wrap in tools, and what makes a tools-only MCP server a half-built MCP server.
The protocol defines three primitives. They are not interchangeable, and each one answers a different question.
| Primitive | Who controls it | When it loads | Token cost on every turn |
|---|---|---|---|
| Tools | The model | When the model decides to call | Yes, full schema in system prompt |
| Resources | The application or user | When explicitly requested via URI | No, zero until fetched |
| Prompts | The user | When the user triggers a templated workflow | No, only when invoked |
The asymmetry in the last column is the whole point. A tool an MCP server declares is paid for on every model turn while the server is mounted, because the host has to keep the tool’s name, description, and argument schema in the model’s context so the model knows it exists. A resource costs nothing until the application reads it. A server that exposes ten tools and one resource pays for the ten tools on every turn. A server that exposes one tool and ten resources pays for one tool on every turn and pays for resources only when they are actually used.
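The asymmetry can be made concrete with a small sketch. The per-tool figure of 350 tokens is an assumed average, not a measurement, and the function names are illustrative:

```python
# Sketch of the per-turn cost asymmetry, with an assumed average token
# figure. Every declared tool keeps its schema in context on every turn;
# declared resources cost nothing until one is actually fetched.

TOKENS_PER_TOOL_SCHEMA = 350  # assumed average: name + description + JSON schema


def per_turn_cost(num_tools: int, resources_fetched: int = 0,
                  tokens_per_resource: int = 0) -> int:
    """Context tokens a mounted server contributes to a single model turn."""
    tool_tax = num_tools * TOKENS_PER_TOOL_SCHEMA        # paid whether or not any tool is called
    resource_cost = resources_fetched * tokens_per_resource  # paid only when content is loaded
    return tool_tax + resource_cost


# Ten tools and one resource vs. one tool and ten resources, nothing fetched yet.
# Declared-but-unfetched resources do not appear in the formula at all.
tools_heavy = per_turn_cost(num_tools=10)
resource_heavy = per_turn_cost(num_tools=1)
print(tools_heavy, resource_heavy)  # 3500 350
```

The ten-to-one ratio holds on every single turn, which is why the second design wins even if its resources are occasionally large when loaded.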
Most MCP servers shipped today are in the first category. That is the gap.
The convergence on tool-only servers is overdetermined. Three reasons matter.
SDK quickstarts optimize for tools. Open the official MCP Python SDK README, the TypeScript SDK README, or any of the community starter templates. Every one of them shows a `@server.tool()` decorator within the first ten lines. Resources appear later, in a section most readers skim. New server authors learn the protocol through its tool primitive and rarely revisit the other two.
The function-calling mental model carried over. Before MCP, the dominant way to give an LLM access to external systems was provider-specific function calling: OpenAI’s tool calls, Anthropic’s tool use, Gemini’s function calling. The shape was always the same. The model decides when to invoke a named function with structured arguments. MCP’s tool primitive inherits that shape exactly, which means anyone with experience building function-calling integrations can write an MCP tool server on day one. Resources require a different mental model (URIs, hierarchical addressing, application-controlled loading) that the prior art does not teach.
Resources demand more design work upfront. A tool needs a name, a description, and an argument schema. A resource needs a URI scheme, a hierarchy, navigation semantics, and a story for how the client will discover and address it. That is not protocol complexity. It is the cost of designing a navigable surface instead of a function call, and most teams ship the simpler primitive first and never come back.
The result is an ecosystem where Claude Code reports more than 5,800 MCP servers in active registries and 97 million monthly SDK downloads, with most of those servers exposing tools and nothing else.
The cost is not theoretical. Every tool description is a token tax on every turn, and the tax compounds across servers.
Consider the worst common case. A team mounts a filesystem MCP server, a Postgres MCP server, a Slack MCP server, and a GitHub MCP server. Each server exposes 10 to 30 tools. Each tool description averages 200 to 500 tokens once arguments and JSON-schema definitions are included. The math runs into the tens of thousands of tokens before the user has typed a single prompt. Harness teams have responded by building tool gating, tool search, capability filtering, and `tools/list_changed` handling at the client layer. Claude Code’s tool-search feature defers tool definitions until needed. Cursor lets users enable and disable specific tools per session. These are real fixes, but they are working around a server-design problem at the client.
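The worst-case arithmetic, written out. The per-server and per-tool ranges are the assumed averages stated above, not measurements:

```python
# The four-server worst case as arithmetic. Ranges are assumed averages.
servers = ["filesystem", "postgres", "slack", "github"]
tools_per_server = (10, 30)   # low / high estimate
tokens_per_tool = (200, 500)  # low / high estimate, JSON schema included

low = len(servers) * tools_per_server[0] * tokens_per_tool[0]
high = len(servers) * tools_per_server[1] * tokens_per_tool[1]
print(f"{low:,} to {high:,} tokens per turn before the first prompt")
# 8,000 to 60,000 tokens per turn before the first prompt
```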
The deeper issue is that tools encode model-controlled retrieval, and most of what production agents need is not model-controlled. It is reference. Codebase files. Database schemas. API documentation. Configuration. Tickets. Issues. Notes. None of that requires the model to autonomously decide when to fetch. The application or the user decides, then loads the content into context as input. Wrapping that material in tools forces the model into a decision loop it does not need to be in, costs tokens for every tool description on every turn, and replaces a simple “load this” pattern with “model, please consider calling tool X”. “Tools should do one job each” is the right design principle inside the tool layer. The bigger principle is that not everything belongs in the tool layer.
Resources move reference content out of the tool layer entirely. They are URI-addressable, application-controlled, and absent from context until loaded. That changes four things about how an MCP server can be designed.
URI addressability lets the server describe a hierarchy instead of a function set. A docs server can expose `docs://introduction`, `docs://api/auth`, `docs://api/errors`, and let the client navigate by URI. A codebase server can expose `codebase://src/components/Button.tsx`. A schema server can expose `schema://users` and `schema://users/columns`. The structure is part of the contract, not buried inside a tool’s response.
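A minimal sketch of what this looks like from the client side. The entries model a hypothetical docs server; the field names (`uri`, `name`, `mimeType`) follow the shape of an MCP `resources/list` result:

```python
# Hypothetical docs server: the hierarchy is visible in the URIs themselves.
# Field names (uri, name, mimeType) follow the MCP resources/list shape.
doc_resources = [
    {"uri": "docs://introduction", "name": "Introduction",        "mimeType": "text/markdown"},
    {"uri": "docs://api/auth",     "name": "API: Authentication", "mimeType": "text/markdown"},
    {"uri": "docs://api/errors",   "name": "API: Error codes",    "mimeType": "text/markdown"},
]

# A client can build an autocomplete tree purely from URI structure,
# without calling any tool or spending any model-context tokens.
api_pages = [r["uri"] for r in doc_resources if r["uri"].startswith("docs://api/")]
print(api_pages)  # ['docs://api/auth', 'docs://api/errors']
```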
Application-controlled loading removes the model from the decision. The user or the host application picks what to load. In Claude Code and Cursor, that surfaces as the `@` mention pattern: typing `@` autocompletes available resources from all connected servers, and resources appear in the autocomplete menu alongside files. The model sees the content only after it has been loaded into the conversation. That is the correct contract for almost all reference material.
Live updates without context cost. The `notifications/resources/list_changed` message lets a server tell connected clients to refresh the available resource list. New entries appear in autocomplete; the user can reference them immediately. None of that costs context until something is actually loaded.
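On the wire this is as small as signals get. A sketch of the message, assuming the standard JSON-RPC notification shape (no `id`, no required params):

```python
import json

# The list_changed message is a plain JSON-RPC notification. It carries no
# payload: it only tells the client to re-fetch resources/list.
notification = {
    "jsonrpc": "2.0",
    "method": "notifications/resources/list_changed",
}
wire_form = json.dumps(notification)
print(wire_form)
```

The cheapness is the point: the server can fire this on every filesystem event or schema migration without ever touching the model’s context window.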
Scale to thousands of items. The “Structured Context Engineering for File-Native Agentic Systems” study (9,649 experiments across 11 models and schemas from 10 to 10,000 tables) found that frontier models gain 2.7 percentage points of accuracy when context is delivered as a navigable file-style surface versus stuffed into a prompt. Format choice barely affected accuracy. Navigation structure did. Resources are the MCP-native version of that file-style surface. They scale where prompt-resident tool schemas cannot.
The shape that falls out of taking resources seriously:
Read-only reference data is a resource, never a tool. If the only thing your tool does is return content from an address the client could have asked for directly, you have written a resource as a tool. Convert it. Examples: `getFileContents(path)` becomes `file://{path}`. `getSchema(table)` becomes `schema://{table}`. `getDocPage(slug)` becomes `docs://{slug}`. The model no longer has to call the tool to get the content. The user @-mentions the URI, the host loads it, and the model sees the content as input. You stop paying for the tool description on every turn.
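The conversion is mostly a URI template. A sketch, assuming the hypothetical `getSchema(table)` tool above; the matcher and names are illustrative, not any SDK’s API:

```python
import re

# Sketch: a server that used to expose a getSchema(table) tool instead
# registers the template schema://{table} and resolves incoming URIs
# against it. The old tool body becomes the resource read handler.
TEMPLATE = re.compile(r"^schema://(?P<table>[A-Za-z_][A-Za-z0-9_]*)$")


def read_resource(uri: str) -> str:
    match = TEMPLATE.match(uri)
    if match is None:
        raise ValueError(f"no resource at {uri}")
    table = match.group("table")
    # Stand-in for the old getSchema(table) body:
    return f"-- DDL for table {table} would be returned here"


print(read_resource("schema://users"))
```

Nothing about the fetch logic changed; what changed is who triggers it and that no tool schema sits in the model’s context while nobody is asking for a schema.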
Tools should take parameters and produce side effects. A tool is the right primitive when the call needs arguments the user cannot reasonably predict or when the call mutates state. `runQuery(sql)`, `sendMessage(channel, body)`, `createIssue(title, body, labels)`. Anything where the model needs to compose the call from the conversation. Read-only fetches with predictable URIs do not need this shape.
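What a tool that genuinely earns its place looks like as a declaration. The `name`/`description`/`inputSchema` shape follows MCP’s `tools/list` format; the description text is illustrative:

```python
# runQuery earns its tool-ness: the model composes the SQL from the
# conversation, and the call can mutate state the application cannot predict.
run_query_tool = {
    "name": "runQuery",
    "description": "Execute a SQL statement against the primary database.",
    "inputSchema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}

# This small declaration is the recurring per-turn price of keeping the
# tool mounted, which is why it should stay this tight.
print(run_query_tool["name"])
```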
Design the URI hierarchy first. A resource scheme is a public contract. Start by sketching the address space. `repo://{owner}/{name}/file/{path}`, `repo://{owner}/{name}/issue/{number}`, `repo://{owner}/{name}/pull/{number}`. The hierarchy gives clients something to autocomplete against and lets the server expose new entries without renegotiating its tool list.
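Sketching the address space can literally be a function over the entities the server knows about. A toy version, with hypothetical names:

```python
# Sketch: deriving the repo:// address space from known entities.
# The hierarchy, not a function list, is the public contract.
def repo_uris(owner: str, name: str, files: list, issues: list) -> list:
    base = f"repo://{owner}/{name}"
    uris = [f"{base}/file/{path}" for path in files]
    uris += [f"{base}/issue/{number}" for number in issues]
    return uris


uris = repo_uris("acme", "api", files=["README.md"], issues=[41, 42])
print(uris)
# ['repo://acme/api/file/README.md', 'repo://acme/api/issue/41', 'repo://acme/api/issue/42']
```

When a new issue lands, the server regenerates this list and fires `notifications/resources/list_changed`; no tool list renegotiation is involved.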
Use `list_changed` aggressively. When the underlying state changes (new file, new ticket, updated schema), send `notifications/resources/list_changed`. The client refreshes its resource list. The user sees the new entries immediately. The model sees them only when loaded.
Surface Prompts for reusable workflows. Prompts are the third primitive, and they are also underused. A prompt is a templated workflow the user can invoke, with optional arguments. They are the right home for “run our standard code review checklist on this PR” or “draft a release note from these commits”. Keep them in the server, not in client-side configuration files; that is how they stay portable.
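A prompt is, at its core, a server-side template that a `prompts/get`-style call fills with user-supplied arguments. A minimal sketch; the template wording and function names are illustrative:

```python
from string import Template

# Sketch of a prompt as a server-side templated workflow. The server owns
# the template; the client supplies arguments; the rendered text is what
# actually reaches the model.
REVIEW_PROMPT = Template(
    "Run our standard code review checklist on PR #$pr_number. "
    "Focus areas: $focus."
)


def render_prompt(pr_number: int, focus: str) -> str:
    return REVIEW_PROMPT.substitute(pr_number=pr_number, focus=focus)


print(render_prompt(128, "error handling, test coverage"))
```

Because the template lives in the server, every connected client gets the same workflow, and updating it is a server deploy rather than an edit to each user’s configuration.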
Keep tool descriptions tight. When you do declare a tool, optimize its description for the model’s decision to call it, not for the human reading the docs. Long-form usage notes belong behind a documentation resource the model can load if it needs them. The closing section covers where the protocol is heading on this; the design discipline starts now.
Each Wire container is exposed as a remote MCP server on a per-organization subdomain. The container ships five named tools for the active surface (search, navigate, write, delete, analyze), while the same underlying entries are also addressable as resources, so a host can @-mention a specific entry directly without going through a tool call. The behavior worth noting is that the same per-container state is reachable through both tool-driven and resource-style access, which keeps the model out of decision loops it does not need to be in and pays full context cost only when the agent actually navigates.
The protocol is evolving toward making this design pattern explicit. The active Specification Enhancement Proposal (SEP) for typed progressive disclosure would let servers expose typed SDK or library operations through a meta-tool with `searchTools` and `getMode` modes, so even tools themselves can be lazily disclosed in the way resources already are. Resource subscriptions, resource templates, and richer URI semantics are on the same path. Skills, the Claude-Code-side primitive that ships file-based procedures with progressive disclosure, are an adjacent expression of the same idea on the harness side.
The shared direction is clear. Context delivery is moving away from “the model sees every capability description on every turn” and toward “the model sees descriptions when they are relevant, and content when it is loaded”. MCP’s tool primitive is the part that everyone has built. The resources primitive is the other half. If your MCP server only exposes tools, you have built half a server. The cheap version of the fix is to take the read-only content currently behind a tool and re-expose it under a URI. The expensive version is to keep shipping every reference fetch as a tool and pay for it on every turn for the rest of the server’s life.
Tools are how the model acts. Resources are how the model reads. Build both.
Sources: Model Context Protocol: Resources spec · Anthropic: Equipping agents for the real world with Agent Skills · Claude Code MCP docs · WorkOS: Understanding MCP features · Speakeasy: What are MCP resources? · Solo.io: MCP Progressive Disclosure · MCP issue #1888: Progressive Disclosure for Typed Library Discovery · arXiv 2602.05447: Structured Context Engineering for File-Native Agentic Systems · Webfuse: MCP Cheat Sheet 2026