Model Context Protocol (MCP)

★ New · trial
AI / ML · MIT · open-source

What It Does

The Model Context Protocol (MCP) is an open standard introduced by Anthropic in November 2024 that defines how AI assistants (LLMs) connect to external tools, data sources, and services. It provides a standardized JSON-RPC-based protocol with defined transports (stdio for local servers, HTTP with SSE for remote servers) so that any MCP-compatible client (Claude, ChatGPT, Cursor, VS Code, etc.) can discover and invoke tools exposed by any MCP server without custom integration code.
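On the wire, every MCP exchange is a JSON-RPC 2.0 message. A sketch of a hypothetical `tools/call` round trip as plain Python dicts (the `get_weather` tool and its arguments are invented for illustration; the framing follows the JSON-RPC conventions described above):

```python
import json

# Client -> server: invoke a tool by name with arguments.
# "get_weather" is a made-up example tool, not part of the spec.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Berlin"},
    },
}

# Server -> client: the result carries content blocks the model can consume.
response = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request id
    "result": {
        "content": [{"type": "text", "text": "12°C, overcast"}],
    },
}

# Both transports (stdio, Streamable HTTP) carry these messages as JSON.
wire = json.dumps(request)
assert json.loads(wire)["method"] == "tools/call"
```

Because the envelope is plain JSON-RPC, any client that speaks the protocol can drive any server, regardless of implementation language.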

MCP solves the “N×M” integration problem: instead of every AI client needing a custom connector for every external service, both sides implement MCP and interoperate automatically. The protocol defines three core primitives: Tools (functions the AI can call), Resources (data the AI can read), and Prompts (templates for structured interactions).
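As a rough sketch, here is what the three primitives look like when a server advertises them via its listing endpoints (all names, URIs, and descriptions below are invented examples; the field shapes follow the protocol's discovery responses):

```python
# Hypothetical entries a server could return from tools/list,
# resources/list, and prompts/list respectively.
tool = {
    "name": "query_db",                       # invented example tool
    "description": "Run a read-only SQL query",
    "inputSchema": {                          # JSON Schema input contract
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}

resource = {
    "uri": "file:///docs/runbook.md",         # invented example resource
    "name": "Runbook",
    "mimeType": "text/markdown",
}

prompt = {
    "name": "summarize_incident",             # invented example template
    "description": "Template for structured incident summaries",
    "arguments": [{"name": "incident_id", "required": True}],
}
```

A client needs no prior knowledge of these entries: it lists them at connection time and decides which to surface to the model.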

Key Features

  • Standardized tool discovery: Servers declare available tools with JSON Schema-defined input/output contracts; clients discover them dynamically at connection time
  • Multiple transports: stdio (local processes, low-latency), HTTP+SSE (remote servers, OAuth-compatible), with Streamable HTTP as the emerging standard
  • OAuth 2.1 authentication: Specification-level support for OAuth flows, with enterprise IdP integration (Okta, Azure AD) on the Q2 2026 roadmap
  • Cross-vendor adoption: Supported by Anthropic (Claude), OpenAI (ChatGPT), Google DeepMind, Microsoft (VS Code/Copilot), and AWS as of early 2026
  • Open governance: Anthropic donated MCP to the Agentic AI Foundation in early 2026 to ensure vendor-neutral governance
  • Server ecosystem: 10,000+ public MCP servers (5,800+ of them community-built) and 97 million monthly SDK downloads
  • Multi-language SDKs: Official TypeScript and Python SDKs; community SDKs for Go, Rust, Java, C#, and others
  • Resource subscriptions: Clients can subscribe to resource updates for real-time data synchronization
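The resource-subscription bullet above can be sketched as the messages involved (the example URI is invented; method names follow my reading of the spec's resource-update flow):

```python
# Client asks to be notified when a specific resource changes.
subscribe = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "resources/subscribe",
    "params": {"uri": "file:///project/config.json"},  # invented URI
}

# Later, the server pushes a notification. Notifications carry no "id"
# because JSON-RPC 2.0 notifications expect no response.
updated = {
    "jsonrpc": "2.0",
    "method": "notifications/resources/updated",
    "params": {"uri": "file:///project/config.json"},
}

# On receipt, the client re-reads the resource to pick up the new state.
assert "id" not in updated
```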

Use Cases

  • AI-assisted development: IDE integrations (Cursor, VS Code) use MCP to give coding agents access to databases, APIs, documentation, and deployment tools
  • Content management: CMS platforms (Contentful, Sanity) expose MCP servers so AI agents can create, edit, and publish content
  • Enterprise automation: Business platforms (Salesforce, ServiceNow, Workday) use MCP to let AI agents interact with enterprise systems
  • AI agent sandboxing and governance: Tools like Leash by StrongDM intercept MCP traffic to enforce Cedar policies on tool-level access control
  • Local tool integration: Developers run local MCP servers to give AI assistants access to filesystem, databases, and custom scripts without cloud dependencies
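The local-integration pattern above needs no cloud at all: the stdio transport is just newline-delimited JSON-RPC over stdin/stdout. A toy dispatcher handling a tiny subset of the protocol (this is not the official SDK, and the `echo` tool is invented):

```python
import json
import sys

# Invented example tool exposed by this toy server.
TOOLS = [{
    "name": "echo",
    "description": "Echo the input back",
    "inputSchema": {"type": "object",
                    "properties": {"text": {"type": "string"}},
                    "required": ["text"]},
}]

def handle(msg: dict) -> dict:
    """Answer a single JSON-RPC request (toy subset of MCP)."""
    if msg["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif msg["method"] == "tools/call":
        text = msg["params"]["arguments"]["text"]
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": msg.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": msg.get("id"), "result": result}

def serve() -> None:
    """Read one JSON-RPC message per stdin line; reply on stdout."""
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(json.loads(line))), flush=True)
```

A client launches such a server as a child process and exchanges messages over its pipes, which is why local servers add essentially no operational overhead.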

Adoption Level Analysis

Small teams (<20 engineers): Excellent fit. Running a local MCP server via npx is trivial. The protocol adds near-zero operational overhead. Small teams benefit most from the “install once, use from any AI client” model. Community servers for common tools (GitHub, Postgres, file systems) are available out of the box.

Medium orgs (20-200 engineers): Good fit. MCP enables building internal tooling that multiple AI clients can consume. The challenge is governance: without a gateway or policy layer, any developer can connect any MCP server to their AI client, creating shadow integration risk. Teams should establish MCP server registries and permission policies.

Enterprise (200+ engineers): Growing fit with caveats. The protocol is now supported by every major AI provider, which de-risks adoption. However, enterprise requirements like SSO-integrated auth, centralized audit trails, gateway behavior, and configuration portability are still maturing. OAuth 2.1 with enterprise IdP integration is planned for Q2 2026 but not shipped yet. Early enterprise adopters report friction mapping MCP tools to internal systems and managing change across IT, security, and business users.
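One way to curb the shadow-integration risk noted above is a vetted server registry that clients consult before connecting. A minimal illustrative gate (the registry contents and helper function are hypothetical, not part of MCP):

```python
# Hypothetical internal registry: server name -> tools a team may use.
REGISTRY = {
    "github": {"allowed_tools": {"list_issues", "create_pr"}},
    "postgres": {"allowed_tools": {"run_readonly_query"}},
}

def is_call_allowed(server: str, tool: str) -> bool:
    """Permit only vetted servers and explicitly allowed tools."""
    entry = REGISTRY.get(server)
    return entry is not None and tool in entry["allowed_tools"]

# Unvetted servers and unlisted tools are denied by default.
assert is_call_allowed("github", "create_pr")
assert not is_call_allowed("github", "delete_repo")
assert not is_call_allowed("random-server", "anything")
```

In practice this check lives in a gateway or policy engine (Cedar-based tools like Leash fill the same role), but the deny-by-default shape is the important part.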

Alternatives

| Alternative | Key Difference | Prefer when… |
| --- | --- | --- |
| OpenAPI / REST | Established API description standard, no AI-specific features | You’re building traditional API integrations, not AI agent workflows |
| LangChain Tools | Python-centric tool abstraction, tightly coupled to LangChain framework | You’re already in the LangChain ecosystem and don’t need cross-client compatibility |
| Agent Skills Specification | Provides knowledge/instructions to agents (complementary to MCP) | You need to give agents procedural knowledge rather than runtime tool access |
| Custom function calling | Provider-specific (OpenAI functions, Claude tools) | You’re locked to one AI provider and want the simplest integration |

Notes & Caveats

  • Security is the primary concern: Prompt injection, tool poisoning, credential theft, overly broad permissions, and rogue servers are all documented attack vectors. The 2025 Postmark MCP supply chain breach (malicious npm package created a backdoor in an MCP server for email) demonstrated real-world risk. Organizations must treat MCP servers as untrusted code with the same rigor as any third-party dependency.
  • Authentication gaps: Native SSO support is absent as of April 2026. OAuth 2.1 with enterprise IdP integration (Okta, Azure AD) is on the Q2 2026 roadmap but not shipped. Early implementations cut corners on consent flows and expose tokens more broadly than they should.
  • Specification still evolving: The current spec version is 2025-11-25. Breaking changes between spec versions are possible. The transition from SSE to Streamable HTTP transport is ongoing. Early adopters should expect to update MCP server implementations as the spec matures.
  • Anthropic’s strategic position: MCP was originated by Anthropic and donated to the Agentic AI Foundation. While genuinely open (MIT license), Anthropic benefits from being the de facto standards body for AI agent infrastructure. This is smart strategy that produces a real public good, but the governance dynamics should be watched.
  • Audit and observability: The protocol itself does not define audit logging, rate limiting, or observability standards. These must be layered on top (via gateways, policy engines like Leash, or custom middleware). Enterprise deployments without these layers are flying blind.
  • “10,000+ servers” metric needs context: Many public MCP servers are hobbyist or proof-of-concept quality. Production-grade, maintained MCP servers from established vendors are a much smaller subset. Evaluate individual servers on their own merits, not the ecosystem count.
  • MCP servers as attack vectors (Operation Pale Fire): Block’s January 2026 red team exercise on Goose demonstrated that MCP servers and MCP-consuming agents are vulnerable to prompt injection via poisoned tool responses, calendar events, and recipes containing invisible Unicode characters. Organizations deploying MCP infrastructure should treat MCP servers as untrusted code, implement server vetting processes, and deploy prompt injection detection. See Block Goose catalog entry for details.
  • Context-window overhead: criticism is growing. Pi Coding Agent (30.9k GitHub stars) deliberately omits MCP, citing 7-9% context-window consumption per session. Independent reports corroborate this: one developer documented 3 MCP servers consuming 22,000 tokens before any user input; another found 7 servers consuming 67,300 tokens (33.7% of a 200k context). Dynamic toolsets (Speakeasy) and code-execution approaches are emerging responses, claiming 90-98% token reductions. The overhead problem is real but increasingly addressed by the ecosystem.