
MCP Complete Guide: Why Model Context Protocol Became the USB-C of AI with 97M Downloads


What Is MCP: The USB-C of AI

One Protocol to Connect Them All

Before USB existed, printers needed parallel ports, keyboards needed PS/2 connectors, and cameras required serial ports. Different cables, different drivers, different connection methods for every device. Just as USB ended that chaos, MCP (Model Context Protocol) has created the connectivity standard for the AI world.

MCP is an open standard protocol for AI models to communicate with external data sources, tools, and services. Anthropic released it as open source in November 2024, and in just over a year, it has become the de facto standard of the AI industry.

The World Before MCP: The N x M Problem

Think back to a world without MCP:

  • ChatGPT needs to access GitHub data? Requires an OpenAI-specific plugin.
  • Claude needs to read Slack messages? Requires an Anthropic-specific integration.
  • Gemini needs to query a database? Requires a Google-specific connector.

With N AI models and M external tools, you needed a total of N x M custom integrations. With 5 AI models and 100 external tools, that meant 500 individual integrations.

MCP reduces this problem to N + M. Each AI model only needs to implement one MCP client, and each tool only needs to implement one MCP server. Like USB, just match one end and everything connects.
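The savings are easy to verify with a throwaway calculation (illustrative only):

```python
def integrations_without_mcp(n_models: int, m_tools: int) -> int:
    # Every model needs a bespoke integration with every tool.
    return n_models * m_tools

def integrations_with_mcp(n_models: int, m_tools: int) -> int:
    # One MCP client per model plus one MCP server per tool.
    return n_models + m_tools

print(integrations_without_mcp(5, 100))  # 500
print(integrations_with_mcp(5, 100))     # 105
```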

The Timeline of Explosive Growth

The growth rate of MCP has been extraordinary:

Date | Event | Significance
Nov 2024 | Anthropic open-sources MCP | Protocol born
Jan 2025 | MCP integrated into Claude Desktop | First major client
Mar 2025 | OpenAI adopts MCP in Agents SDK | Competitor adoption, industry standard confirmed
Apr 2025 | Google DeepMind supports MCP in Agent Development Kit | Big Three complete
Jun 2025 | OAuth 2.0 authentication spec added | Enterprise ready
Nov 2025 | MCP first-anniversary update, enhanced remote server support | Maturity phase
Dec 2025 | Donated to Linux Foundation AAIF | Community governance transition
Mar 2026 | Monthly SDK downloads surpass 97M | Explosive adoption confirmed

Combined monthly downloads of the Python and TypeScript SDKs reach 97 million. The MCP.so marketplace hosts tens of thousands of MCP servers, and a GitHub search for "MCP server" returns thousands of repositories.

The Core Problem MCP Solves

AI models are fundamentally isolated entities. No matter how intelligent a model is, it cannot know information after its training cutoff, cannot access internal company data, and cannot request actions from external systems.

MCP resolves this isolation in three ways:

  1. Resources: Provides data that AI can read (files, DB records, API responses)
  2. Tools: Provides functions that AI can execute (API calls, code execution, file creation)
  3. Prompts: Provides prompt templates optimized for specific tasks

These three elements form the pillars of the MCP protocol. We will examine the architecture in detail in the next section.
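Before diving into the architecture, here is a deliberately simplified mental model of the three pillars in plain Python -- a toy stand-in, not the real SDK, which exchanges JSON-RPC messages rather than direct function calls:

```python
# Toy model of an MCP server's three primitives. Illustrative only --
# the real protocol speaks JSON-RPC over a transport, not Python calls.
class ToyMCPServer:
    def __init__(self):
        self.resources = {}  # uri -> read-only data the AI can load
        self.tools = {}      # name -> function the AI can invoke
        self.prompts = {}    # name -> template guiding the AI

    def read_resource(self, uri):
        return self.resources[uri]

    def call_tool(self, name, **args):
        return self.tools[name](**args)

    def get_prompt(self, name, **args):
        return self.prompts[name].format(**args)

server = ToyMCPServer()
server.resources["db://users/schema"] = '{"id": "int", "name": "text"}'
server.tools["row_count"] = lambda table: 42  # stand-in for a real query
server.prompts["code-review"] = "Review this {language} code for issues."

print(server.call_tool("row_count", table="users"))        # 42
print(server.get_prompt("code-review", language="Python")) # Review this Python code for issues.
```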


Architecture Deep Dive

Client-Server Model

MCP follows a client-server architecture, built on top of the JSON-RPC 2.0 protocol for lightweight and efficient communication.

MCP Architecture Overview

+---------------------------------------------------+
|                Host Application                   |
|   (Claude Desktop, VS Code, IDE, Custom App)      |
|                                                   |
|   +--------------+   +--------------+             |
|   | MCP Client   |   | MCP Client   |   ...       |
|   |      #1      |   |      #2      |             |
|   +-------+------+   +-------+------+             |
|           |                  |                    |
+-----------+------------------+--------------------+
            |                  |
      +-----v-----+      +-----v-----+
      |    MCP    |      |    MCP    |
      |  Server   |      |  Server   |
      |   (Git)   |      |  (Slack)  |
      +-----+-----+      +-----+-----+
            |                  |
      +-----v-----+      +-----v-----+
      |  GitHub   |      |   Slack   |
      |    API    |      |    API    |
      +-----------+      +-----------+

Here is a summary of the key components:

  • Host: The application running MCP clients (Claude Desktop, VS Code, custom apps)
  • Client: A connector within the host that communicates 1:1 with a specific MCP server
  • Server: A service that provides data and functionality from external systems in a standardized format

A single host app can run multiple MCP clients simultaneously. For example, Claude Desktop can connect to the GitHub, Slack, and PostgreSQL MCP servers all at the same time.

Three Core Concepts: Resources, Tools, Prompts

Resources

Resources are read-only data that MCP servers provide to clients. They are identified by URIs.

{
  "resources": [
    {
      "uri": "file:///project/src/main.py",
      "name": "Main source file",
      "mimeType": "text/x-python"
    },
    {
      "uri": "db://users/schema",
      "name": "Users table schema",
      "mimeType": "application/json"
    }
  ]
}

Resources enrich the context of AI models. Code files, database schemas, API documentation, configuration files -- any data that AI needs to reference can be provided as resources.

Tools

Tools are executable functions that AI models can invoke. Think of them as a standardized version of function calling.

{
  "tools": [
    {
      "name": "search_code",
      "description": "Search for code across all repositories",
      "inputSchema": {
        "type": "object",
        "properties": {
          "query": {
            "type": "string",
            "description": "Search query"
          },
          "language": {
            "type": "string",
            "description": "Programming language filter"
          }
        },
        "required": ["query"]
      }
    }
  ]
}

AI models inspect the list of available tools and select the appropriate one based on the user's request. Results are fed back to the model for response generation.
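A client can sanity-check arguments against a tool's inputSchema before invoking it. A minimal sketch that checks only required fields and property names (a real client would use a full JSON Schema validator); the search_code definition mirrors the listing above:

```python
# Minimal argument check against an MCP tool's inputSchema.
# Only "required" entries and property names are verified here.
def validate_tool_args(tool: dict, args: dict) -> list[str]:
    schema = tool["inputSchema"]
    problems = [f"missing required argument: {k}"
                for k in schema.get("required", []) if k not in args]
    problems += [f"unknown argument: {k}"
                 for k in args if k not in schema.get("properties", {})]
    return problems

search_code = {
    "name": "search_code",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"},
                       "language": {"type": "string"}},
        "required": ["query"],
    },
}

print(validate_tool_args(search_code, {"query": "def main"}))  # []
print(validate_tool_args(search_code, {"language": "python"})) # ['missing required argument: query']
```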

Prompts

Prompts are reusable prompt templates optimized for specific tasks.

{
  "prompts": [
    {
      "name": "code-review",
      "description": "Review code for best practices and potential issues",
      "arguments": [
        {
          "name": "language",
          "description": "Programming language",
          "required": true
        }
      ]
    }
  ]
}

Through prompts, domain experts can craft optimal instructions and distribute them via MCP servers.
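Client-side, consuming a prompt amounts to checking its declared arguments and filling in the template. A small sketch using the code-review definition above (the template text itself is invented for illustration):

```python
# Render a prompt template after verifying its required arguments.
def render_prompt(prompt_def: dict, template: str, args: dict) -> str:
    required = [a["name"] for a in prompt_def.get("arguments", []) if a.get("required")]
    missing = [name for name in required if name not in args]
    if missing:
        raise ValueError(f"missing prompt arguments: {missing}")
    return template.format(**args)

code_review = {
    "name": "code-review",
    "arguments": [{"name": "language", "required": True}],
}

text = render_prompt(code_review, "Review the following {language} code:", {"language": "Go"})
print(text)  # Review the following Go code:
```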

Transport: Local vs Remote

MCP supports two transport mechanisms:

stdio (Standard Input/Output)

  • Runs the MCP server as a local process
  • Exchanges JSON-RPC messages via stdin/stdout
  • Simple setup with high security
  • Ideal for personal development environments

Streamable HTTP (formerly SSE)

  • Communicates with remote servers over HTTP
  • Evolved from SSE to Streamable HTTP in the November 2025 update
  • Ideal for team/organization-wide sharing
  • Deployable to Cloudflare, AWS, and more

Transport Comparison

stdio (Local)

+----------+    stdin/stdout     +----------+
|  Client  |<------------------->|  Server  |
+----------+    (local process)  +----------+

Streamable HTTP (Remote)

+----------+    HTTP/HTTPS       +----------+
|  Client  |<------------------->|  Server  |
+----------+    (network)        +----------+
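Over stdio, the spec frames messages as newline-delimited JSON: each JSON-RPC message is serialized onto a single line. A minimal sketch of that framing, using an in-memory stream in place of real stdin/stdout:

```python
import json
from io import StringIO

# Sketch of MCP's stdio framing: messages are newline-delimited JSON,
# so each JSON-RPC message must serialize to a single line.
def write_message(stream, message: dict) -> None:
    stream.write(json.dumps(message) + "\n")

def read_message(stream) -> dict:
    return json.loads(stream.readline())

pipe = StringIO()  # stands in for the stdin/stdout pipe
write_message(pipe, {"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
pipe.seek(0)
print(read_message(pipe))  # {'jsonrpc': '2.0', 'id': 1, 'method': 'tools/list'}
```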

Authentication: OAuth 2.0

The authentication spec added in June 2025 was a critical update enabling enterprise adoption of MCP.

  • Standardized OAuth 2.0 authentication flow
  • Token-based permission management
  • Fine-grained scope control
  • Integration with existing Identity Providers (IdP)

This enables organizations to systematically manage which AI models can access which data.

Protocol Lifecycle

The lifecycle of an MCP connection proceeds as follows:

  1. Initialization: Client connects to server, negotiates protocol version and supported capabilities
  2. Discovery: Client queries the server for lists of resources, tools, and prompts
  3. Execution: Tool calls and resource reads are performed based on AI model requests
  4. Shutdown: Connection cleanup and resource release

An example initialize request:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-11-05",
    "capabilities": {
      "tools": {},
      "resources": {},
      "prompts": {}
    },
    "clientInfo": {
      "name": "claude-desktop",
      "version": "1.0.0"
    }
  }
}

Why MCP Won

Fundamental Solution to the N x M Problem

As described earlier, the N x M problem was the biggest inefficiency in the AI ecosystem.

Before MCP: N x M Integrations

AI Models          External Tools
+---------+        +---------+
| ChatGPT |--------| GitHub  |
|         |--+  +--|         |
+---------+  |  |  +---------+
+---------+  |  |  +---------+
| Claude  |--+--+--| Slack   |
|         |--+--+--|         |
+---------+  |  |  +---------+
+---------+  |  |  +---------+
| Gemini  |--+  +--|   DB    |
|         |--------|         |
+---------+        +---------+
= 3 x 3 = 9 integrations needed

After MCP: N + M Integrations

AI Models        MCP        External Tools
+---------+    +------+    +---------+
| ChatGPT |----|      |----| GitHub  |
+---------+    |      |    +---------+
+---------+    |      |    +---------+
| Claude  |----| MCP  |----| Slack   |
+---------+    |      |    +---------+
+---------+    |      |    +---------+
| Gemini  |----|      |----|   DB    |
+---------+    +------+    +---------+
= 3 + 3 = only 6 integrations needed

This efficiency becomes more dramatic as the ecosystem grows. With 10 AI models and 1,000 external tools, you need only 1,010 integrations instead of 10,000.

OpenAI Adoption: Accepting a Standard Built by a Competitor

In March 2025, OpenAI announced MCP support in their Agents SDK, which sent shockwaves through the industry. The largest competitor had adopted a protocol created by Anthropic.

Sam Altman noted that open standards benefit all participants, and that they want agents to have access to more tools.

The rationale behind this decision was pragmatic:

  • The existing MCP ecosystem could not be ignored
  • Creating a proprietary protocol would fragment the ecosystem
  • Developers were already building MCP servers
  • Standardization grows the entire market, benefiting everyone

Google DeepMind Joins

In April 2025, Google announced MCP support in their Agent Development Kit (ADK). With this, all three major AI companies now supported MCP.

Google joining confirmed that MCP had transitioned from a single company's project to an industry-wide standard.

Linux Foundation Donation: Community Governance

In December 2025, Anthropic donated MCP to the Linux Foundation's AI and Data Foundation (AAIF). This was a pivotal decision:

  • Neutrality: The community, not a single company, manages the standard
  • Sustainability: The protocol evolves independently of Anthropic's business direction
  • Trust: Competitors can confidently participate
  • Governance transparency: Open decision-making processes

This follows the same path as successful open-source projects like Linux, Kubernetes, and Node.js.

MCP vs Function Calling vs LangChain Tools

Aspect | MCP | Function Calling | LangChain Tools
Standardization | Open protocol | Vendor-specific spec | Library-level abstraction
Reusability | Build once, use with any AI | Vendor-locked | LangChain ecosystem only
Transport | stdio, Streamable HTTP | HTTP API | In-process
Discovery | Dynamic tool/resource discovery | Pre-defined required | Pre-defined required
Authentication | OAuth 2.0 standard | Vendor-specific | Custom implementation
State Management | Protocol-level support | None | Library-level
Ecosystem Size | Tens of thousands of servers | Vendor-specific plugins | Hundreds of tools

The biggest differentiator of MCP is vendor independence. An MCP server built once works with Claude, ChatGPT, Gemini, open-source LLMs, or any other AI model.


Building MCP Servers: A Practical Guide

Building a Python Server with FastMCP

FastMCP is a framework for quickly building MCP servers in Python. Let us build a simple weather information server.

# Install packages
pip install fastmcp httpx

# weather_server.py
from fastmcp import FastMCP
import httpx
import json

# Create MCP server
mcp = FastMCP("weather-server")

# Resource: List of supported cities
@mcp.resource("weather://cities")
def list_cities() -> str:
    """Returns the list of available cities."""
    cities = [
        "seoul", "tokyo", "new-york",
        "london", "paris", "berlin"
    ]
    return json.dumps(cities)

# Tool: Get weather
@mcp.tool()
async def get_weather(city: str) -> str:
    """
    Fetches current weather information for a specified city.

    Args:
        city: City name (e.g., seoul, tokyo, new-york)
    """
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"https://wttr.in/{city}",
            params={"format": "j1"}
        )
        data = response.json()
        current = data["current_condition"][0]

        return json.dumps({
            "city": city,
            "temperature_c": current["temp_C"],
            "humidity": current["humidity"],
            "description": current["weatherDesc"][0]["value"],
            "wind_speed_kmh": current["windspeedKmph"],
            "feels_like_c": current["FeelsLikeC"]
        })

# Tool: Get forecast
@mcp.tool()
async def get_forecast(city: str, days: int = 3) -> str:
    """
    Fetches weather forecast for a specified city.

    Args:
        city: City name
        days: Number of forecast days (1-3)
    """
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"https://wttr.in/{city}",
            params={"format": "j1"}
        )
        data = response.json()
        forecasts = []

        for day in data["weather"][:days]:
            forecasts.append({
                "date": day["date"],
                "max_temp_c": day["maxtempC"],
                "min_temp_c": day["mintempC"],
                "description": day["hourly"][4]["weatherDesc"][0]["value"]
            })

        return json.dumps(forecasts)

# Prompt: Weather briefing
@mcp.prompt()
def weather_briefing(city: str) -> str:
    """Prompt for city weather briefing"""
    return f"""Analyze the weather information for the following city and provide a briefing:

City: {city}

Please include:
1. Current temperature and feels-like temperature
2. Humidity and wind conditions
3. 3-day forecast summary
4. Recommendations for going out (umbrella, jacket, etc.)
"""

if __name__ == "__main__":
    mcp.run()

Once running, this server can be immediately connected from Claude Desktop or any other MCP client.

# Test directly
python weather_server.py

Building a Server with the TypeScript SDK

If you prefer TypeScript, you can use the official SDK.

# Initialize project
npm init -y
npm install @modelcontextprotocol/sdk zod

// src/index.ts
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
import { z } from 'zod'

const server = new McpServer({
  name: 'todo-server',
  version: '1.0.0',
})

// In-memory todo store
interface TodoItem {
  id: string
  title: string
  completed: boolean
  createdAt: string
}

const todos: Map<string, TodoItem> = new Map()

// Tool: Add todo
server.tool(
  'add_todo',
  'Add a new todo item',
  {
    title: z.string().describe('The todo item title'),
  },
  async ({ title }) => {
    const id = crypto.randomUUID()
    const todo: TodoItem = {
      id,
      title,
      completed: false,
      createdAt: new Date().toISOString(),
    }
    todos.set(id, todo)

    return {
      content: [
        {
          type: 'text',
          text: JSON.stringify(todo, null, 2),
        },
      ],
    }
  }
)

// Tool: List todos
server.tool('list_todos', 'List all todo items', {}, async () => {
  const allTodos = Array.from(todos.values())
  return {
    content: [
      {
        type: 'text',
        text: JSON.stringify(allTodos, null, 2),
      },
    ],
  }
})

// Tool: Complete todo
server.tool(
  'complete_todo',
  'Mark a todo as completed',
  {
    id: z.string().describe('The todo item ID'),
  },
  async ({ id }) => {
    const todo = todos.get(id)
    if (!todo) {
      return {
        content: [{ type: 'text', text: 'Todo not found' }],
        isError: true,
      }
    }
    todo.completed = true
    return {
      content: [
        {
          type: 'text',
          text: JSON.stringify(todo, null, 2),
        },
      ],
    }
  }
)

// Resource: Todo stats
server.resource('todo-stats', 'todo://stats', async (uri) => {
  const allTodos = Array.from(todos.values())
  const stats = {
    total: allTodos.length,
    completed: allTodos.filter((t) => t.completed).length,
    pending: allTodos.filter((t) => !t.completed).length,
  }

  return {
    contents: [
      {
        uri: uri.href,
        mimeType: 'application/json',
        text: JSON.stringify(stats, null, 2),
      },
    ],
  }
})

// Start server
async function main() {
  const transport = new StdioServerTransport()
  await server.connect(transport)
  console.error('Todo MCP Server running on stdio')
}

main().catch(console.error)

Debugging with MCP Inspector

MCP Inspector is the official tool for testing and debugging MCP servers.

# Run Inspector
npx @modelcontextprotocol/inspector python weather_server.py

When running Inspector, you can perform the following in the browser:

  • View the list of tools, resources, and prompts offered by the server
  • Directly invoke each tool and inspect the results
  • Watch JSON-RPC message logs in real time
  • Debug error messages

Inspector is an essential tool during development. It lets you verify that the server works correctly before connecting any client.

Deploying a Remote Server to Cloudflare

Once local verification is complete, you can deploy remotely. Cloudflare Workers is particularly well-suited for MCP server deployment.

# Start with the Cloudflare MCP server template
npm create cloudflare@latest -- my-mcp-server \
  --template=cloudflare/ai/demos/remote-mcp-server

// src/index.ts (Cloudflare Workers)
import { McpAgent } from 'agents/mcp'
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import { z } from 'zod'

export class MyMCP extends McpAgent {
  server = new McpServer({
    name: 'my-remote-server',
    version: '1.0.0',
  })

  async init() {
    this.server.tool('hello', 'Say hello to someone', { name: z.string() }, async ({ name }) => ({
      content: [{ type: 'text', text: `Hello, ${name}!` }],
    }))
  }
}

export default {
  fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const url = new URL(request.url)
    if (url.pathname === '/sse') {
      return MyMCP.serveSSE('/sse').fetch(request, env, ctx)
    }
    if (url.pathname === '/mcp') {
      return MyMCP.serve('/mcp').fetch(request, env, ctx)
    }
    return new Response('MCP Server Running')
  },
}

# Deploy
npx wrangler deploy

After deployment, you can connect to the MCP server from anywhere via Streamable HTTP. An entire team can share a single remote MCP server.
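How a client points at that deployed URL depends on the client. As a hypothetical configuration fragment (the key names and URL are placeholders; consult your client's documentation for the exact shape):

```json
{
  "mcpServers": {
    "my-remote-server": {
      "url": "https://my-mcp-server.example.workers.dev/mcp"
    }
  }
}
```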


Major MCP Server Ecosystem

GitHub MCP Server

The official GitHub MCP server is one of the most popular MCP servers.

Available tools:

  • Repository search and browsing
  • Issue creation, viewing, and editing
  • Pull request creation and review
  • Code search (regex support)
  • Branch management
  • File read/write

Claude Desktop configuration example:

{
  "mcpServers": {
    "github": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "GITHUB_PERSONAL_ACCESS_TOKEN",
        "ghcr.io/github/github-mcp-server"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
      }
    }
  }
}

Google Drive / Docs MCP

An MCP server connecting Google Workspace with AI.

Key features:

  • Google Drive file search
  • Google Docs content reading
  • Google Sheets data retrieval
  • File metadata queries

Slack MCP

Connects the team communication tool Slack with AI.

Key features:

  • Channel message reading
  • Message search
  • Channel listing
  • Thread context awareness

Database MCP (PostgreSQL, MySQL)

Enables AI to directly access databases for querying and analysis.

Key features:

  • SQL query execution (read-only mode supported)
  • Schema exploration
  • Table listing and column information
  • Query result analysis

Configuration example:

{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://user:password@localhost:5432/mydb"
      ]
    }
  }
}

Filesystem MCP

A basic MCP server providing access to the local file system.

Key features:

  • File/directory reading
  • File search (glob patterns)
  • File creation/modification
  • Directory structure exploration

Configuration example:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/username/projects"]
    }
  }
}

Brave Search MCP

Provides web search capabilities to AI, enabling it to search for and leverage the latest information.

Key features:

  • Web search
  • News search
  • Search result summarization
  • Domain filtering

Beyond these, tens of thousands of MCP servers are available on the MCP.so marketplace and GitHub. MCP servers exist for virtually every major service including Notion, Jira, Linear, Figma, AWS, and GCP.


Claude Code + MCP: Real-World Workflows

Connecting MCP Servers in Claude Code

Claude Code is Anthropic's CLI-based AI coding tool with native MCP support.

# Add MCP server (project scope)
claude mcp add github -- docker run -i --rm \
  -e GITHUB_PERSONAL_ACCESS_TOKEN \
  ghcr.io/github/github-mcp-server

# Add MCP server (global scope)
claude mcp add --scope user postgres -- \
  npx -y @modelcontextprotocol/server-postgres \
  postgresql://localhost:5432/mydb

# List registered MCP servers
claude mcp list

# Remove a specific MCP server
claude mcp remove github

You can also add servers directly to the Claude Code configuration file:

{
  "mcpServers": {
    "github": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "GITHUB_PERSONAL_ACCESS_TOKEN",
        "ghcr.io/github/github-mcp-server"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxxx"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/username/projects"]
    }
  }
}

Real-World Example: From DB Query to PR Creation

The true power of MCP emerges when you combine multiple servers. Let us walk through a real workflow.

Scenario: "Add a last_login_at column to the users table, create a migration, and open a PR."

For this single request, Claude Code operates as follows:

  1. Queries the current users table schema via the PostgreSQL MCP server
  2. Analyzes the existing schema and writes migration SQL
  3. Creates the migration file via the Filesystem MCP server
  4. Updates related model code
  5. Creates a branch, commits, and opens a PR via the GitHub MCP server

The entire process happens automatically within a single conversation. The developer only needs to review the final PR.

Another scenario: "Summarize today's bug reports from Slack and create GitHub issues."

  1. Searches today's channel messages via the Slack MCP server
  2. Filters and summarizes bug-related messages
  3. Creates issues via the GitHub MCP server (with auto-assigned labels and assignees)

MCP is the critical infrastructure that enables AI agents to actually perform work.

MCP Server Debugging Tips

Useful debugging methods when MCP server connections have issues in Claude Code:

# Check MCP server status
claude mcp list

# Check server logs (for Claude Desktop)
# macOS: ~/Library/Logs/Claude/
# Windows: %APPDATA%/Claude/logs/

# Independent testing with Inspector
npx @modelcontextprotocol/inspector your-server-command

Common issues and solutions:

Symptom | Cause | Solution
Server not connecting | Executable path error | Use absolute paths, verify with the which command
Tools not appearing | Initialization failure | Test server independently with Inspector
Timeout errors | Server response delay | Check async handling, adjust timeout settings
Authentication failure | Token expired/invalid | Reset environment variables

MCP Roadmap Every Developer Should Know

2024-2025 Key Milestones

November 2024 - Birth

Anthropic released MCP as open source. Initially it was for Claude Desktop only, but it was designed as an open protocol that anyone could implement.

March 2025 - OpenAI Adoption

OpenAI announced MCP support in their Agents SDK. This was the turning point where MCP transcended being a single company's project and established itself as an industry standard.

June 2025 - Authentication Spec Added

OAuth 2.0-based authentication was added to the MCP specification, enabling safe enterprise use.

  • Token-based authentication flows
  • Fine-grained permission (scope) management
  • Integration with existing Identity Providers (IdP)
  • Audit log support

November 2025 - First Anniversary Major Update

Major updates were announced for MCP's first anniversary:

  • Streamable HTTP: A more efficient remote communication method replacing the previous SSE transport
  • Tool Annotations: Explicit declaration of tool side effects
  • Elicitation: Ability for servers to request additional information from users during execution
  • Structured Output: Defining schemas for tool outputs

December 2025 - Linux Foundation Donation

MCP was donated to the Linux Foundation's AI and Data Foundation (AAIF). This transitioned MCP's governance from Anthropic alone to community-driven.

2026 Outlook

Inter-Agent MCP Communication

Currently, MCP is primarily used for communication between AI models and external tools. In 2026, it is expected to expand to inter-agent communication.

Current: AI Model -> MCP -> External Tool
Future:  AI Agent A -> MCP -> AI Agent B

For example, a coding agent could request test execution from a testing agent via MCP, and the testing agent returns results via MCP.

Multimodal MCP

The ability to transmit images, audio, and video through MCP (not just text) is expected to expand. Some MCP servers already support image resources, and this trend will accelerate.

Enterprise MCP Gateway

Gateway solutions for centrally managing MCP server access in large enterprise environments are emerging:

  • Centralized authentication/authorization
  • Usage monitoring and rate limiting
  • Audit logs
  • Policy-based access control

MCP Registry

An official registry for discovering and installing MCP servers -- similar to npm or PyPI -- is anticipated. Currently MCP.so serves as an unofficial marketplace, but after the Linux Foundation donation, an official registry is being discussed.


Practice Quiz

Test your understanding of MCP.

Q1. What is the "N x M problem" that MCP solves?

Answer: When connecting N AI models to M external tools, a custom integration is needed for each combination, requiring N x M total integrations. MCP provides a standard protocol that reduces this to N + M. Each AI model implements just one MCP client, and each tool implements just one MCP server.

Q2. Explain the differences between MCP's three core concepts: Resources, Tools, and Prompts.

Answer:

  • Resources: Read-only data that AI can access. Provides context through files, DB schemas, API responses, etc. Identified by URIs.
  • Tools: Executable functions that AI can invoke. Performs actions like API calls, data creation/modification, and external system operations.
  • Prompts: Reusable prompt templates optimized for specific tasks. Domain experts can create and distribute optimal instructions.

In short: Resources are for "reading," Tools are for "executing," and Prompts are for "guiding."

Q3. Why did OpenAI adopt MCP, a protocol created by their competitor Anthropic?

Answer: It was a pragmatic decision. First, the existing MCP ecosystem could not be ignored. Second, creating a proprietary protocol would fragment the ecosystem, harming everyone. Third, developers were already building MCP servers, and network effects were in motion. Fourth, standardization grows the entire market, ultimately benefiting OpenAI as well. This follows the same logic as web browsers adopting W3C standards.

Q4. When are MCP's two transport mechanisms (stdio vs Streamable HTTP) each appropriate?

Answer:

  • stdio (Standard I/O): Best for local development environments. Runs the MCP server as a local process, communicating via stdin/stdout. Simple setup, no network required, and high security. Used by individual developers or single-machine environments.
  • Streamable HTTP: Best for remote/team environments. Communicates over HTTP, so it can be deployed to Cloudflare, AWS, etc. for the entire team to share. Enables authentication, access control, and monitoring.

Q5. Why is it significant that MCP was donated to the Linux Foundation?

Answer: The Linux Foundation donation is a critical decision for MCP's long-term success. First, neutrality is guaranteed as the community, not a single company, manages the standard. Second, the protocol's sustainability is ensured regardless of changes in Anthropic's business direction. Third, trust is strengthened as competitors can confidently participate. Fourth, governance transparency is secured through open decision-making processes. This follows the proven path of successful open-source projects like Linux, Kubernetes, and Node.js.


References

  1. MCP Official Site - modelcontextprotocol.io - Spec, documentation, quickstart guides
  2. MCP GitHub Repository - github.com/modelcontextprotocol - Source code, SDKs, examples
  3. Anthropic MCP Announcement Blog - anthropic.com/news/model-context-protocol - November 2024 initial release
  4. OpenAI MCP Adoption Announcement - openai.com/index/new-tools-for-building-agents - March 2025
  5. Google ADK MCP Support - google.github.io/adk-docs - Agent Development Kit documentation
  6. Linux Foundation AAIF Donation Announcement - linuxfoundation.org - December 2025
  7. MCP 1st Anniversary Update - modelcontextprotocol.io/blog - Streamable HTTP, Tool Annotations, etc.
  8. FastMCP Official Docs - gofastmcp.com - Python MCP framework
  9. MCP TypeScript SDK - npmjs.com/package/@modelcontextprotocol/sdk
  10. MCP Python SDK - pypi.org/project/mcp
  11. GitHub MCP Server - github.com/github/github-mcp-server
  12. Cloudflare MCP Deployment Guide - developers.cloudflare.com/agents/guides/remote-mcp-server
  13. MCP Inspector - github.com/modelcontextprotocol/inspector
  14. MCP.so Marketplace - mcp.so - Community MCP server directory
  15. Claude Code MCP Docs - docs.anthropic.com/en/docs/claude-code - MCP integration guide
  16. MCP Spec Documentation - spec.modelcontextprotocol.io - Detailed protocol specification