Model Context Protocol (MCP) Complete Guide: AI Tool Standardization and Claude Integration

Author: Youngju Kim (@fjvbn20031)
- Introduction
- MCP Architecture Overview
- MCP Core Primitives
- Building an MCP Server with Python FastMCP
- Building an MCP Server with TypeScript SDK
- stdio vs SSE Transport
- Claude Desktop Integration
- Building an MCP Client
- Agent Systems: LangGraph + MCP
- User Confirmation for Dangerous Tools
- Security Considerations
- MCP vs Traditional Function Calling
- Quiz
- Conclusion
Introduction
For AI to evolve from simple text generators into true agents that use real tools, we need a standardized communication layer between models and external systems. Model Context Protocol (MCP), announced by Anthropic in November 2024, is the open standard that solves exactly this problem.
Just as USB-C connects diverse devices through a single standard, MCP enables LLMs to communicate with any data source or tool through a unified protocol. This guide covers everything: MCP architecture, building production servers, Claude Desktop integration, and agent system design.
MCP Architecture Overview
JSON-RPC 2.0 Foundation
MCP runs on top of JSON-RPC 2.0. The client sends a request, the server responds — a simple structure with a powerful abstraction layer on top.
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": {
      "path": "/tmp/example.txt"
    }
  }
}
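The server replies with a result keyed to the same id. For a tools/call request, the result carries a content array; a minimal sketch (field names per the MCP schema, text is a placeholder):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Hello from /tmp/example.txt"
      }
    ]
  }
}
```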
Hosts / Clients / Servers Triangle
MCP comprises three distinct roles:
- Host: The application running the LLM — Claude Desktop, VS Code, or a custom AI app
- Client: The component inside the Host that manages a 1:1 session with an MCP server
- Server: A lightweight process that exposes tools, resources, and prompts to the outside world
[Host: Claude Desktop]
 ├── [Client] ──── stdio/SSE ──── [MCP Server: filesystem]
 ├── [Client] ──── stdio/SSE ──── [MCP Server: github]
 └── [Client] ──── stdio/SSE ──── [MCP Server: database]
The Host connects to multiple MCP servers simultaneously and exposes each server's tools to the LLM.
MCP Core Primitives
Resources — Read-Only Data
Resources are read-only data that the server exposes to clients. Files, database records, API responses — all accessed through standardized URIs.
- URI format: file:///home/user/docs/report.pdf, db://mydb/users/42
- No side effects — pure data reading only
- Supports both text and binary content
Tools — Function Execution
Tools are functions that the LLM can execute. Unlike Resources, Tools can have side effects.
- File creation/modification, API calls, database writes, etc.
- Input parameters defined with JSON Schema
- Returns results as text or images
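For illustration, a tool advertised through tools/list might be described like this (field names per the MCP schema; inputSchema is plain JSON Schema):

```json
{
  "name": "read_file",
  "description": "Read and return the contents of a file.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "path": { "type": "string", "description": "File path to read" }
    },
    "required": ["path"]
  }
}
```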
Prompts — Reusable Templates
Prompts are prompt templates provided by the server. They standardize patterns that users repeatedly invoke.
@mcp.prompt()
def code_review_prompt(language: str, code: str) -> str:
    return f"Review this {language} code and suggest improvements:\n\n{code}"
Sampling — Server-Side LLM Calls
Sampling is a unique MCP feature that allows servers to call the LLM in reverse through the client. Used when a server needs AI judgment during complex processing.
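As a sketch, a sampling/createMessage request that the server sends back through the client might look like this (message shape follows the MCP specification; the prompt text is a placeholder):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {
        "role": "user",
        "content": { "type": "text", "text": "Summarize this log excerpt." }
      }
    ],
    "maxTokens": 200
  }
}
```

The Host is expected to show this request to the user for approval before forwarding it to the LLM, a point the Security Considerations and quiz sections return to.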
Building an MCP Server with Python FastMCP
Installation and Basic Structure
pip install fastmcp
FastMCP provides a decorator-based API for building MCP servers concisely.
from fastmcp import FastMCP

mcp = FastMCP("filesystem-server")

@mcp.tool()
def read_file(path: str) -> str:
    """Read and return the contents of a file."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()

@mcp.tool()
def write_file(path: str, content: str) -> str:
    """Write content to a file."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)
    return f"File saved: {path}"

@mcp.resource("file://{path}")
def get_file_resource(path: str) -> str:
    """Expose a file as a resource via URI."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()

if __name__ == "__main__":
    mcp.run()
Adding Directory Navigation Tools
from pathlib import Path

from fastmcp import FastMCP

mcp = FastMCP("filesystem-server")

@mcp.tool()
def list_directory(path: str = ".") -> list[str]:
    """Return a list of files in a directory."""
    p = Path(path)
    if not p.is_dir():
        raise ValueError(f"{path} is not a directory")
    return [str(item) for item in p.iterdir()]

@mcp.tool()
def search_files(directory: str, pattern: str) -> list[str]:
    """Search for files matching a pattern in a directory."""
    p = Path(directory)
    return [str(f) for f in p.rglob(pattern)]

if __name__ == "__main__":
    mcp.run(transport="stdio")
Building an MCP Server with TypeScript SDK
Here we implement an MCP server that exposes database resources using TypeScript.
import { McpServer, ResourceTemplate } from '@modelcontextprotocol/sdk/server/mcp.js'
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
import { z } from 'zod'

const server = new McpServer({
  name: 'database-server',
  version: '1.0.0',
})

// Resource: expose DB records via URI
server.resource(
  'user',
  new ResourceTemplate('db://users/{id}', { list: undefined }),
  async (uri, { id }) => {
    const user = await fetchUserById(id) // your own DB lookup helper
    return {
      contents: [
        {
          uri: uri.href,
          text: JSON.stringify(user, null, 2),
          mimeType: 'application/json',
        },
      ],
    }
  }
)

// Tool: execute SQL query
server.tool('query_db', { sql: z.string().describe('SQL query to execute') }, async ({ sql }) => {
  const results = await executeQuery(sql) // your own query helper
  return {
    content: [
      {
        type: 'text',
        text: JSON.stringify(results, null, 2),
      },
    ],
  }
})

async function main() {
  const transport = new StdioServerTransport()
  await server.connect(transport)
}

main()
stdio vs SSE Transport
stdio Transport
Ideal for inter-process communication on the same machine. Used when Claude Desktop launches the server as a child process.
- Advantages: simple setup, no network required, secure (local only)
- Disadvantages: no remote access, single client only
- Use cases: Claude Desktop, local development, CI pipelines
SSE (Server-Sent Events) Transport
Ideal for HTTP-based remote communication. Used when multiple clients need to connect to a single server.
- Advantages: remote access, multiple clients, web-based integration
- Disadvantages: requires HTTP server setup, authentication implementation needed
- Use cases: team-shared MCP servers, cloud deployments, multi-user environments
# SSE server startup example
if __name__ == "__main__":
    mcp.run(transport="sse", host="0.0.0.0", port=8000)
Claude Desktop Integration
Configuring claude_desktop_config.json
On macOS, edit ~/Library/Application Support/Claude/claude_desktop_config.json.
{
  "mcpServers": {
    "filesystem": {
      "command": "python",
      "args": ["/path/to/filesystem_server.py"],
      "env": {
        "ALLOWED_DIRS": "/home/user/documents,/tmp"
      }
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
      }
    },
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "/path/to/database.db"]
    }
  }
}
Verifying the Local Server
Restart Claude Desktop, and the connected servers' tools become available automatically. Check the tool icon at the bottom of the chat window to see the list of available MCP tools.
Building an MCP Client
You can build your own MCP client to connect to servers directly.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    server_params = StdioServerParameters(
        command="python",
        args=["filesystem_server.py"],
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize
            await session.initialize()

            # List available tools
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

            # Call a tool
            result = await session.call_tool(
                "read_file",
                arguments={"path": "/tmp/test.txt"}
            )
            print("Result:", result.content[0].text)

asyncio.run(main())
Agent Systems: LangGraph + MCP
Connecting LangGraph with MCP lets a LangGraph agent use tools from MCP servers directly.
import asyncio

from langchain_anthropic import ChatAnthropic
from langchain_mcp_adapters.tools import load_mcp_tools
from langgraph.prebuilt import create_react_agent
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def create_mcp_agent():
    model = ChatAnthropic(model="claude-3-5-sonnet-20241022")
    server_params = StdioServerParameters(
        command="python",
        args=["filesystem_server.py"],
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Convert MCP tools to LangChain tools
            tools = await load_mcp_tools(session)

            # Create a ReAct agent
            agent = create_react_agent(model, tools)
            result = await agent.ainvoke({
                "messages": [{"role": "user", "content": "List the files in /tmp"}]
            })
            return result

asyncio.run(create_mcp_agent())
User Confirmation for Dangerous Tools
Implement a user confirmation pattern for sensitive operations.
import os
from pathlib import Path

from fastmcp import FastMCP
from fastmcp.exceptions import ToolError

mcp = FastMCP("safe-server")

DANGEROUS_PATHS = ["/etc", "/sys", "/proc", "/root"]

@mcp.tool()
def delete_file(path: str, confirmed: bool = False) -> str:
    """
    Delete a file. You must explicitly set confirmed=True before deletion proceeds.

    Args:
        path: Path of the file to delete
        confirmed: Explicitly confirm the deletion intent (default: False)
    """
    abs_path = str(Path(path).resolve())

    # Block dangerous paths
    for danger in DANGEROUS_PATHS:
        if abs_path.startswith(danger):
            raise ToolError(f"Cannot delete files under {danger}")

    # If called without confirmation, prompt the user
    if not confirmed:
        return (
            f"Warning: You are about to permanently delete {abs_path}. "
            f"Call again with confirmed=True to proceed."
        )

    os.remove(abs_path)
    return f"Deleted: {abs_path}"
Security Considerations
Permission Model
MCP servers should follow the principle of least privilege.
- Allow access only to required directories and databases
- Manage allowed path lists via environment variables
- Defend against path traversal attacks
import os
from pathlib import Path

ALLOWED_DIRS = os.getenv("ALLOWED_DIRS", "/tmp").split(",")

def is_path_allowed(path: str) -> bool:
    abs_path = Path(path).resolve()
    # Compare resolved paths component-wise. A raw startswith() check is
    # unsafe: it would accept a sibling like /tmpfoo when /tmp is allowed.
    return any(
        abs_path.is_relative_to(Path(allowed.strip()).resolve())
        for allowed in ALLOWED_DIRS
    )
Sensitive Data Handling
- Inject API keys and passwords via environment variables — never hardcode them
- Never log sensitive response content
- Never run remote MCP servers without TLS
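A minimal sketch of the first two rules; require_secret and redact are illustrative helper names for this example, not part of any MCP SDK:

```python
import os

def require_secret(name: str) -> str:
    """Fetch a required secret from the environment; fail fast if it is missing."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

def redact(secret: str, visible: int = 4) -> str:
    """Mask a secret so it can appear in logs without leaking its value."""
    return secret[:visible] + "*" * max(0, len(secret) - visible)

# Demo only: in practice the launcher (e.g. claude_desktop_config.json "env") sets this
os.environ.setdefault("GITHUB_PERSONAL_ACCESS_TOKEN", "ghp_example123")

token = require_secret("GITHUB_PERSONAL_ACCESS_TOKEN")
print(redact(token))  # ghp_**********
```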
Sandboxing
In production, isolate MCP servers in containers or virtual environments.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY server.py .
# Run as a dedicated non-root user
RUN useradd -m mcpuser
USER mcpuser
CMD ["python", "server.py"]
Tool Call Validation
Always re-validate LLM-generated tool call parameters on the server side. Prompt injection attacks can inject malicious parameters via the LLM.
from pathlib import Path

from pydantic import BaseModel, validator

class FileReadParams(BaseModel):
    path: str

    @validator("path")
    def validate_path(cls, v):
        # Check the raw input first: resolve() silently collapses "..",
        # so checking afterwards would never fire
        if ".." in v:
            raise ValueError("Path traversal attempt detected")
        p = Path(v).resolve()
        if not is_path_allowed(str(p)):  # from the permission-model section above
            raise ValueError("Path not in allowed list")
        return str(p)
MCP vs Traditional Function Calling
| Feature | OpenAI Function Calling | MCP |
|---|---|---|
| Standardization | Vendor lock-in | Open standard |
| Reusability | Must reimplement per model | One server works with all compatible models |
| Data access | Tools only | Resources expose data directly |
| Reverse LLM calls | Not available | Sampling lets servers call the LLM |
| Transports | HTTP only | stdio and SSE supported |
Quiz
Q1. What is the core difference between Resources and Tools in MCP?
Answer: Resources are for read-only data access with no side effects, while Tools are for function execution that can modify system state.
Explanation: Resources expose data through URIs like file:// or db:// without changing anything in the system. Tools perform operations like file writes, API calls, or database mutations that can alter state. The LLM uses Resources when it needs to read data and Tools when it needs to act or change something.
Q2. Why is MCP better than OpenAI function calling for tool standardization?
Answer: MCP is a vendor-neutral open standard, so a single MCP server implementation works across Claude, GPT, open-source models, and any other compatible LLM without rewriting.
Explanation: OpenAI function calling is tied to the OpenAI API format — switching models means rewriting all your tools. MCP servers are protocol-level services, completely decoupled from any specific model. They also add capabilities absent from function calling: Resources for data exposure, Prompts for template sharing, and Sampling for reverse LLM calls.
Q3. When should you use stdio vs SSE transport?
Answer: Use stdio for local single-client environments (Claude Desktop, personal development), and SSE for remote or multi-client environments (shared team servers, cloud deployments).
Explanation: stdio has the parent process directly launch the server as a child, requiring no network configuration and keeping things simple and secure. SSE runs an HTTP server that multiple clients can connect to simultaneously, making it ideal for cloud deployments or shared team infrastructure. SSE deployments require additional setup for authentication and TLS.
Q4. How does MCP Sampling work, and what are its security considerations?
Answer: Sampling lets an MCP server call the LLM in reverse through the client/host. It must only execute after user review and approval to prevent abuse.
Explanation: In the standard flow, the LLM calls server tools. Sampling reverses this: the server asks the LLM to "analyze this text" or "make a decision." The Host acts as an intermediary that inspects this request and only forwards it to the LLM after user approval. Without this gate, a malicious server could make unlimited LLM calls or extract sensitive information through carefully crafted Sampling requests.
Q5. How do you implement a user approval workflow for dangerous tools in an MCP server?
Answer: Use the confirmed parameter pattern: on the first call return a warning, and only execute the real operation when the caller explicitly sets confirmed=True on a second call.
Explanation: The LLM reads the tool's description and decides how to call it. When the first call with confirmed=False returns "I'm about to do X — call again with confirmed=True to proceed," the LLM presents this to the user and waits for their go-ahead. Once the user approves, the LLM re-calls with confirmed=True and the action executes. This pattern should always be paired with dangerous-path blocking and permission checks.
Conclusion
MCP is the USB-C of the AI agent ecosystem. Define the standard once, and connect it to any model and any client. Because Anthropic released it as an open standard, hundreds of MCP servers have already been developed by the community, and major developer tools like VS Code, Cursor, and JetBrains have started supporting MCP natively.
Try building your first server with Python FastMCP in five minutes and connect it to Claude Desktop. You'll immediately feel how powerful it is to have AI directly interact with your filesystem, databases, and APIs.