Model Context Protocol (MCP) Complete Guide: The New Standard for AI-World Integration

There's a moment every AI builder hits: "I wish this agent could access our database." So you write a LangChain tool, wrap it for OpenAI function calling, then re-implement it yet again for Claude. Every model needs its own integration code.

Model Context Protocol (MCP) is Anthropic's answer to this problem, released in late 2024.

What Is MCP?

MCP is an open standard protocol for connecting AI models to external tools and data sources. Think USB-C: regardless of which device you plug in or which port it is, it just works. MCP does the same for AI.

Key properties:

  • Open standard: Created by Anthropic but not locked to any single company
  • Bidirectional: Both client (AI) and server (tool) can send messages
  • JSON-RPC based: Simple, proven, well-understood protocol
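Concretely, every MCP exchange is a JSON-RPC 2.0 frame. A minimal sketch of a tool invocation is shown below; the `search_customers` tool and its arguments are hypothetical, while `jsonrpc`, `id`, `method`, and `params` are the standard JSON-RPC fields:

```python
import json

# Client -> server: invoke a tool. "tools/call" is the MCP method name;
# the tool name and arguments here are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "search_customers", "arguments": {"query": "john"}},
}

# Server -> client: the result, keyed to the same id.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "ID: 7, Name: John Smith"}]},
}

print(json.dumps(request))
```

Because the frames are plain JSON-RPC, any language that can serialize JSON can implement a client or server.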

The World Before MCP

The M×N integration problem was real:

Number of models × Number of tools = Integration code written

Claude + (Slack + GitHub + DB + Notion) = 4 Claude integrations
GPT-4 + (Slack + GitHub + DB + Notion) = 4 GPT-4 integrations
Gemini + (Slack + GitHub + DB + Notion) = 4 Gemini integrations

Total: 12 separate integration codebases

With MCP:

Number of tools = Number of MCP servers (reusable across models)

1 Slack MCP server → works with Claude, GPT-4, Gemini, and any future model
1 GitHub MCP server → same story

Write the server once. Any MCP-compatible AI client picks it up automatically.
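The arithmetic above can be checked directly: per-pair integrations scale multiplicatively, while MCP servers scale with the number of tools alone.

```python
models = 3   # Claude, GPT-4, Gemini
tools = 4    # Slack, GitHub, DB, Notion

# Before MCP: one bespoke integration per (model, tool) pair.
per_pair_integrations = models * tools
print(per_pair_integrations)  # 12

# With MCP: one server per tool, reusable by every MCP-capable client.
mcp_servers = tools
print(mcp_servers)  # 4
```

Add a fourth model and the first number grows by 4; the second doesn't change at all.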

MCP Architecture

┌─────────────────────┐                 ┌──────────────────────┐
│     MCP Client      │◄── JSON-RPC ──► │      MCP Server      │
│  (Claude Desktop,   │                 │  (your tools, DBs,   │
│  Cursor, your app)  │                 │  APIs, files, etc.)  │
└─────────────────────┘                 └──────────────────────┘

An MCP Server exposes three primitives:

├── Resources (read-only data)
│   └── files, DB query results, API responses
├── Tools (executable actions)
│   └── write file, call API, modify DB
└── Prompts (reusable prompt templates)
    └── structured templates like "review this code"

Components:

  • MCP Client: Claude Desktop, Cursor, any AI app you build
  • MCP Server: Your tool server (database, API, filesystem, etc.)
  • Transport: stdio for local processes, HTTP/SSE for remote
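To make the stdio transport concrete, here is a toy version that uses no MCP SDK at all: the "client" launches a "server" as a child process and exchanges newline-delimited JSON-RPC frames over its stdin/stdout. The echo server below is illustrative only; a real MCP server implements the full protocol (initialization, capabilities, and so on).

```python
import json
import subprocess
import sys

# Toy "server": reads one JSON-RPC request per line from stdin and writes a
# canned response to stdout. Illustrative only -- not a real MCP server.
child_code = r"""
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    resp = {"jsonrpc": "2.0", "id": req["id"], "result": {"tools": []}}
    print(json.dumps(resp), flush=True)
"""

proc = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

# "Client" side: write a request line, read the matching response line.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
response = json.loads(proc.stdout.readline())
proc.terminate()

print(response)  # {'jsonrpc': '2.0', 'id': 1, 'result': {'tools': []}}
```

This is exactly the shape of the stdio transport: the client owns the server's lifecycle and no network port is involved, which is why it's the default for local tools.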

Building an MCP Server in Python

Let's build a working server that exposes a customer database.

from mcp.server import Server
from mcp.server.models import InitializationOptions
import mcp.types as types
import json
import asyncio

app = Server("my-database-server")

# NOTE: `db` below is assumed to be an async database client (for example, a
# `databases.Database` instance) created and connected elsewhere in the app.

# Register Resources: expose read-only data
@app.list_resources()
async def list_resources() -> list[types.Resource]:
    return [
        types.Resource(
            uri="db://customers",
            name="Customer Database",
            description="Access customer records. Returns up to 100 records.",
            mimeType="application/json"
        ),
        types.Resource(
            uri="db://orders",
            name="Order History",
            description="Access order history for all customers",
            mimeType="application/json"
        )
    ]

@app.read_resource()
async def read_resource(uri: str) -> str:
    if uri == "db://customers":
        customers = await db.fetch_all(
            "SELECT id, name, email, created_at FROM customers LIMIT 100"
        )
        return json.dumps([dict(c) for c in customers])

    elif uri == "db://orders":
        orders = await db.fetch_all(
            "SELECT id, customer_id, total, status, created_at FROM orders LIMIT 100"
        )
        return json.dumps([dict(o) for o in orders])

    raise ValueError(f"Unknown resource: {uri}")

# Register Tools: expose executable actions
@app.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="create_customer",
            description=(
                "Create a new customer record in the database. "
                "Use this when a user wants to add a new customer. "
                "Returns the new customer's ID on success."
            ),
            inputSchema={
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string",
                        "description": "Customer's full name"
                    },
                    "email": {
                        "type": "string",
                        "description": "Customer's email address"
                    },
                    "phone": {
                        "type": "string",
                        "description": "Customer's phone number (optional)"
                    }
                },
                "required": ["name", "email"]
            }
        ),
        types.Tool(
            name="search_customers",
            description=(
                "Search customers by name or email. "
                "Use this to look up existing customers before creating new ones."
            ),
            inputSchema={
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Search query — matches against name and email fields"
                    }
                },
                "required": ["query"]
            }
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "create_customer":
        try:
            result = await db.execute(
                "INSERT INTO customers (name, email, phone) VALUES (:name, :email, :phone)",
                {
                    "name": arguments["name"],
                    "email": arguments["email"],
                    "phone": arguments.get("phone", None)
                }
            )
            return [types.TextContent(
                type="text",
                text=f"Customer created successfully. ID: {result.lastrowid}"
            )]
        except Exception as e:
            return [types.TextContent(
                type="text",
                text=f"Error creating customer: {str(e)}"
            )]

    elif name == "search_customers":
        query = f"%{arguments['query']}%"
        customers = await db.fetch_all(
            "SELECT id, name, email FROM customers WHERE name LIKE :q OR email LIKE :q",
            {"q": query}
        )
        if not customers:
            return [types.TextContent(type="text", text="No customers found matching that query")]

        result = "\n".join([
            f"ID: {c.id}, Name: {c.name}, Email: {c.email}"
            for c in customers
        ])
        return [types.TextContent(type="text", text=result)]

    raise ValueError(f"Unknown tool: {name}")

# Run the server
async def main():
    from mcp.server import NotificationOptions
    from mcp.server.stdio import stdio_server

    async with stdio_server() as (read_stream, write_stream):
        await app.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="my-database-server",
                server_version="0.1.0",
                # The SDK requires declaring the server's capabilities here
                capabilities=app.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={}
                )
            )
        )

if __name__ == "__main__":
    asyncio.run(main())

Wiring It Up in Claude Desktop

Register your server in the Claude Desktop config file:

// macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
// Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "my-database": {
      "command": "python",
      "args": ["/path/to/your/mcp_server.py"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@localhost/mydb"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/username/projects"]
    }
  }
}

Restart Claude Desktop and it can now talk to your database directly. "Show me all customers", "Add a new customer named John Smith" — natural language works.

Comparison: MCP vs Existing Approaches

OpenAI Function Calling

# Tied to OpenAI's API format
tools = [{"type": "function", "function": {"name": "search", ...}}]
response = openai.chat.completions.create(tools=tools, ...)

Works great inside the OpenAI ecosystem. Using Claude? Re-implement from scratch.

LangChain Tools

# Tied to the LangChain framework
from langchain.tools import tool

@tool
def search(query: str) -> str:
    """Search the web"""
    return web_search(query)

Great if you're all-in on LangChain. Not portable to environments that don't use it.

MCP

# Works with any MCP-compatible client
@app.list_tools()
async def list_tools():
    return [types.Tool(name="search", ...)]

Build it once. Claude Desktop, Cursor, your own app — any client that speaks MCP works.

Property            | OpenAI Function Calling | LangChain Tools     | MCP
--------------------|-------------------------|---------------------|--------------
Model independence  | No (OpenAI only)        | Partial             | Yes
Open standard       | No                      | No                  | Yes
Reusability         | Low                     | Medium              | High
Setup complexity    | Low                     | Medium              | Medium
Ecosystem           | OpenAI ecosystem        | LangChain ecosystem | Growing fast

The MCP Ecosystem in 2025-2026

Official servers from Anthropic:

  • @modelcontextprotocol/server-filesystem — local file access
  • @modelcontextprotocol/server-github — GitHub repositories
  • @modelcontextprotocol/server-postgres — PostgreSQL databases
  • @modelcontextprotocol/server-brave-search — Brave Search API
  • @modelcontextprotocol/server-slack — Slack messaging

Community servers cover Notion, Jira, Linear, Docker, Kubernetes, AWS, GCP, and more. The ecosystem is growing fast.

Installation:

# Node.js servers
npx -y @modelcontextprotocol/server-github

# Python servers
pip install mcp

Production Tips for Building Good MCP Servers

1. Write specific tool descriptions

The AI uses the description to decide when and how to call your tool. Vague descriptions lead to wrong calls.

# Bad
types.Tool(name="query_db", description="Query database")

# Good
types.Tool(
    name="search_customers",
    description=(
        "Search customer records by name or email. "
        "Use this when you need to find an existing customer. "
        "Returns customer ID, name, email, and account creation date. "
        "Use get_orders for order-related queries."
    )
)

2. Return errors as tool results, not exceptions

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    try:
        result = await execute_operation(arguments)
        return [types.TextContent(type="text", text=json.dumps(result))]
    except Exception as e:
        # Return error as a normal response — the AI can adapt
        return [types.TextContent(
            type="text",
            text=f"Error: {str(e)}. Please check the input and try again."
        )]

3. Separate reads from writes at the architecture level

Resources = read-only (SELECT, file reads, GET requests)
Tools = write operations (INSERT/UPDATE/DELETE, file writes, POST requests)

This architectural boundary helps the AI reason about safety. "Will this operation modify data?" becomes clear from the structure, not just the description.
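A hypothetical sketch of why the split helps: once reads live in Resources and writes live in Tools, "can this mutate data?" becomes a structural check rather than something inferred from descriptions or SQL. All names below are made up for illustration.

```python
# Registered read-only Resources vs. mutating Tools (hypothetical names).
READ_ONLY_RESOURCES = {"db://customers", "db://orders"}
MUTATING_TOOLS = {"create_customer", "delete_customer"}

def is_mutating(name: str) -> bool:
    # A client could, for example, require user confirmation for anything
    # in the mutating set and auto-approve everything else.
    return name in MUTATING_TOOLS

print(is_mutating("db://customers"))   # False
print(is_mutating("create_customer"))  # True
```

A client-side policy like this only works if the server respects the boundary, which is why keeping SELECTs out of Tools is worth the discipline.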

Wrapping Up

MCP is still maturing, but the direction is clear. Cursor, Zed, and VS Code are adding MCP support, the third-party server ecosystem is growing fast, and the protocol is not tied to any single model vendor. MCP has a good chance of becoming the common language of AI tool integration.

If you're building AI apps today, implement your tool integrations in MCP format. You'll thank yourself later when you want to support a second model.

Next: Multi-Agent systems — comparing AutoGen, CrewAI, and LangGraph so you can pick the right framework for your use case.