
LangGraph Agent Workflow Practical Guide: From Multi-Agent Orchestration to Production Deployment


1. What is LangGraph

LangGraph is a stateful agent orchestration framework developed by the LangChain team. Where LangChain's classic Chain and Agent abstractions run linearly, LangGraph expresses complex workflows as graph structures.

1.1 Why LangGraph?

| Feature | LangChain Agent | LangGraph |
| --- | --- | --- |
| Flow control | Simple loop | DAG + conditional branching |
| State management | Limited | Explicit state via TypedDict |
| Multi-agent | Difficult | Native support |
| Human intervention | Custom implementation | Built-in interrupt() |
| Streaming | Basic | Token/event/state streaming |
| Debugging | Difficult | LangGraph Studio |

2. Core Concepts

2.1 StateGraph

from typing import TypedDict, Annotated

from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

# 1) Define state
class AgentState(TypedDict):
    messages: Annotated[list, add_messages]  # the add_messages reducer accumulates messages
    next_action: str
    result: str

# 2) Define node functions
def classify_intent(state: AgentState) -> dict:
    """Classify user intent"""
    last_msg = state["messages"][-1].content
    # Classify intent with the LLM (llm is assumed to be initialized elsewhere);
    # constrain the output so it matches the routing keys below
    intent = llm.invoke(f"Classify intent as 'question' or 'task': {last_msg}")
    return {"next_action": intent.content.strip()}

def handle_question(state: AgentState) -> dict:
    """Handle questions"""
    answer = llm.invoke(state["messages"])
    return {"result": answer.content}

def handle_task(state: AgentState) -> dict:
    """Execute tasks"""
    # agent_executor is assumed to be defined elsewhere (e.g. a prebuilt agent)
    result = agent_executor.invoke(state["messages"])
    return {"result": result}

# 3) Build graph
graph = StateGraph(AgentState)
graph.add_node("classify", classify_intent)
graph.add_node("question", handle_question)
graph.add_node("task", handle_task)

# 4) Connect edges
graph.add_edge(START, "classify")
graph.add_conditional_edges(
    "classify",
    lambda state: state["next_action"],
    {
        "question": "question",
        "task": "task",
    }
)
graph.add_edge("question", END)
graph.add_edge("task", END)

# 5) Compile & run
app = graph.compile()
result = app.invoke({"messages": [HumanMessage("What is Kubernetes?")]})

2.2 Tool-Using Agent

from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def search_docs(query: str) -> str:
    """Search internal documentation."""
    # Vector DB search (vectorstore is assumed to be initialized elsewhere)
    results = vectorstore.similarity_search(query, k=3)
    return "\n".join([r.page_content for r in results])

@tool
def run_query(sql: str) -> str:
    """Execute a read-only SQL query."""
    rows = db.execute(sql).fetchall()  # db connection is assumed to exist
    return str(rows)  # tools should return strings

@tool
def create_ticket(title: str, description: str) -> str:
    """Create a Jira ticket."""
    # jira client is assumed to be configured elsewhere
    return jira.create_issue(title=title, description=description)

llm = ChatOpenAI(model="gpt-4o")

# Create a ReAct agent from the prebuilt helper
agent = create_react_agent(
    llm,
    tools=[search_docs, run_query, create_ticket],
    state_modifier="You are a helpful DevOps assistant."  # newer langgraph versions use prompt=
)

# Execute
result = agent.invoke({
    "messages": [HumanMessage("Check if order-service has errors and create a ticket")]
})

3. Advanced Patterns

3.1 Multi-Agent Orchestration

from typing import TypedDict, Annotated

from langchain_core.messages import SystemMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import create_react_agent

class SupervisorState(TypedDict):
    messages: Annotated[list, add_messages]
    next_agent: str
    final_answer: str

# Specialist agents (llm and the tool functions are assumed to be defined elsewhere)
researcher = create_react_agent(llm, tools=[search_web, search_docs])
coder = create_react_agent(llm, tools=[run_code, read_file])
reviewer = create_react_agent(llm, tools=[analyze_code, lint_code])

def supervisor(state: SupervisorState) -> dict:
    """Decide which agent to delegate to"""
    response = llm.invoke([
        SystemMessage("You are a supervisor. Route to: researcher, coder, reviewer, or FINISH"),
        *state["messages"]
    ])
    return {"next_agent": response.content.strip()}

def run_researcher(state):
    result = researcher.invoke({"messages": state["messages"]})
    return {"messages": result["messages"]}

def run_coder(state):
    result = coder.invoke({"messages": state["messages"]})
    return {"messages": result["messages"]}

def run_reviewer(state):
    result = reviewer.invoke({"messages": state["messages"]})
    return {"messages": result["messages"]}

# Build graph
workflow = StateGraph(SupervisorState)
workflow.add_node("supervisor", supervisor)
workflow.add_node("researcher", run_researcher)
workflow.add_node("coder", run_coder)
workflow.add_node("reviewer", run_reviewer)

workflow.add_edge(START, "supervisor")
workflow.add_conditional_edges(
    "supervisor",
    lambda s: s["next_agent"],
    {
        "researcher": "researcher",
        "coder": "coder",
        "reviewer": "reviewer",
        "FINISH": END,
    }
)
# Each agent returns to supervisor after execution
for agent_name in ["researcher", "coder", "reviewer"]:
    workflow.add_edge(agent_name, "supervisor")

app = workflow.compile()
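One practical hazard with this loop is that the supervisor can bounce work between agents indefinitely. A common guard is to cap the number of routing turns inside the routing function itself. A minimal sketch in plain Python (the `turns` field, `MAX_TURNS`, and the helper name are illustrative additions, not part of the graph above):

```python
# Sketch of a loop guard for supervisor routing. Assumes the state carries a
# "turns" counter that some node increments on every supervisor pass.
MAX_TURNS = 6

def route_with_limit(state: dict) -> str:
    """Return the next node name, forcing FINISH once the turn budget is spent."""
    if state.get("turns", 0) >= MAX_TURNS:
        return "FINISH"
    next_agent = state.get("next_agent", "FINISH")
    # Only route to known agents; any malformed LLM output ends the run safely
    if next_agent not in {"researcher", "coder", "reviewer"}:
        return "FINISH"
    return next_agent
```

Passing a function like this to add_conditional_edges in place of the bare lambda also absorbs malformed supervisor replies instead of raising on an unknown route.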

3.2 Human-in-the-Loop

from langgraph.types import interrupt, Command

def sensitive_action(state):
    """Request human approval before sensitive operations"""
    # pending_action and execute_action are assumed to be defined by the surrounding graph
    action = state["pending_action"]

    # Pause execution and wait for human approval
    approval = interrupt({
        "question": f"Approve this action?\n{action}",
        "options": ["approve", "reject", "modify"]
    })

    if approval == "approve":
        return execute_action(action)
    elif approval == "reject":
        return {"result": "Action cancelled by user"}
    else:
        return {"result": f"Action modified: {approval}"}

# Compile with checkpointer
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
app = workflow.compile(checkpointer=checkpointer)

# Execute (pauses at the interrupt() call)
config = {"configurable": {"thread_id": "user-123"}}
result = app.invoke({"messages": [...]}, config)
# result contains an "__interrupt__" key describing the pending interrupt

# Resume after human approval
app.invoke(Command(resume="approve"), config)

3.3 Parallel Execution

from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class ParallelState(TypedDict):
    query: str
    web_results: str
    db_results: str
    combined: str

def search_web_node(state):
    return {"web_results": search_web(state["query"])}

def search_db_node(state):
    return {"db_results": search_db(state["query"])}

def combine_results(state):
    combined = f"Web: {state['web_results']}\nDB: {state['db_results']}"
    return {"combined": combined}

graph = StateGraph(ParallelState)
graph.add_node("web", search_web_node)
graph.add_node("db", search_db_node)
graph.add_node("combine", combine_results)

# Parallel execution: START -> [web, db] -> combine
graph.add_edge(START, "web")
graph.add_edge(START, "db")
graph.add_edge("web", "combine")
graph.add_edge("db", "combine")
graph.add_edge("combine", END)

app = graph.compile()
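This fan-in works cleanly because the web and db branches write distinct keys. If parallel branches must write the *same* key, that key needs a reducer annotation (e.g. operator.add for lists), or LangGraph raises a concurrent-update error. A conceptual sketch of how reducer-based merging behaves, in plain Python rather than LangGraph's actual engine (the helper and variable names are mine):

```python
import operator

# Conceptual model: each node returns a partial update, and keys that have a
# reducer are combined with it instead of being overwritten.
reducers = {"results": operator.add}  # list concatenation for the shared key

def merge_update(state: dict, update: dict) -> dict:
    merged = dict(state)
    for key, value in update.items():
        if key in reducers and key in merged:
            merged[key] = reducers[key](merged[key], value)
        else:
            merged[key] = value
    return merged

state = {"query": "CQRS", "results": []}
state = merge_update(state, {"results": ["web hit"]})  # parallel branch 1
state = merge_update(state, {"results": ["db hit"]})   # parallel branch 2
# state["results"] now holds both branches' contributions
```

The same idea is what Annotated[list, add_messages] expresses declaratively in the state classes above.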

4. Streaming

# Token-level streaming
async for event in app.astream_events(
    {"messages": [HumanMessage("Explain CQRS")]},
    version="v2"
):
    if event["event"] == "on_chat_model_stream":
        print(event["data"]["chunk"].content, end="", flush=True)

# Node-level streaming
for chunk in app.stream({"messages": [HumanMessage("...")]}):
    for node_name, output in chunk.items():
        print(f"[{node_name}]: {output}")

5. Memory and Checkpointing

from langgraph.checkpoint.postgres import PostgresSaver

# PostgreSQL checkpointer (production).
# from_conn_string returns a context manager, so keep usage inside the with block.
with PostgresSaver.from_conn_string(
    "postgresql://user:pass@localhost/langgraph"
) as checkpointer:
    checkpointer.setup()  # create the checkpoint tables on first use
    app = workflow.compile(checkpointer=checkpointer)

    # Persistent conversations (identified by thread_id)
    config = {"configurable": {"thread_id": "user-456"}}

    # First message
    app.invoke({"messages": [HumanMessage("Hi")]}, config)

    # Second message (previous conversation context preserved)
    app.invoke({"messages": [HumanMessage("What did I just say?")]}, config)
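Conceptually, a checkpointer is a map from thread_id to a history of state snapshots: invoking with the same thread_id loads the latest snapshot before running the graph. A toy in-memory model of that contract (class and method names are mine, not the langgraph API):

```python
class ToyCheckpointer:
    """Toy model of thread-scoped persistence, not the real langgraph interface."""

    def __init__(self):
        self._store = {}  # thread_id -> list of state snapshots

    def save(self, thread_id: str, state: dict) -> None:
        """Append a snapshot to the thread's history."""
        self._store.setdefault(thread_id, []).append(dict(state))

    def latest(self, thread_id: str):
        """Return the most recent snapshot for the thread, or None."""
        snapshots = self._store.get(thread_id)
        return snapshots[-1] if snapshots else None

cp = ToyCheckpointer()
cp.save("user-456", {"messages": ["Hi"]})
cp.save("user-456", {"messages": ["Hi", "What did I just say?"]})
```

Keeping a history rather than only the latest snapshot is what enables LangGraph's time-travel style debugging as well as simple resumption.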

6. LangGraph Platform Deployment

6.1 langgraph.json

{
  "dependencies": ["."],
  "graphs": {
    "agent": "./agent.py:app"
  },
  "env": ".env"
}

6.2 Deployment

# Install LangGraph CLI
pip install langgraph-cli

# Local testing
langgraph dev

# Docker build
langgraph build -t my-agent:latest

# LangGraph Cloud deployment (cloud deployments are usually configured through
# LangSmith; verify the exact command against the current CLI docs)
langgraph deploy --app my-agent

7. Quiz

Q1. What is the role of State in LangGraph's StateGraph?

A data container shared between nodes. The state is defined as a TypedDict; each node reads it and returns a partial update. Annotated reducers (e.g. add_messages) define merge strategies such as list accumulation.
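The reducer lives as plain metadata on the Annotated type, which a framework can discover at runtime. A minimal sketch of that discovery step using only the standard library (the State fields here are illustrative):

```python
import operator
from typing import Annotated, TypedDict, get_type_hints

class State(TypedDict):
    messages: Annotated[list, operator.add]  # reducer stored as annotation metadata
    result: str  # no reducer: plain last-write-wins key

# How a framework can discover per-key reducers from the annotations:
hints = get_type_hints(State, include_extras=True)
reducers = {
    key: hint.__metadata__[0]
    for key, hint in hints.items()
    if hasattr(hint, "__metadata__")
}
```

Keys without metadata simply fall back to overwrite semantics, which is why only accumulating fields need the Annotated wrapper.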

Q2. What is the purpose of add_conditional_edges?

Conditional branching. It dynamically determines the next node based on the current state. A routing function receives the state and returns the name of the next node.

Q3. What does interrupt() do?

Implements Human-in-the-Loop. It pauses workflow execution, waits for human input (approve/reject/modify), and then resumes with Command(resume=...).

Q4. What are the advantages of the Supervisor pattern in multi-agent systems?

A central Supervisor controls the overall flow, enabling (1) leveraging each agent's expertise, (2) dynamic task ordering, and (3) determining next steps after execution.

Q5. Why use a Checkpointer?

(1) Conversation persistence — saves state per thread_id. (2) Failure recovery — resumes from checkpoints. (3) Human-in-the-Loop — preserves state after interrupt.

Q6. How is parallel execution implemented in LangGraph?

Connect edges from the same source node (e.g., START) to multiple nodes for automatic parallel execution. The merging node waits until all predecessor nodes complete.

Q7. What is the difference between create_react_agent and building a StateGraph directly?

create_react_agent is a prebuilt ReAct (Reasoning + Acting) pattern. It is suitable for simple tool-using agents. For complex branching, multi-agent setups, or custom state management, build the StateGraph directly.
