- 1. What is LangGraph
- 2. Core Concepts
- 3. Advanced Patterns
- 4. Streaming
- 5. Memory and Checkpointing
- 6. LangGraph Platform Deployment
- 7. Quiz
1. What is LangGraph
LangGraph is a stateful agent orchestration framework developed by the LangChain team. Whereas classic LangChain chains and agents run as an essentially linear pipeline, LangGraph expresses complex workflows as an explicit graph of nodes and edges.
1.1 Why LangGraph?
| Feature | LangChain Agent | LangGraph |
|---|---|---|
| Flow Control | Simple loop | DAG + conditional branching |
| State Mgmt | Limited | Explicit state via TypedDict |
| Multi-Agent | Difficult | Native support |
| Human Intervention | Custom impl | Built-in interrupt() |
| Streaming | Basic | Token/event/state streaming |
| Debugging | Difficult | LangGraph Studio |
2. Core Concepts
2.1 StateGraph
```python
from typing import TypedDict, Annotated

from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

# 1) Define state
class AgentState(TypedDict):
    messages: Annotated[list, add_messages]  # reducer: accumulate messages
    next_action: str
    result: str

# 2) Define node functions
def classify_intent(state: AgentState) -> dict:
    """Classify user intent."""
    last_msg = state["messages"][-1].content
    # Classify intent with the LLM
    intent = llm.invoke(f"Classify intent: {last_msg}")
    return {"next_action": intent.content}

def handle_question(state: AgentState) -> dict:
    """Handle questions."""
    answer = llm.invoke(state["messages"])
    return {"result": answer.content}

def handle_task(state: AgentState) -> dict:
    """Execute tasks."""
    result = agent_executor.invoke(state["messages"])
    return {"result": result}

# 3) Build graph
graph = StateGraph(AgentState)
graph.add_node("classify", classify_intent)
graph.add_node("question", handle_question)
graph.add_node("task", handle_task)

# 4) Connect edges
graph.add_edge(START, "classify")
graph.add_conditional_edges(
    "classify",
    lambda state: state["next_action"],
    {
        "question": "question",
        "task": "task",
    },
)
graph.add_edge("question", END)
graph.add_edge("task", END)

# 5) Compile & run
app = graph.compile()
result = app.invoke({"messages": [HumanMessage("What is Kubernetes?")]})
```
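The reducer attached via `Annotated` above is just a function `(existing_value, new_value) -> merged_value` that LangGraph calls whenever a node returns a value for that key. A minimal pure-Python sketch with a hypothetical dedup reducer (runs without LangGraph installed):

```python
from typing import TypedDict, Annotated

def append_unique(existing: list, new: list) -> list:
    """Hypothetical reducer: accumulate items, skipping duplicates."""
    return existing + [item for item in new if item not in existing]

class DedupState(TypedDict):
    # LangGraph would apply append_unique to every update of "seen"
    seen: Annotated[list, append_unique]

# Simulate what the graph does internally when a node returns {"seen": [...]}:
state = {"seen": ["a", "b"]}
update = {"seen": ["b", "c"]}
state["seen"] = append_unique(state["seen"], update["seen"])
print(state["seen"])  # ['a', 'b', 'c']
```

Without a reducer, a node's return value simply overwrites the key; with one, updates are merged, which is why `add_messages` makes `messages` grow across turns.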
2.2 Tool-Using Agent
```python
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def search_docs(query: str) -> str:
    """Search internal documentation."""
    # Vector DB search
    results = vectorstore.similarity_search(query, k=3)
    return "\n".join(r.page_content for r in results)

@tool
def run_query(sql: str) -> str:
    """Execute a read-only SQL query."""
    return str(db.execute(sql).fetchall())

@tool
def create_ticket(title: str, description: str) -> str:
    """Create a Jira ticket."""
    return jira.create_issue(title=title, description=description)

llm = ChatOpenAI(model="gpt-4o")

# Automatically create a ReAct agent
agent = create_react_agent(
    llm,
    tools=[search_docs, run_query, create_ticket],
    state_modifier="You are a helpful DevOps assistant.",
)

# Execute
result = agent.invoke({
    "messages": [HumanMessage("Check if order-service has errors and create a ticket")]
})
```
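Under the hood, `create_react_agent` wires up a reason → act → observe loop: the model either emits a tool call (which the graph executes and feeds back) or a final answer. A toy sketch of that loop with a scripted stand-in for the LLM, so it runs with no API key (all names here are hypothetical):

```python
def fake_llm(messages: list[str]) -> dict:
    """Scripted stand-in for the model: call a tool first, then answer."""
    if not any(m.startswith("observation:") for m in messages):
        return {"tool": "search_docs", "input": "order-service errors"}
    return {"answer": "order-service logged 3 errors; ticket created."}

def search_docs(query: str) -> str:
    return f"3 error entries found for '{query}'"

TOOLS = {"search_docs": search_docs}

def react_loop(question: str, max_steps: int = 5) -> str:
    messages = [f"user: {question}"]
    for _ in range(max_steps):
        decision = fake_llm(messages)
        if "answer" in decision:            # model chose to finish
            return decision["answer"]
        tool = TOOLS[decision["tool"]]      # model chose a tool call
        observation = tool(decision["input"])
        messages.append(f"observation: {observation}")
    return "step limit reached"

print(react_loop("Does order-service have errors?"))
```

The real prebuilt agent does the same dance with actual LLM tool-calling and a message-list state, looping until the model stops requesting tools.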
3. Advanced Patterns
3.1 Multi-Agent Orchestration
```python
from typing import TypedDict, Annotated

from langchain_core.messages import SystemMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import create_react_agent

class SupervisorState(TypedDict):
    messages: Annotated[list, add_messages]
    next_agent: str
    final_answer: str

# Specialist agents
researcher = create_react_agent(llm, tools=[search_web, search_docs])
coder = create_react_agent(llm, tools=[run_code, read_file])
reviewer = create_react_agent(llm, tools=[analyze_code, lint_code])

def supervisor(state: SupervisorState) -> dict:
    """Decide which agent to delegate to."""
    response = llm.invoke([
        SystemMessage("You are a supervisor. Route to: researcher, coder, reviewer, or FINISH"),
        *state["messages"],
    ])
    return {"next_agent": response.content.strip()}

def run_researcher(state):
    result = researcher.invoke({"messages": state["messages"]})
    return {"messages": result["messages"]}

def run_coder(state):
    result = coder.invoke({"messages": state["messages"]})
    return {"messages": result["messages"]}

def run_reviewer(state):
    result = reviewer.invoke({"messages": state["messages"]})
    return {"messages": result["messages"]}

# Build graph
workflow = StateGraph(SupervisorState)
workflow.add_node("supervisor", supervisor)
workflow.add_node("researcher", run_researcher)
workflow.add_node("coder", run_coder)
workflow.add_node("reviewer", run_reviewer)
workflow.add_edge(START, "supervisor")
workflow.add_conditional_edges(
    "supervisor",
    lambda s: s["next_agent"],
    {
        "researcher": "researcher",
        "coder": "coder",
        "reviewer": "reviewer",
        "FINISH": END,
    },
)
# Each agent returns to the supervisor after execution
for agent_name in ["researcher", "coder", "reviewer"]:
    workflow.add_edge(agent_name, "supervisor")

app = workflow.compile()
```
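The control flow above, where the supervisor routes, an agent runs, and control returns to the supervisor until it says FINISH, can be simulated in plain Python with a scripted router (the router and agents below are hypothetical stand-ins for the LLM calls):

```python
def scripted_supervisor(history: list) -> str:
    """Stand-in router: delegate in a fixed order, then finish."""
    order = ["researcher", "coder", "reviewer", "FINISH"]
    return order[len(history)]

def run(agents: dict) -> list:
    history = []
    while True:
        nxt = scripted_supervisor(history)   # supervisor decides
        if nxt == "FINISH":                  # maps to the END edge
            return history
        history.append(agents[nxt]())        # agent runs, control returns

agents = {
    "researcher": lambda: "found API docs",
    "coder": lambda: "wrote patch",
    "reviewer": lambda: "LGTM",
}
print(run(agents))  # ['found API docs', 'wrote patch', 'LGTM']
```

In the real graph the "while" loop is implicit in the agent → supervisor edges, and a badly behaved router can loop forever, which is why LangGraph enforces a configurable `recursion_limit` per invocation.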
3.2 Human-in-the-Loop
```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt, Command

def sensitive_action(state):
    """Request human approval before sensitive operations."""
    action = state["pending_action"]
    # Pause execution and wait for human approval
    approval = interrupt({
        "question": f"Approve this action?\n{action}",
        "options": ["approve", "reject", "modify"],
    })
    if approval == "approve":
        return execute_action(action)
    elif approval == "reject":
        return {"result": "Action cancelled by user"}
    else:
        return {"result": f"Action modified: {approval}"}

# interrupt() requires a checkpointer so the paused state can be persisted
checkpointer = MemorySaver()
app = workflow.compile(checkpointer=checkpointer)

# Execute (pauses at interrupt)
config = {"configurable": {"thread_id": "user-123"}}
result = app.invoke({"messages": [...]}, config)
# result["__interrupt__"] carries the payload passed to interrupt()

# Resume after human approval
app.invoke(Command(resume="approve"), config)
```
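Conceptually, `interrupt()` behaves like a paused coroutine: execution stops at the pause point, the payload goes to a human, and `Command(resume=...)` feeds the answer back in. A generator-based analogy (this is an illustration, not the real mechanism, which persists a checkpoint and replays the node on resume):

```python
def sensitive_node():
    """Generator analogy for a node containing interrupt()."""
    # yield = interrupt(): hand the payload out and pause here
    approval = yield {"question": "Approve delete of prod-db?"}
    # resumed value takes the place of interrupt()'s return value
    yield "executed" if approval == "approve" else "cancelled"

node = sensitive_node()
payload = next(node)            # graph pauses; payload goes to the human
assert "Approve" in payload["question"]
result = node.send("approve")   # Command(resume="approve") equivalent
print(result)  # executed
```

One practical consequence of the replay-based implementation: code in the node *before* the `interrupt()` call runs again on resume, so side effects should sit after the approval, not before it.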
3.3 Parallel Execution
```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class ParallelState(TypedDict):
    query: str
    web_results: str
    db_results: str
    combined: str

def search_web_node(state):
    return {"web_results": search_web(state["query"])}

def search_db_node(state):
    return {"db_results": search_db(state["query"])}

def combine_results(state):
    combined = f"Web: {state['web_results']}\nDB: {state['db_results']}"
    return {"combined": combined}

graph = StateGraph(ParallelState)
graph.add_node("web", search_web_node)
graph.add_node("db", search_db_node)
graph.add_node("combine", combine_results)

# Parallel execution: START -> [web, db] -> combine
graph.add_edge(START, "web")
graph.add_edge(START, "db")
graph.add_edge("web", "combine")
graph.add_edge("db", "combine")
graph.add_edge("combine", END)

app = graph.compile()
```
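This works cleanly because the two branches write to *different* keys. If parallel branches wrote the same key, that key would need a reducer to merge the concurrent updates; without one, LangGraph raises an error on conflicting writes. A sketch of the standard idiom using `operator.add` as the reducer:

```python
import operator
from typing import TypedDict, Annotated

class FanInState(TypedDict):
    # Both parallel branches can safely return {"results": [...]};
    # LangGraph merges the lists via operator.add instead of overwriting.
    results: Annotated[list, operator.add]

# What the graph effectively does when "web" and "db" both write "results":
merged = operator.add(["web hit"], ["db row"])
print(merged)  # ['web hit', 'db row']
```

The merging node (`combine` above) only runs once all of its predecessor branches have completed, so it always sees the fully merged state.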
4. Streaming
```python
from langchain_core.messages import HumanMessage

# Token-level streaming
async for event in app.astream_events(
    {"messages": [HumanMessage("Explain CQRS")]},
    version="v2",
):
    if event["event"] == "on_chat_model_stream":
        print(event["data"]["chunk"].content, end="", flush=True)

# Node-level streaming
for chunk in app.stream({"messages": [HumanMessage("...")]}):
    for node_name, output in chunk.items():
        print(f"[{node_name}]: {output}")
```
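By default `stream()` yields per-node *updates* (each chunk maps a node name to the delta it returned); passing `stream_mode="values"` instead yields the full merged state after every step. A pure-Python simulation of the relationship between the two shapes (the deltas below are made up for illustration):

```python
# Two "updates"-mode chunks, as node_name -> delta:
deltas = [
    {"classify": {"next_action": "question"}},
    {"question": {"result": "K8s is a container orchestrator."}},
]

def as_values(deltas, initial):
    """Fold updates into the full-state snapshots that 'values' mode yields."""
    state = dict(initial)
    for delta in deltas:
        for node_update in delta.values():
            state.update(node_update)     # simplistic merge (no reducers)
        yield dict(state)

snapshots = list(as_values(deltas, {"messages": []}))
print(snapshots[-1]["result"])
```

Updates mode is the compact choice for progress display; values mode is handy when the consumer wants a self-contained state at every step.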
5. Memory and Checkpointing
```python
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.postgres import PostgresSaver

# PostgreSQL checkpointer (production); from_conn_string is a context manager
with PostgresSaver.from_conn_string(
    "postgresql://user:pass@localhost/langgraph"
) as checkpointer:
    checkpointer.setup()  # create the checkpoint tables on first run
    app = workflow.compile(checkpointer=checkpointer)

    # Persistent conversations (identified by thread_id)
    config = {"configurable": {"thread_id": "user-456"}}

    # First message
    app.invoke({"messages": [HumanMessage("Hi")]}, config)

    # Second message (previous conversation context preserved)
    app.invoke({"messages": [HumanMessage("What did I just say?")]}, config)
```
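What the checkpointer buys you can be shown in miniature: state is keyed by `thread_id`, so separate threads never share history. A toy stand-in for the role `MemorySaver`/`PostgresSaver` play (hypothetical, ignoring real checkpoint versioning and serialization):

```python
class ToyCheckpointer:
    """In-memory state store keyed by thread_id."""
    def __init__(self):
        self._store = {}
    def load(self, thread_id: str) -> dict:
        return self._store.get(thread_id, {"messages": []})
    def save(self, thread_id: str, state: dict) -> None:
        self._store[thread_id] = state

def invoke(cp: ToyCheckpointer, thread_id: str, user_msg: str) -> dict:
    state = cp.load(thread_id)                        # restore prior context
    state["messages"] = state["messages"] + [user_msg]
    cp.save(thread_id, state)                         # persist after the step
    return state

cp = ToyCheckpointer()
invoke(cp, "user-456", "Hi")
state = invoke(cp, "user-456", "What did I just say?")
print(len(state["messages"]))  # 2 -- context from the first call survived
```

The real checkpointers additionally version every step, which is what enables failure recovery and resuming after `interrupt()`.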
6. LangGraph Platform Deployment
6.1 langgraph.json
```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./agent.py:app"
  },
  "env": ".env"
}
```
6.2 Deployment
```bash
# Install LangGraph CLI
pip install langgraph-cli

# Local testing
langgraph dev

# Docker build
langgraph build -t my-agent:latest

# LangGraph Cloud deployment
langgraph deploy --app my-agent
```
7. Quiz
Q1. What is the role of State in LangGraph's StateGraph?
A data container shared between nodes. The state is defined as a TypedDict; each node reads it and returns a partial update. Reducers attached via Annotated define merge strategies for those updates, such as message-list accumulation.
Q2. What is the purpose of add_conditional_edges?
Conditional branching. It dynamically determines the next node based on the current state. A routing function receives the state and returns the name of the next node.
Q3. What does interrupt() do?
Implements Human-in-the-Loop. It pauses workflow execution, waits for human input (approve/reject/modify), and then resumes with Command(resume=...).
Q4. What are the advantages of the Supervisor pattern in multi-agent systems?
A central Supervisor controls the overall flow, enabling (1) leveraging each agent's expertise, (2) dynamic task ordering, and (3) determining next steps after execution.
Q5. Why use a Checkpointer?
(1) Conversation persistence — saves state per thread_id. (2) Failure recovery — resumes from checkpoints. (3) Human-in-the-Loop — preserves state after interrupt.
Q6. How is parallel execution implemented in LangGraph?
Connect edges from the same source node (e.g., START) to multiple nodes for automatic parallel execution. The merging node waits until all predecessor nodes complete.
Q7. What is the difference between create_react_agent and building a StateGraph directly?
create_react_agent is a prebuilt ReAct (Reasoning + Acting) pattern. It is suitable for simple tool-using agents. For complex branching, multi-agent setups, or custom state management, build the StateGraph directly.