LangGraph gives you something most agent frameworks don’t: fine-grained control over every decision your agent makes. Instead of dumping a prompt and tools into a black box, you define a graph where each node is a step – call an LLM, run a tool, check a condition – and edges control the flow between them. State persists automatically across the entire execution.

As of February 2026, LangGraph is at version 1.0.8, runs on Python 3.10+, and ships with durable state persistence, human-in-the-loop patterns, and built-in checkpointing. Here’s how to actually use it.

Install LangGraph

pip install langgraph langchain-openai langchain-core langchain-community

You’ll also need an OpenAI API key (or swap in any LangChain-compatible model), plus a Tavily API key for the search tool used below. Export both before running anything:

export OPENAI_API_KEY="sk-your-key-here"
export TAVILY_API_KEY="tvly-your-key-here"

Define Your Agent’s State

Every LangGraph agent revolves around a state object. Think of it as the shared notebook your agent carries between steps. You define it as a TypedDict, and LangGraph handles passing it from node to node.

from typing import TypedDict, Annotated
from langgraph.graph.message import AnyMessage, add_messages

class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

The Annotated[list[AnyMessage], add_messages] bit is critical. It tells LangGraph to append new messages to the list rather than overwriting it. Without that reducer, you’ll hit an InvalidUpdateError the moment two nodes try to write to the same key.
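
To see what the reducer does, you can call add_messages directly outside any graph. A quick standalone illustration:

from langchain_core.messages import HumanMessage, AIMessage
from langgraph.graph.message import add_messages

# The reducer merges the existing list with a node's update instead of replacing it
merged = add_messages(
    [HumanMessage(content="hi")],    # current value in state
    [AIMessage(content="hello")],    # update returned by a node
)
print(len(merged))  # 2 -- both messages are kept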

Build a Tool-Calling Agent

Here’s a complete agent that can search the web and answer questions. It uses a StateGraph with three nodes: one for the LLM, one for tools, and a router that decides which to call next.

from typing import TypedDict, Annotated, Literal
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import AnyMessage, add_messages
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_community.tools.tavily_search import TavilySearchResults

# 1. Define state
class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

# 2. Set up tools and model
search_tool = TavilySearchResults(max_results=3)
tools = [search_tool]
model = ChatOpenAI(model="gpt-4o", temperature=0).bind_tools(tools)

# 3. Define nodes
def call_model(state: AgentState) -> dict:
    response = model.invoke(state["messages"])
    return {"messages": [response]}

tool_node = ToolNode(tools)

# 4. Define the router
def should_continue(state: AgentState) -> Literal["tools", "__end__"]:
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return "__end__"

# 5. Build the graph
workflow = StateGraph(AgentState)
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", should_continue)
workflow.add_edge("tools", "agent")

agent = workflow.compile()

# 6. Run it
result = agent.invoke({
    "messages": [HumanMessage(content="What is LangGraph 1.0?")]
})
print(result["messages"][-1].content)

The flow works like this: the user message enters at START, hits the agent node (LLM call), then the router checks if the LLM wants to call a tool. If yes, it routes to the tools node, which executes the tool and sends results back to agent. If no tool calls, it routes to __end__ and returns the response.

Stream Responses

For real-time output, swap invoke for stream:

for event in agent.stream(
    {"messages": [HumanMessage(content="Search for LangGraph best practices")]},
    stream_mode="values"
):
    if "messages" in event:
        event["messages"][-1].pretty_print()

With stream_mode="values", each event is a snapshot of the full state after every step, so you can watch each stage of the agent’s reasoning as it happens.
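
If you want true token-by-token output, stream_mode="messages" emits individual message chunks as the model generates them. A minimal sketch:

for token, metadata in agent.stream(
    {"messages": [HumanMessage(content="Summarize LangGraph in one sentence")]},
    stream_mode="messages",
):
    # Each token is a message chunk; tool-call chunks may have empty content
    if token.content:
        print(token.content, end="", flush=True)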

Add Memory with Checkpointing

LangGraph’s killer feature is durable state. Add a checkpointer and your agent remembers previous conversations:

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
agent = workflow.compile(checkpointer=memory)

# First turn
config = {"configurable": {"thread_id": "user-123"}}
agent.invoke(
    {"messages": [HumanMessage(content="My name is Alex")]},
    config=config
)

# Second turn -- agent remembers the name
result = agent.invoke(
    {"messages": [HumanMessage(content="What's my name?")]},
    config=config
)
print(result["messages"][-1].content)  # "Your name is Alex"

The thread_id acts as a session key. Same thread, same memory. Different thread, fresh start. For production, swap MemorySaver for a database-backed checkpointer like SqliteSaver or PostgresSaver.
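
As a sketch of that swap, assuming the langgraph-checkpoint-sqlite package is installed:

import sqlite3
from langgraph.checkpoint.sqlite import SqliteSaver

# File-backed checkpoints survive process restarts, unlike MemorySaver
conn = sqlite3.connect("checkpoints.db", check_same_thread=False)
agent = workflow.compile(checkpointer=SqliteSaver(conn))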

Common Errors and How to Fix Them

GraphRecursionError

This is the one you’ll hit most often. It means your agent looped too many times before reaching END.

langgraph.errors.GraphRecursionError: Recursion limit of 25 reached
without hitting a stop condition

The default recursion_limit is 25. If your agent genuinely needs more steps (multi-hop research, complex tool chains), bump it:

result = agent.invoke(
    {"messages": [HumanMessage(content="...")]},
    {"recursion_limit": 100}
)

But if you’re hitting this unexpectedly, your routing logic probably has a bug. A common culprit is a should_continue function that never returns "__end__". Double-check that your conditional edge handles the case where tool_calls is empty.
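
A slightly more defensive version of the router from earlier, as a sketch:

def should_continue(state: AgentState) -> Literal["tools", "__end__"]:
    last_message = state["messages"][-1]
    # getattr guards against message types that lack a tool_calls attribute
    if getattr(last_message, "tool_calls", None):
        return "tools"
    return "__end__"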

InvalidUpdateError

langgraph.errors.InvalidUpdateError: Concurrent nodes attempted to
update the same state key without a reducer

This happens when two nodes running in parallel both try to write to the same state key. Fix it by adding a reducer to that key in your state definition:

from operator import add

class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
    items: Annotated[list[str], add]  # reducer prevents collisions
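
To make the parallel case concrete, here is a minimal self-contained sketch of a fan-out where two nodes write to items in the same step (node names are made up for illustration); without the add reducer, this raises InvalidUpdateError:

from operator import add
from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END

class FanOutState(TypedDict):
    items: Annotated[list[str], add]

g = StateGraph(FanOutState)
g.add_node("fetch_a", lambda state: {"items": ["a"]})
g.add_node("fetch_b", lambda state: {"items": ["b"]})
g.add_edge(START, "fetch_a")  # both nodes fan out from START...
g.add_edge(START, "fetch_b")  # ...and run in parallel
g.add_edge("fetch_a", END)
g.add_edge("fetch_b", END)

print(g.compile().invoke({"items": []}))  # {'items': ['a', 'b']}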

INVALID_GRAPH_NODE_RETURN_VALUE

Your node function returned something LangGraph can’t use. Nodes must return a dictionary with keys that match your state schema:

from langchain_core.messages import AIMessage

# Wrong -- returns a string
def bad_node(state: AgentState) -> str:
    return "hello"

# Right -- returns a dict matching state keys
def good_node(state: AgentState) -> dict:
    return {"messages": [AIMessage(content="hello")]}

Missing API Key Errors

openai.AuthenticationError: Error code: 401

This one is self-explanatory but trips people up in containers and CI environments where environment variables aren’t forwarded. Make sure OPENAI_API_KEY (or whatever provider you use) is set in the process running your agent, not just in your shell profile.
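
A cheap safeguard is to fail fast at startup rather than deep inside the first LLM call:

import os
import sys

# Verify the key is visible to *this* process before building the graph
if not os.environ.get("OPENAI_API_KEY"):
    sys.exit("OPENAI_API_KEY is not set in this process")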

When to Use LangGraph vs. Simpler Approaches

LangGraph is the right tool when your agent needs:

  • Branching logic – different paths based on intermediate results
  • Persistent memory – conversations that survive restarts
  • Human-in-the-loop – pausing execution for approval before a tool runs (see the sketch after this list)
  • Parallel execution – running multiple tool calls simultaneously
  • Complex multi-step workflows – chains of 5+ steps with conditional routing
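
For the human-in-the-loop case, a minimal sketch using interrupt_before, reusing the workflow and memory objects from earlier:

agent = workflow.compile(checkpointer=memory, interrupt_before=["tools"])

config = {"configurable": {"thread_id": "review-1"}}
agent.invoke(
    {"messages": [HumanMessage(content="Search for LangGraph releases")]},
    config=config,
)

# The run is now paused before the "tools" node
print(agent.get_state(config).next)  # ('tools',)

# Resume by invoking with None once a human approves
agent.invoke(None, config=config)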

If you just need a single LLM call with one tool, ChatOpenAI.bind_tools() alone is enough. Don’t reach for a graph when a function will do.

Production Tips

Set explicit recursion limits. The default of 25 is fine for development but set it intentionally in production so you know exactly when and why an agent stops.
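
Compiled graphs are Runnables, so one way to bake the limit in is with_config:

# Every caller of this agent now gets the same explicit ceiling
agent = workflow.compile().with_config({"recursion_limit": 50})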

Use LangSmith for tracing. Debugging agent loops without traces is like debugging distributed systems without logs. LangSmith shows you every node transition, state change, and LLM call in a visual timeline.
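
Tracing is enabled through environment variables; as a sketch, assuming a LangSmith account and API key:

import os

# Set these before the first LLM call; traces then appear in the LangSmith UI
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "lsv2-your-key-here"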

Keep your state schema tight. Every key you add to state gets serialized on every checkpoint. Don’t store large blobs or raw API responses – extract what you need and discard the rest.

Handle tool errors gracefully. Wrap tool functions in try/except and return error messages as strings rather than letting exceptions crash the graph. The LLM can often recover from a failed tool call if you tell it what went wrong.
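
As a sketch, here is a hypothetical safe_search wrapper around the search tool from earlier:

from langchain_core.tools import tool

@tool
def safe_search(query: str) -> str:
    """Search the web and return results, or a readable error message."""
    try:
        return str(search_tool.invoke(query))
    except Exception as exc:
        # Returning the error as text lets the LLM see it and retry or adapt
        return f"Search failed: {exc}"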