Most agent workflows break the moment you need a human to sign off on something. The agent either barrels ahead without asking or stops dead with no way to resume. LangGraph fixes this with checkpointed workflows – your agent pauses at specific nodes, waits for human input, and picks up right where it left off.

You’ll build a workflow agent that drafts content, pauses for human approval, collects structured feedback, and revises based on that feedback. The whole thing runs as a state machine with clean entry points for human interaction.

Setting Up the LangGraph Workflow

Install the dependencies first:

pip install langgraph langchain-openai langchain-core

Define the state schema and build a basic graph with an approval checkpoint. Every node in LangGraph reads from and writes to a shared state object, so you need to define that structure upfront.

from typing import TypedDict, Literal
from langgraph.graph import StateGraph, END, START
from langgraph.checkpoint.memory import MemorySaver
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

class WorkflowState(TypedDict):
    task: str
    draft: str
    feedback: str
    approval_status: Literal["pending", "approved", "rejected", "revise"]
    revision_count: int
    final_output: str

llm = ChatOpenAI(model="gpt-4o", temperature=0.3)

def generate_draft(state: WorkflowState) -> dict:
    """Produce an initial draft, or a revision that addresses human feedback."""
    task = state["task"]
    revision_count = state.get("revision_count", 0)
    feedback = state.get("feedback", "")

    messages = [
        SystemMessage(content="You are a technical writer. Produce clear, accurate content."),
        HumanMessage(content=f"Task: {task}"),
    ]

    if revision_count > 0 and feedback:
        messages.append(
            HumanMessage(content=f"Previous feedback to address:\n{feedback}")
        )

    response = llm.invoke(messages)
    return {
        "draft": response.content,
        "approval_status": "pending",
        "revision_count": revision_count + 1,
    }

This generate_draft node handles both the initial draft and revisions. When revision_count is above zero and feedback exists, it appends the human feedback to the prompt so the LLM knows what to fix.
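To see that branching without calling the model, here is the same prompt-assembly logic as a pure function. This is an illustration, not part of the workflow: messages are shown as plain (role, text) tuples rather than LangChain message objects.

```python
# Illustration of the prompt-assembly branch in generate_draft, with
# messages as plain (role, text) tuples instead of LangChain objects.
def build_messages(task: str, revision_count: int, feedback: str) -> list:
    messages = [
        ("system", "You are a technical writer. Produce clear, accurate content."),
        ("human", f"Task: {task}"),
    ]
    # Feedback is only appended on revision passes, never on the first draft
    if revision_count > 0 and feedback:
        messages.append(("human", f"Previous feedback to address:\n{feedback}"))
    return messages

first_draft_msgs = build_messages("Write a migration guide", 0, "")
revision_msgs = build_messages("Write a migration guide", 1, "Add examples")
```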

Building the Approval Gate

The approval gate is a node whose only job is to pause. It calls LangGraph's interrupt mechanism to halt the graph and wait for external input, then writes the human's decision back into state once execution resumes.

from langgraph.types import interrupt, Command

def approval_gate(state: WorkflowState) -> dict:
    """Pause execution and wait for human decision."""
    draft = state["draft"]
    revision_count = state["revision_count"]

    decision = interrupt({
        "draft": draft,
        "revision_count": revision_count,
        "prompt": "Review the draft above. Respond with: approved, rejected, or revise",
    })

    return {
        "approval_status": decision["status"],
        "feedback": decision.get("feedback", ""),
    }

def route_after_approval(state: WorkflowState) -> str:
    status = state["approval_status"]
    if status == "approved":
        return "finalize"
    elif status == "revise":
        if state["revision_count"] >= 3:
            return "finalize"
        return "generate_draft"
    else:
        return END

def finalize(state: WorkflowState) -> dict:
    return {"final_output": state["draft"]}

The interrupt() call suspends the graph and surfaces its payload to whoever is driving the workflow. When the human responds, LangGraph re-runs the node from the top, and this time interrupt() returns the resume payload instead of pausing. The routing function then checks the approval status and either loops back to drafting, finalizes, or ends the workflow.
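Because route_after_approval is a pure function of state, you can sanity-check the branching without building a graph at all. A standalone sketch, with END stubbed as a plain string (the real graph uses langgraph.graph.END):

```python
# Standalone copy of the routing logic for testing. END is stubbed as a
# string here; in the real graph it is langgraph.graph.END.
END = "__end__"

def route_after_approval(state: dict) -> str:
    status = state["approval_status"]
    if status == "approved":
        return "finalize"
    elif status == "revise":
        # Force finalization after three drafts to avoid endless loops
        if state["revision_count"] >= 3:
            return "finalize"
        return "generate_draft"
    else:
        return END
```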

Wiring the Graph and Running It

Now connect the nodes into a graph and add a checkpointer so state persists across interruptions:

builder = StateGraph(WorkflowState)

builder.add_node("generate_draft", generate_draft)
builder.add_node("approval_gate", approval_gate)
builder.add_node("finalize", finalize)

builder.add_edge(START, "generate_draft")
builder.add_edge("generate_draft", "approval_gate")
builder.add_conditional_edges("approval_gate", route_after_approval)
builder.add_edge("finalize", END)

checkpointer = MemorySaver()
graph = builder.compile(checkpointer=checkpointer)

# Start the workflow
config = {"configurable": {"thread_id": "review-session-1"}}
initial_input = {
    "task": "Write a migration guide from REST to GraphQL",
    "draft": "",
    "feedback": "",
    "approval_status": "pending",
    "revision_count": 0,
    "final_output": "",
}

# First run -- generates draft, then pauses at approval_gate
for event in graph.stream(initial_input, config, stream_mode="updates"):
    print(event)

The graph runs until it hits the interrupt() call inside approval_gate, then stops. The state is saved to the checkpointer automatically.

Resuming with Human Feedback

To resume after the human reviews the draft, use Command to inject the decision back into the graph:

# Human reviewed the draft and wants revisions
human_decision = Command(
    resume={
        "status": "revise",
        "feedback": "Add a section on N+1 query problems. The batching example needs error handling.",
    }
)

# Resume the graph -- routes back to generate_draft with feedback
for event in graph.stream(human_decision, config, stream_mode="updates"):
    print(event)

The graph re-executes approval_gate, where interrupt() now returns the decision instead of pausing. It routes to generate_draft (because status is "revise"), generates a new draft incorporating the feedback, and pauses again at the approval gate. Repeat this loop until the human approves or the revision cap is hit.

When the human is satisfied:

13
human_decision = Command(
    resume={
        "status": "approved",
        "feedback": "",
    }
)

for event in graph.stream(human_decision, config, stream_mode="updates"):
    print(event)

# Fetch the final state
final_state = graph.get_state(config)
print(final_state.values["final_output"])

Each thread ID tracks a separate conversation, so you can run multiple approval workflows in parallel without state collisions.

Common Errors and Fixes

InvalidUpdateError: Expected dict, got Command – You passed Command as input when using an older LangGraph version. Make sure you’re on langgraph>=0.2.50. Run pip install --upgrade langgraph and check the version with python -c "import langgraph; print(langgraph.__version__)".

GraphInterrupt raised instead of pausing – This happens if you don’t pass a checkpointer when compiling the graph. The interrupt() call requires persistence. Always pass checkpointer=MemorySaver() (or a database-backed checkpointer) to builder.compile().

State not updating after resume – Double-check that your Command(resume=...) payload matches what the interrupt() return value expects. If approval_gate reads decision["status"] and you send {"approval": "approved"} instead of {"status": "approved"}, the key lookup fails silently or throws a KeyError.
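One way to fail loudly instead of silently is to validate the resume payload at the top of the node. validate_resume below is a hypothetical helper for illustration, not part of LangGraph:

```python
# Hypothetical helper (not part of LangGraph): reject malformed resume
# payloads up front instead of letting a KeyError surface mid-node.
VALID_STATUSES = {"approved", "rejected", "revise"}

def validate_resume(decision: dict) -> dict:
    if "status" not in decision:
        raise ValueError(f"resume payload missing 'status', got keys: {sorted(decision)}")
    if decision["status"] not in VALID_STATUSES:
        raise ValueError(f"unknown status: {decision['status']!r}")
    return decision
```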

KeyError: 'thread_id' – The config dict must contain a configurable key with thread_id inside it. A common mistake is passing {"thread_id": "abc"} directly instead of {"configurable": {"thread_id": "abc"}}.
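The two shapes side by side:

```python
# Wrong -- thread_id at the top level is ignored and triggers the KeyError
bad_config = {"thread_id": "review-session-1"}

# Right -- thread_id nested under "configurable"
good_config = {"configurable": {"thread_id": "review-session-1"}}
```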

Infinite revision loops – Always cap your revision count. The route_after_approval function above forces finalization after 3 revisions. Without a cap, a persistent “revise” response loops forever. Adjust the limit based on your use case, but always have one.

NodeInterrupt not serializable – If you’re using a database-backed checkpointer like SqliteSaver, make sure the data you pass to interrupt() is JSON-serializable. Don’t pass class instances or lambda functions – stick to dicts, lists, and primitives.
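A cheap guard is to round-trip the payload through json before handing it to interrupt(). assert_serializable is a hypothetical helper, shown here as a sketch:

```python
import json

# Hypothetical helper: raises TypeError early if a payload would break a
# JSON-based checkpointer such as SqliteSaver.
def assert_serializable(payload: dict) -> dict:
    json.dumps(payload)  # TypeError here means a non-serializable value
    return payload

safe_payload = assert_serializable({"draft": "text", "revision_count": 2})
```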