Most agent workflows break the moment you need a human to sign off on something. The agent either barrels ahead without asking or stops dead with no way to resume. LangGraph fixes this with checkpointed workflows – your agent pauses at specific nodes, waits for human input, and picks up right where it left off.
You’ll build a workflow agent that drafts content, pauses for human approval, collects structured feedback, and revises based on that feedback. The whole thing runs as a state machine with clean entry points for human interaction.
Setting Up the LangGraph Workflow
Install the dependencies first:
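Assuming Anthropic as the model provider (any LangChain chat-model integration works), a minimal install looks like this. The article's error section calls for langgraph 0.2.50 or newer, so pin at least that:

```shell
pip install -U "langgraph>=0.2.50" langchain-anthropic
```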
Define the state schema and build a basic graph with an approval checkpoint. Every node in LangGraph reads from and writes to a shared state object, so you need to define that structure upfront.
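Here is one way that can look; the field names, the topic field, and the model id are illustrative assumptions, so adapt them to your content type:

```python
from typing import TypedDict

class DraftState(TypedDict, total=False):
    topic: str            # what to write about
    draft: str            # current draft text
    feedback: str         # human feedback from the last review
    revision_count: int   # completed revision passes
    status: str           # "revise", "approved", "rejected", or "final"

def build_prompt(state: DraftState) -> str:
    """Build the drafting prompt; revision passes append the feedback."""
    prompt = f"Write a short post about {state['topic']}."
    if state.get("revision_count", 0) > 0:
        prompt += f"\nRevise the previous draft. Feedback: {state['feedback']}"
    return prompt

def generate_draft(state: DraftState) -> dict:
    # Assumes a configured LangChain chat model; the model id is illustrative
    from langchain_anthropic import ChatAnthropic
    llm = ChatAnthropic(model="claude-sonnet-4-5")
    return {"draft": llm.invoke(build_prompt(state)).content}
```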
This generate_draft node handles both the initial draft and revisions. When revision_count is above zero, it appends the human feedback to the prompt so the LLM knows what to fix.
Building the Approval Gate
The approval gate is a node that does nothing on its own – it exists as a breakpoint where execution pauses. You use LangGraph’s interrupt mechanism to halt the graph and wait for external input.
The interrupt() call suspends the graph and surfaces data to whoever is driving the workflow. When the human responds, LangGraph resumes execution from that exact node with the provided values. The routing function checks the approval status and either loops back to drafting, finalizes, or ends the workflow.
Wiring the Graph and Running It
Now connect the nodes into a graph and add a checkpointer so state persists across interruptions:
The graph runs until it hits the interrupt() call inside approval_gate, then stops. The state is saved to the checkpointer automatically.
Resuming with Human Feedback
To resume after the human reviews the draft, use Command to inject the decision back into the graph:
The graph picks up at approval_gate, processes the decision, routes to generate_draft (because status is "revise"), generates a new draft incorporating the feedback, and pauses again at the approval gate. You can repeat this loop until the human approves or the revision cap is hit.
When the human is satisfied:
Each thread ID tracks a separate conversation, so you can run multiple approval workflows in parallel without state collisions.
Common Errors and Fixes
InvalidUpdateError: Expected dict, got Command – You passed Command as input when using an older LangGraph version. Make sure you’re on langgraph>=0.2.50. Run pip install --upgrade langgraph and check the version with python -c "import langgraph; print(langgraph.__version__)".
GraphInterrupt raised instead of pausing – This happens if you don’t pass a checkpointer when compiling the graph. The interrupt() call requires persistence. Always pass checkpointer=MemorySaver() (or a database-backed checkpointer) to builder.compile().
State not updating after resume – Double-check that your Command(resume=...) payload matches what the interrupt() return value expects. If approval_gate reads decision["status"] and you send {"approval": "approved"} instead of {"status": "approved"}, the key lookup fails silently or throws a KeyError.
KeyError: 'thread_id' – The config dict must contain a configurable key with thread_id inside it. A common mistake is passing {"thread_id": "abc"} directly instead of {"configurable": {"thread_id": "abc"}}.
Infinite revision loops – Always cap your revision count. The route_after_approval function above forces finalization after 3 revisions. Without a cap, a persistent “revise” response loops forever. Adjust the limit based on your use case, but always have one.
NodeInterrupt not serializable – If you’re using a database-backed checkpointer like SqliteSaver, make sure the data you pass to interrupt() is JSON-serializable. Don’t pass class instances or lambda functions – stick to dicts, lists, and primitives.
Related Guides
- How to Build a Document QA Agent with PDF Parsing and Tool Use
- How to Build a Web Research Agent with LLMs and Search APIs
- How to Build a Monitoring Agent with Prometheus Alerts and LLM Diagnosis
- How to Build a Tool-Calling Agent with Claude and MCP
- How to Build a SQL Query Agent with LLMs and Tool Calling
- How to Build an API Testing Agent with LLMs and Requests
- How to Build a ReAct Agent from Scratch with Python
- How to Build a Code Generation Agent with LLMs
- How to Build a Customer Support Agent with RAG and Tool Calling
- How to Build AI Agents with the Claude Agent SDK