An AI agent without persistent memory forgets everything the moment your process dies. Every restart is a blank slate – no context, no conversation history, no learned preferences. That’s fine for toy demos. In production, your agent needs to pick up exactly where it left off.

LangGraph (the agent runtime from the LangChain team) solves this with checkpointers – pluggable backends that save your agent’s full state after every step. You compile your graph with a checkpointer, pass a thread_id at runtime, and the framework handles serialization, retrieval, and state restoration automatically.

Here’s how to move from volatile in-memory state to durable, production-grade persistence.

The In-Memory Baseline

Before adding persistence, make sure your agent works with MemorySaver. This stores state in a Python dictionary – fast, but gone when the process exits.

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver

llm = ChatOpenAI(model="gpt-4o")
memory = MemorySaver()

agent = create_react_agent(
    model=llm,
    tools=[],
    checkpointer=memory,
)

config = {"configurable": {"thread_id": "user-123"}}
response = agent.invoke(
    {"messages": [{"role": "user", "content": "My name is Alex."}]},
    config=config,
)
print(response["messages"][-1].content)

# Same thread_id = agent remembers the conversation
response = agent.invoke(
    {"messages": [{"role": "user", "content": "What's my name?"}]},
    config=config,
)
print(response["messages"][-1].content)  # "Your name is Alex."

The thread_id is the session key. Same thread, same memory. Different thread, fresh context. This pattern stays identical no matter which backend you swap in – the only thing that changes is the checkpointer.
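The keying behavior is easy to model: conceptually, a checkpointer is a map from thread_id to that thread's saved state. A minimal pure-Python sketch of the idea (illustrative only, not LangGraph's actual implementation):

```python
class ToyCheckpointer:
    """Illustrative only: maps thread_id -> accumulated messages."""

    def __init__(self):
        self.threads = {}  # thread_id -> list of messages

    def load(self, thread_id):
        return self.threads.get(thread_id, [])  # unknown thread: fresh context

    def save(self, thread_id, messages):
        self.threads[thread_id] = messages


def invoke(checkpointer, thread_id, user_message):
    history = checkpointer.load(thread_id)   # restore prior state
    history = history + [user_message]       # append the new turn
    checkpointer.save(thread_id, history)    # persist the updated state
    return history


cp = ToyCheckpointer()
invoke(cp, "user-123", "My name is Alex.")
print(len(invoke(cp, "user-123", "What's my name?")))  # 2 -- same thread, same memory
print(len(invoke(cp, "user-999", "Hello")))            # 1 -- different thread, fresh context
```

The real checkpointers do the same bookkeeping, but serialize the full graph state after every step and store it in a durable backend.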

Persist to SQLite

SQLite is the fastest path to persistence that survives restarts. No server to run, no connection strings to configure – just a file on disk.

pip install langgraph-checkpoint-sqlite
from langgraph.checkpoint.sqlite import SqliteSaver

# File-based persistence -- survives process restarts
with SqliteSaver.from_conn_string("agent_memory.db") as checkpointer:
    agent = create_react_agent(
        model=llm,
        tools=[],
        checkpointer=checkpointer,
    )

    config = {"configurable": {"thread_id": "user-456"}}
    agent.invoke(
        {"messages": [{"role": "user", "content": "Remember: I prefer dark mode."}]},
        config=config,
    )

Kill the process. Restart it. Run the same code with the same thread_id, and the agent still knows you prefer dark mode. The checkpoint tables get created automatically in the SQLite file.
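The durability comes from SQLite itself, not from LangGraph. You can see the same property with the standard-library sqlite3 module: write through one connection, close it (simulating a process exit), and a fresh connection still finds the data. This is an illustrative sketch, not LangGraph's actual checkpoint schema:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "demo.db")

# "First process": write a checkpoint-like row, then exit
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE IF NOT EXISTS kv (thread_id TEXT, value TEXT)")
conn.execute("INSERT INTO kv VALUES (?, ?)", ("user-456", "prefers dark mode"))
conn.commit()
conn.close()

# "Second process": a brand-new connection still sees the row
conn = sqlite3.connect(db_path)
row = conn.execute(
    "SELECT value FROM kv WHERE thread_id = ?", ("user-456",)
).fetchone()
conn.close()
print(row[0])  # prefers dark mode
```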

If you need async support (which you will in any FastAPI or async web app), use AsyncSqliteSaver:

from langgraph.checkpoint.sqlite.aio import AsyncSqliteSaver

async with AsyncSqliteSaver.from_conn_string("agent_memory.db") as checkpointer:
    agent = create_react_agent(
        model=llm,
        tools=[],
        checkpointer=checkpointer,
    )
    response = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "What's my preference?"}]},
        config={"configurable": {"thread_id": "user-456"}},
    )

SQLite works well for single-instance deployments, local development, and prototypes. Once you need multiple app instances hitting the same memory store, move to PostgreSQL.

Persist to PostgreSQL

PostgreSQL is the production choice. It handles concurrent writes from multiple agent instances, supports proper transactions, and you probably already run it in your stack.

pip install langgraph-checkpoint-postgres
from langgraph.checkpoint.postgres import PostgresSaver

DB_URI = "postgresql://user:password@localhost:5432/agent_memory?sslmode=disable"

with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    checkpointer.setup()  # Creates checkpoint tables -- run once

    agent = create_react_agent(
        model=llm,
        tools=[],
        checkpointer=checkpointer,
    )

    config = {"configurable": {"thread_id": "session-abc"}}
    agent.invoke(
        {"messages": [{"role": "user", "content": "I'm working on project Atlas."}]},
        config=config,
    )

The .setup() call is critical on first run – it creates the required tables in your database. Skip it and you’ll get an UndefinedTable error:

psycopg.errors.UndefinedTable: relation "checkpoints" does not exist

For async applications, use AsyncPostgresSaver with the same connection string:

from langgraph.checkpoint.postgres.aio import AsyncPostgresSaver

async with AsyncPostgresSaver.from_conn_string(DB_URI) as checkpointer:
    await checkpointer.setup()
    agent = create_react_agent(
        model=llm,
        tools=[],
        checkpointer=checkpointer,
    )
    # Use agent.ainvoke() for async calls

When manually creating psycopg connections instead of using from_conn_string, you must set autocommit=True and use row_factory=dict_row:

import psycopg
from psycopg.rows import dict_row

conn = psycopg.connect(
    DB_URI,
    autocommit=True,
    row_factory=dict_row,
)
checkpointer = PostgresSaver(conn)
checkpointer.setup()

Miss autocommit=True and .setup() silently fails to commit the checkpoint tables. Your agent will appear to work until the first restart, when it finds an empty database.

Add Cross-Thread Memory with LangGraph Store

Checkpointers give you per-thread memory – each thread_id gets its own isolated conversation history. But what if you want your agent to remember facts across different threads? A user mentions their timezone in one conversation, and you want the agent to know it in every future conversation.

That’s what the LangGraph InMemoryStore (and its persistent variants) is for. It’s a key-value store that lives outside the checkpoint system.

from langgraph.store.memory import InMemoryStore
from langgraph.prebuilt import create_react_agent

store = InMemoryStore()

agent = create_react_agent(
    model=llm,
    tools=[],
    checkpointer=MemorySaver(),
    store=store,
)

The store uses a namespace hierarchy to organize data. Your agent can read and write to it from tools and nodes; in a tool, declare a store argument annotated with InjectedStore and LangGraph supplies the store at call time:

from typing import Annotated

from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.prebuilt import InjectedStore
from langgraph.store.base import BaseStore

@tool
def save_preference(
    key: str,
    value: str,
    config: RunnableConfig,
    store: Annotated[BaseStore, InjectedStore()],
) -> str:
    """Save a user preference for future reference."""
    user_id = config["configurable"].get("user_id", "default")
    store.put(("preferences", user_id), key, {"value": value})
    return f"Saved {key} = {value}"

@tool
def get_preference(
    key: str,
    config: RunnableConfig,
    store: Annotated[BaseStore, InjectedStore()],
) -> str:
    """Retrieve a saved user preference."""
    user_id = config["configurable"].get("user_id", "default")
    item = store.get(("preferences", user_id), key)
    if item:
        return item.value.get("value", "Not found")
    return "No preference saved"

This store persists across threads. A user can set their timezone in thread A, start a new conversation in thread B, and the agent can still retrieve it.
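The namespace-plus-key model is simple to picture: a hierarchical tuple scopes the data (here, per user), and the key identifies the item within that scope. A toy model of the put/get semantics (not the real BaseStore implementation):

```python
class ToyStore:
    """Illustrative: data[(namespace_tuple, key)] = value dict."""

    def __init__(self):
        self.data = {}

    def put(self, namespace, key, value):
        self.data[(namespace, key)] = value

    def get(self, namespace, key):
        return self.data.get((namespace, key))


store = ToyStore()

# Thread A saves a fact under the user's namespace
store.put(("preferences", "user-123"), "timezone", {"value": "Europe/Berlin"})

# Thread B -- a completely different conversation -- reads it back,
# because the namespace is keyed by user, not by thread_id
item = store.get(("preferences", "user-123"), "timezone")
print(item["value"])  # Europe/Berlin
```

Because the namespace is derived from a stable user_id rather than the per-conversation thread_id, every thread belonging to that user sees the same facts.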

Memory Strategy Comparison

Not every agent needs the same memory backend. Here’s when to use what:

| Backend | Best For | Limits |
| --- | --- | --- |
| MemorySaver | Development, testing, short-lived scripts | Lost on restart |
| SqliteSaver | Single-instance apps, local tools, prototypes | Single-writer only |
| PostgresSaver | Production multi-instance deployments | Requires a running Postgres server |
| InMemoryStore | Cross-thread facts, user profiles | Lost on restart (pair with a persistent store) |

For production, the typical pattern is PostgresSaver for checkpointing (per-thread conversation history) combined with a persistent store such as PostgresStore for cross-thread knowledge.

Common Errors and Fixes

ModuleNotFoundError: No module named 'langgraph.checkpoint.sqlite'

The checkpoint backends are separate packages. Install the one you need:

pip install langgraph-checkpoint-sqlite   # for SqliteSaver
pip install langgraph-checkpoint-postgres  # for PostgresSaver

InvalidUpdateError: Expected dict, got list

Your state schema is missing a reducer. If a state key holds a list (like messages), annotate it:

from typing import Annotated

from typing_extensions import TypedDict
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]  # reducer merges updates into the list
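What a reducer does can be shown in a few lines: when a node returns a partial update, LangGraph combines the old and new values for each key using that key's reducer instead of overwriting. A schematic sketch of the merge rule (the real add_messages also deduplicates by message ID; this toy version just appends):

```python
def apply_update(state, update, reducers):
    """Merge a node's partial update into state, key by key."""
    merged = dict(state)
    for key, new_value in update.items():
        reducer = reducers.get(key)
        if reducer is not None:
            merged[key] = reducer(merged.get(key, []), new_value)  # combine old + new
        else:
            merged[key] = new_value  # no reducer: last write wins
    return merged


reducers = {"messages": lambda old, new: old + new}  # list-append reducer

state = {"messages": [{"role": "user", "content": "Hi"}]}
state = apply_update(
    state, {"messages": [{"role": "ai", "content": "Hello!"}]}, reducers
)
print(len(state["messages"]))  # 2 -- appended, not replaced
```

Without the reducer, the node's returned list would simply replace the existing one, which is why omitting the annotation produces the type error above.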

psycopg.errors.UndefinedTable: relation "checkpoints" does not exist

Call checkpointer.setup() before invoking your agent. This creates the required tables.

Checkpoint data growing too large

Every state key gets serialized at every step. Don’t store raw API responses, large files, or full document contents in state. Extract what you need into concise fields and let the raw data live elsewhere.
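One way to keep checkpoints small is to summarize a bulky payload into just the fields the agent needs before writing it into state. An illustrative sketch (the field names here are made up):

```python
def summarize_api_response(raw: dict) -> dict:
    """Extract only the fields the agent needs; drop the bulky payload."""
    results = raw.get("results", [{}])
    return {
        "status": raw.get("status"),
        "result_count": len(raw.get("results", [])),
        "top_result_id": results[0].get("id"),
    }


raw = {"status": "ok", "results": [{"id": "r1", "body": "x" * 100_000}]}
compact = summarize_api_response(raw)

# Store `compact` in graph state; keep `raw` elsewhere (object storage, a cache)
print(compact)  # {'status': 'ok', 'result_count': 1, 'top_result_id': 'r1'}
```

The checkpointer then serializes a few short fields per step instead of a 100 KB blob.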

Legacy Memory Classes

If you’re reading older tutorials, you’ll encounter ConversationBufferMemory, ConversationSummaryMemory, and VectorStoreRetrieverMemory from langchain.memory. These were deprecated in LangChain v0.3.1. They still work (removal is planned for v1.0), but all new code should use LangGraph checkpointers instead.

The migration is straightforward: remove the memory object, add a checkpointer, and pass thread_id in your config. The checkpointer handles everything the old memory classes did, plus it gives you time travel, fault tolerance, and human-in-the-loop patterns for free.