MCP (Model Context Protocol) is Anthropic’s open standard for wiring AI models to external tools. You run an MCP server that exposes tools – file access, web search, database queries, whatever you need – and your agent connects as a client, discovers those tools, and passes them to Claude. Claude picks which tools to call, you execute them through the MCP session, and feed results back. The model never touches your infrastructure directly.
Here’s the minimal setup to connect to an MCP tool server and let Claude call tools through it:
```shell
pip install mcp anthropic
```

```python
import asyncio
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from anthropic import Anthropic


async def main():
    client = Anthropic()
    exit_stack = AsyncExitStack()

    # Connect to an MCP server (e.g., a filesystem tool server)
    server_params = StdioServerParameters(
        command="python",
        args=["my_mcp_server.py"],
        env=None,
    )
    stdio_transport = await exit_stack.enter_async_context(stdio_client(server_params))
    read_stream, write_stream = stdio_transport
    session = await exit_stack.enter_async_context(
        ClientSession(read_stream, write_stream)
    )
    await session.initialize()

    # Discover available tools
    tools_response = await session.list_tools()
    print("Available tools:", [t.name for t in tools_response.tools])

    await exit_stack.aclose()


asyncio.run(main())
```
That connects to an MCP server over stdio, initializes the session, and lists every tool the server exposes. The StdioServerParameters tells the MCP client how to spawn the server process. You can point it at any MCP-compatible server – a Python script, a Node.js process, or a prebuilt server like @modelcontextprotocol/server-filesystem.
Claude’s Messages API expects tools as a list of dictionaries with name, description, and input_schema. MCP tools come back with .name, .description, and .inputSchema. The mapping is straightforward:
```python
def mcp_tools_to_anthropic(tools_response):
    """Convert MCP tool definitions to Anthropic API format."""
    return [
        {
            "name": tool.name,
            "description": tool.description or "",
            "input_schema": tool.inputSchema,
        }
        for tool in tools_response.tools
    ]
```
Now you can pass these directly to client.messages.create():
```python
tools_response = await session.list_tools()
anthropic_tools = mcp_tools_to_anthropic(tools_response)

response = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=4096,
    tools=anthropic_tools,
    messages=[{"role": "user", "content": "List all Python files in the current directory"}],
)

print(response.stop_reason)  # "tool_use" if Claude wants to call a tool
```
When stop_reason is "tool_use", Claude’s response contains one or more tool_use blocks. Each block has a name, input (the arguments), and an id you need to reference when sending results back.
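Before wiring up the full loop, it helps to see the shape of those blocks. Here's a small sketch that pulls tool calls out of a response's content list; the SimpleNamespace objects stand in for the block objects the API returns, and the id and tool name are made up for illustration:

```python
from types import SimpleNamespace

def extract_tool_calls(content):
    """Return (id, name, input) for every tool_use block in a response."""
    return [(b.id, b.name, b.input) for b in content if b.type == "tool_use"]

# Simulated response content; real blocks carry the same attributes
content = [
    SimpleNamespace(type="text", text="I'll list the files."),
    SimpleNamespace(type="tool_use", id="toolu_01", name="list_files",
                    input={"pattern": "*.py"}),
]
print(extract_tool_calls(content))
# [('toolu_01', 'list_files', {'pattern': '*.py'})]
```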
## The Agent Loop
The core of any tool-calling agent is the loop: send a message, check if Claude wants tools, execute them, feed results back, repeat until Claude gives a final text response. Here’s a complete agent that handles single and multi-step tool chains:
```python
import asyncio
import json
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from anthropic import Anthropic


def mcp_tools_to_anthropic(tools_response):
    return [
        {
            "name": tool.name,
            "description": tool.description or "",
            "input_schema": tool.inputSchema,
        }
        for tool in tools_response.tools
    ]


async def agent_loop(query: str, session: ClientSession, client: Anthropic):
    """Run the agent loop: Claude decides tools, we execute, feed back results."""
    tools_response = await session.list_tools()
    anthropic_tools = mcp_tools_to_anthropic(tools_response)

    messages = [{"role": "user", "content": query}]

    while True:
        response = client.messages.create(
            model="claude-sonnet-4-5",
            max_tokens=4096,
            tools=anthropic_tools,
            messages=messages,
        )

        # Collect the assistant's full response (text + tool_use blocks)
        assistant_content = []
        for block in response.content:
            if block.type == "text":
                print(block.text)
            assistant_content.append(block)
        messages.append({"role": "assistant", "content": assistant_content})

        # If Claude is done talking, exit the loop
        if response.stop_reason == "end_turn":
            break

        # Execute every tool call Claude requested
        tool_results = []
        for block in response.content:
            if block.type == "tool_use":
                print(f" -> Calling tool: {block.name}({json.dumps(block.input)})")
                result = await session.call_tool(block.name, block.input)

                # Extract text from MCP result content
                result_text = ""
                for item in result.content:
                    if hasattr(item, "text"):
                        result_text += item.text

                tool_results.append(
                    {
                        "type": "tool_result",
                        "tool_use_id": block.id,
                        "content": result_text,
                    }
                )

        messages.append({"role": "user", "content": tool_results})


async def main():
    client = Anthropic()
    exit_stack = AsyncExitStack()

    server_params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp/workspace"],
        env=None,
    )
    stdio_transport = await exit_stack.enter_async_context(stdio_client(server_params))
    read_stream, write_stream = stdio_transport
    session = await exit_stack.enter_async_context(
        ClientSession(read_stream, write_stream)
    )
    await session.initialize()

    await agent_loop(
        query="Create a file called notes.txt with the text 'hello world', then read it back to confirm",
        session=session,
        client=client,
    )

    await exit_stack.aclose()


asyncio.run(main())
```
Walk through what happens here:

- Claude gets the user query plus tool descriptions.
- Claude responds with stop_reason: "tool_use" and one or more tool_use blocks.
- We execute each tool through session.call_tool(), which sends the request to the MCP server.
- We package the results as tool_result messages and append them.
- We call Claude again with the full conversation. Claude might call more tools (multi-step chain) or produce a final text response.
- The loop ends when stop_reason is "end_turn".
This handles multi-step chains naturally. If Claude writes a file and then wants to read it back, that’s two iterations of the loop. The conversation history accumulates, so Claude always sees the full context of what it’s done.
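To make that accumulation concrete, here's a hand-written sketch of what messages looks like after one tool round (shortened ids, not real API output). Note the strict user/assistant alternation the Messages API requires: tool results always go back in a user turn.

```python
# Hypothetical conversation state after one write_file tool round
messages = [
    {"role": "user", "content": "Create a file called notes.txt"},
    {"role": "assistant", "content": [
        {"type": "text", "text": "Creating the file now."},
        {"type": "tool_use", "id": "toolu_01", "name": "write_file",
         "input": {"path": "notes.txt", "content": "hello world"}},
    ]},
    {"role": "user", "content": [
        {"type": "tool_result", "tool_use_id": "toolu_01", "content": "OK"},
    ]},
]

roles = [m["role"] for m in messages]
print(roles)  # ['user', 'assistant', 'user']
```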
## Connecting to Multiple MCP Servers
Real agents often need tools from several servers – filesystem access from one, web search from another, database queries from a third. You can connect to multiple servers and merge their tool lists:
```python
async def connect_to_servers(exit_stack: AsyncExitStack, server_configs: list):
    """Connect to multiple MCP servers, return merged sessions and tools."""
    tool_to_session = {}
    all_tools = []

    for config in server_configs:
        server_params = StdioServerParameters(
            command=config["command"],
            args=config["args"],
            env=config.get("env"),
        )
        stdio_transport = await exit_stack.enter_async_context(
            stdio_client(server_params)
        )
        read_stream, write_stream = stdio_transport
        session = await exit_stack.enter_async_context(
            ClientSession(read_stream, write_stream)
        )
        await session.initialize()

        tools_response = await session.list_tools()
        for tool in tools_response.tools:
            all_tools.append(
                {
                    "name": tool.name,
                    "description": tool.description or "",
                    "input_schema": tool.inputSchema,
                }
            )
            # Map tool name -> session so we know where to route calls
            tool_to_session[tool.name] = session

    return tool_to_session, all_tools
```
Then in your agent loop, route each tool call to the right session:
```python
for block in response.content:
    if block.type == "tool_use":
        target_session = tool_to_session[block.name]
        result = await target_session.call_tool(block.name, block.input)
```
This way Claude sees a flat list of all available tools and doesn’t need to know which server provides which tool. You handle the routing.
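One caveat with the flat list: if two servers expose a tool with the same name, the later one silently overwrites the earlier one in tool_to_session. One way around it is to qualify each tool name with a server label; the helper below is a hypothetical sketch (not part of the MCP SDK), shown with plain labels instead of live sessions:

```python
def merge_with_prefix(server_tools: dict) -> tuple[list, dict]:
    """Prefix tool names with a server label so names can't collide.
    server_tools maps a label to that server's Anthropic-format tool list."""
    merged, routing = [], {}
    for label, tools in server_tools.items():
        for tool in tools:
            qualified = f"{label}__{tool['name']}"  # still a valid tool name
            merged.append({**tool, "name": qualified})
            routing[qualified] = label  # in real code: the server's session
    return merged, routing

merged, routing = merge_with_prefix({
    "fs": [{"name": "read_file", "description": "Read a file", "input_schema": {}}],
    "db": [{"name": "read_file", "description": "Read a row blob", "input_schema": {}}],
})
print([t["name"] for t in merged])  # ['fs__read_file', 'db__read_file']
```

When executing a call you would strip the label back off (split on the separator) before passing the original name to session.call_tool().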
## Common Errors and Fixes
FileNotFoundError when connecting to a server – The MCP client spawns the server as a subprocess. If command is "python" but your system only has python3 on PATH, the spawn fails before the server ever starts. Fix it by using the right binary:
```python
server_params = StdioServerParameters(
    command="python3",  # or use shutil.which("python3") for portability
    args=["server.py"],
    env=None,
)
```
Tool execution failed with no useful message – MCP tool errors don't raise exceptions on the client side; the result object comes back with its isError flag set and the error details in result.content. Always check:
```python
result = await session.call_tool(block.name, block.input)
if result.isError:
    print(f"Tool error: {result.content}")
```
Claude ignores available tools and gives a text-only response – This usually means the tool descriptions are too vague. MCP tools get their descriptions from the server’s @mcp.tool() docstrings. Make them specific: instead of “Search for things”, write “Search the filesystem for files matching a glob pattern. Returns a list of absolute paths.”
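With the Python SDK's FastMCP, the docstring on a decorated function is what becomes the tool description. Here's a sketch of the difference using plain functions so it runs without a server (search_files is a hypothetical tool, not a real MCP server tool):

```python
import glob
import os

# Vague: Claude can't tell when this tool applies
def search(query: str):
    """Search for things."""

# Specific: states what it searches, the input format, and the return value
def search_files(pattern: str) -> list[str]:
    """Search the filesystem for files matching a glob pattern.
    Returns a list of absolute paths."""
    return [os.path.abspath(p) for p in glob.glob(pattern)]

# Under FastMCP's @mcp.tool(), this docstring is the description Claude sees
print(search_files.__doc__.splitlines()[0])
# Search the filesystem for files matching a glob pattern.
```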
BadRequestError: tool_use_id not found – Every tool_result must reference the exact tool_use_id from the corresponding tool_use block. If you’re processing multiple tool calls in one turn, make sure you match each result to the right ID:
```python
tool_results.append({
    "type": "tool_result",
    "tool_use_id": block.id,  # must match the tool_use block's id
    "content": result_text,
})
```
Agent loops forever – Add a turn counter and bail out after a reasonable limit:
```python
max_turns = 20
turn = 0
while turn < max_turns:
    turn += 1
    response = client.messages.create(...)
    if response.stop_reason == "end_turn":
        break
else:
    # while/else: runs only if the loop exhausted without hitting break
    print("Agent hit max turns limit")
```
Connection timeout on slow servers – Some MCP servers take a few seconds to start (especially Node.js ones that run npx). The default timeout should handle most cases, but if you’re seeing timeouts, make sure the server process actually starts correctly by testing it manually first: run the command in your terminal and check for errors.
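You can also surface hangs in your own code rather than waiting on library defaults by capping initialization with asyncio.wait_for. The slow coroutine below simulates a server that never finishes starting; in real code you would wrap session.initialize() the same way:

```python
import asyncio

async def slow_initialize():
    # Stand-in for session.initialize() against a hung server
    await asyncio.sleep(10)

async def connect_with_timeout() -> str:
    try:
        # In real code: await asyncio.wait_for(session.initialize(), timeout=30)
        await asyncio.wait_for(slow_initialize(), timeout=0.1)
        return "connected"
    except asyncio.TimeoutError:
        return "timed out"

print(asyncio.run(connect_with_timeout()))  # timed out
```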