Function calling lets an LLM request that your code run a specific function with structured arguments, then use the result to compose its answer. Both OpenAI and Anthropic support it, but the request format, response handling, and error messages differ enough to trip you up when switching between them. Here is how to set up both, side by side.

OpenAI: Responses API

OpenAI’s current API is the Responses API (replacing Chat Completions). You define tools as a list of function schemas, send them alongside your prompt, and check the response for function_call items.

import json
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "required": ["location"],
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and state, e.g. Portland, OR"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"]
                }
            },
            "additionalProperties": False,
        },
        "strict": True,
    }
]

# Step 1: send the user message with tools
response = client.responses.create(
    model="gpt-4.1",
    input=[{"role": "user", "content": "What's the weather in Portland, OR?"}],
    tools=tools,
    tool_choice="auto",
)

# Step 2: find the function call in the output
tool_call = next(
    item for item in response.output if item.type == "function_call"
)
args = json.loads(tool_call.arguments)
print(f"Function: {tool_call.name}, Args: {args}")
# Function: get_weather, Args: {'location': 'Portland, OR', 'unit': 'fahrenheit'}

# Step 3: execute your function and send the result back
weather_result = {"temp": 58, "condition": "overcast", "unit": "fahrenheit"}

followup_input = response.output + [
    {
        "type": "function_call_output",
        "call_id": tool_call.call_id,
        "output": json.dumps(weather_result),
    }
]

final = client.responses.create(
    model="gpt-4.1",
    input=[{"role": "user", "content": "What's the weather in Portland, OR?"}]
    + followup_input,
    tools=tools,
)
print(final.output_text)

A few things to notice. Setting "strict": True on the tool definition constrains the model to produce arguments that match your schema exactly: no missing required fields, no invented properties. Strict mode has two structural prerequisites: "additionalProperties": false must be set on the schema, and every key in properties must appear in required (which is why "unit" is required above). Enable it in production. The call_id from the tool call must be echoed back with the result; a mismatched ID produces a 400 error: No tool call found for function call output with call_id.
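One wrinkle: because strict mode forces every property into required, truly optional parameters need a workaround. The usual pattern (a sketch based on strict mode's rules, not shown in the example above) is a type union with null, so the model must emit the key but may set it to null:

```python
# Hypothetical strict-compatible schema where "unit" is effectively optional.
strict_params = {
    "type": "object",
    "required": ["location", "unit"],  # strict mode: every property listed
    "properties": {
        "location": {"type": "string"},
        "unit": {
            "type": ["string", "null"],          # null stands in for "omitted"
            "enum": ["celsius", "fahrenheit", None],  # mirror null in the enum
        },
    },
    "additionalProperties": False,
}

def is_strict_compatible(schema: dict) -> bool:
    """Check the two structural requirements of strict mode."""
    return (
        schema.get("additionalProperties") is False
        and set(schema.get("required", [])) == set(schema["properties"])
    )

print(is_strict_compatible(strict_params))  # True
```

A check like is_strict_compatible is worth running at tool-registration time, since a non-conforming schema fails at request time with a less obvious error.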

tool_choice options

The tool_choice parameter controls whether and which tools the model uses:

  • "auto" – the model decides whether to call a tool (default)
  • "required" – the model must call at least one tool
  • "none" – tool calling is disabled
  • {"type": "function", "name": "get_weather"} – force a specific function

Anthropic: Messages API

Anthropic’s Claude uses a different structure. Tools go in a top-level tools array, but schemas use input_schema instead of parameters. Responses come back as content blocks with type: "tool_use", and you return results as tool_result blocks.

import anthropic
import json

client = anthropic.Anthropic()

tools = [
    {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and state, e.g. Portland, OR"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"]
                }
            },
            "required": ["location"],
        },
    }
]

# Step 1: send the request
response = client.messages.create(
    model="claude-sonnet-4-5-20250514",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "What's the weather in Portland, OR?"}
    ],
)
# stop_reason will be "tool_use" when Claude wants to call a function
print(f"Stop reason: {response.stop_reason}")

# Step 2: extract the tool use block
tool_block = next(
    block for block in response.content if block.type == "tool_use"
)
print(f"Tool: {tool_block.name}, Input: {tool_block.input}")
# Tool: get_weather, Input: {'location': 'Portland, OR', 'unit': 'fahrenheit'}

# Step 3: execute the function, send the result back
weather_result = {"temp": 58, "condition": "overcast", "unit": "fahrenheit"}

final = client.messages.create(
    model="claude-sonnet-4-5-20250514",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "What's the weather in Portland, OR?"},
        {"role": "assistant", "content": response.content},
        {
            "role": "user",
            "content": [
                {
                    "type": "tool_result",
                    "tool_use_id": tool_block.id,
                    "content": json.dumps(weather_result),
                }
            ],
        },
    ],
)
print(final.content[0].text)

The key difference: Anthropic sends the tool result as a user message with a tool_result content block. You must include the full assistant response (with the tool_use block) before it, or you get a validation error. The tool_use_id must match the id from the tool use block, not a separate call_id.
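A small helper makes that packaging explicit. This is an illustrative sketch, not part of the SDK; it assumes the tool_use block shape shown above:

```python
import json

def build_tool_result_message(tool_blocks, results):
    """Package execution results as the user turn Anthropic expects.

    `tool_blocks` are the tool_use blocks from response.content; `results`
    maps each block's id to your function's return value. (The helper name
    and shape are illustrative, not part of the anthropic SDK.)
    """
    return {
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": block.id,  # must match the tool_use block's id
                "content": json.dumps(results[block.id]),
            }
            for block in tool_blocks
        ],
    }
```

Appending this message after the assistant message that contains the tool_use blocks satisfies the ordering requirement in one place, instead of hand-building the dict at every call site.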

tool_choice options

Claude’s tool_choice works differently from OpenAI’s:

  • {"type": "auto"} – Claude decides (default)
  • {"type": "any"} – Claude must use at least one tool, but picks which
  • {"type": "tool", "name": "get_weather"} – force a specific tool
  • {"type": "none"} – no tool use

You can also disable parallel tool calls with "auto", "any", or "tool" by adding "disable_parallel_tool_use": true to the tool_choice object.

Parallel Tool Calls

Both APIs support the model calling multiple tools in a single response. This matters for latency: if a user asks “What’s the weather in Portland and Seattle?”, you want both lookups to happen concurrently, not sequentially.

OpenAI returns multiple function_call items in response.output. You must return a function_call_output for each one, matched by call_id. You can disable parallel calls by setting parallel_tool_calls=False in the request.
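The OpenAI fan-out can be written as a reusable helper (the function name and dispatcher argument are illustrative, not SDK API):

```python
import json

def build_function_outputs(output_items, run_tool):
    """Execute every parallel function_call and pair each result by call_id.

    `output_items` is response.output from the Responses API; `run_tool`
    is your own dispatcher. (Illustrative helper, not part of the SDK.)
    """
    outputs = []
    for item in output_items:
        if item.type != "function_call":
            continue
        result = run_tool(item.name, json.loads(item.arguments))
        outputs.append({
            "type": "function_call_output",
            "call_id": item.call_id,  # pair the result to its call
            "output": json.dumps(result),
        })
    return outputs
```

Append these items after response.output in the next request's input, exactly as in the single-call example earlier.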

Anthropic returns multiple tool_use blocks in response.content. You must return all tool_result blocks in a single user message. To disable it, add "disable_parallel_tool_use": true to tool_choice.

# Anthropic: handling parallel tool results
tool_blocks = [b for b in response.content if b.type == "tool_use"]

results = []
for block in tool_blocks:
    # execute each tool call (could use asyncio.gather for concurrency)
    result = run_tool(block.name, block.input)
    results.append({
        "type": "tool_result",
        "tool_use_id": block.id,
        "content": json.dumps(result),
    })

final = client.messages.create(
    model="claude-sonnet-4-5-20250514",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": original_prompt},
        {"role": "assistant", "content": response.content},
        {"role": "user", "content": results},  # all results in one message
    ],
)

If you return only some of the tool results, Anthropic’s API will reject the request. OpenAI similarly returns a 400 if a function_call output is missing for any call in the batch.
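A cheap pre-flight check avoids that 400. This illustrative helper works for either provider, since both pair results to calls by ID:

```python
def missing_result_ids(call_ids, result_blocks):
    """Return the call IDs that have no matching result block yet.

    `call_ids` come from the tool_use blocks (Anthropic) or function_call
    items (OpenAI); result blocks carry "tool_use_id" or "call_id"
    respectively. (Illustrative helper.)
    """
    answered = {r.get("tool_use_id") or r.get("call_id") for r in result_blocks}
    return [i for i in call_ids if i not in answered]
```

Run it just before sending the follow-up request; a non-empty return value means the request would be rejected.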

Schema Enforcement

Both APIs offer strict schema validation, but they implement it differently.

OpenAI uses "strict": True on individual tool definitions. When enabled, the model is constrained at the token level to produce valid arguments. You must also set "additionalProperties": false in your schema for strict mode to work.

Anthropic recently added strict tool use via their Structured Outputs feature. Add "strict": true to your tool definition to get guaranteed schema conformance. Without it, Claude’s tool inputs are best-effort and may occasionally include unexpected fields or omit optional ones.

Common Errors and Fixes

OpenAI: 400 No tool call found for function call output with call_id

You passed a call_id that does not match any function call in the conversation. Double-check you are forwarding the exact call_id from the function_call item, not generating your own.

OpenAI: Unrecognized request argument supplied: functions

You are using the deprecated functions parameter from the old Chat Completions API. Switch to the tools parameter, which both the Responses API and current Chat Completions accept.

Anthropic: tool_use_id not found in previous assistant message

The tool_use_id in your tool_result does not match any tool_use block in the preceding assistant message. Make sure you include the full assistant response (with all content blocks) in the messages array before the tool result.

Anthropic: validation error on tool_result message ordering

The Messages API requires strict alternation: user, assistant, user. The tool result must be a user message, and it must immediately follow the assistant message containing the tool_use block. If you accidentally nest messages or skip the assistant turn, you get a 400.

Both: model hallucinates function names

If the model invents a function name that is not in your tools list, your dispatch logic will fail. Always validate tool_call.name (OpenAI) or tool_block.name (Anthropic) against your registered tools before executing.
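A minimal dispatch sketch (the registry and handler here are placeholders for your own implementations):

```python
# Hypothetical registry of the tools you actually advertised to the model.
TOOLS = {"get_weather": lambda args: {"temp": 58, "condition": "overcast"}}

def dispatch(name, args):
    """Refuse to execute names the model invented."""
    fn = TOOLS.get(name)
    if fn is None:
        # Return the failure to the model instead of crashing your process;
        # the model can then recover or apologize.
        return {"error": f"unknown tool: {name}"}
    return fn(args)
```

On Anthropic you can additionally mark the tool_result block with "is_error": true so the model treats the content as a failure rather than data.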

Agentic Loop Pattern

In production, function calling rarely happens in a single round trip. The model may need multiple tool calls in sequence – for instance, looking up a user’s location, then fetching weather for that location. Here is the standard loop:

import json

def agent_loop(client, model, tools, user_message, dispatch_fn, max_turns=10):
    """Generic agentic loop for Claude tool use."""
    messages = [{"role": "user", "content": user_message}]

    for _ in range(max_turns):
        response = client.messages.create(
            model=model,
            max_tokens=4096,
            tools=tools,
            messages=messages,
        )

        # If the model stopped without requesting tools, we are done
        if response.stop_reason != "tool_use":
            return response.content[0].text

        # Append the assistant's full response
        messages.append({"role": "assistant", "content": response.content})

        # Collect and execute all tool calls
        tool_results = []
        for block in response.content:
            if block.type == "tool_use":
                result = dispatch_fn(block.name, block.input)
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": json.dumps(result),
                })

        messages.append({"role": "user", "content": tool_results})

    raise RuntimeError("Agent exceeded max turns")

This same pattern works for OpenAI’s Responses API with minor adjustments: check for function_call items instead of tool_use blocks, use call_id instead of block.id, and append function_call_output items instead of tool_result messages.
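Concretely, the adapted loop might look like this. It is a sketch under the same assumptions as the Claude version, not a drop-in implementation, and dispatch_fn is again your own code:

```python
import json

def openai_agent_loop(client, model, tools, user_message, dispatch_fn,
                      max_turns=10):
    """Sketch of the same agentic loop for the Responses API."""
    input_items = [{"role": "user", "content": user_message}]

    for _ in range(max_turns):
        response = client.responses.create(
            model=model, input=input_items, tools=tools,
        )
        calls = [i for i in response.output if i.type == "function_call"]
        if not calls:
            return response.output_text  # no tool requests: final answer

        input_items += response.output  # keep the calls in context
        for call in calls:
            result = dispatch_fn(call.name, json.loads(call.arguments))
            input_items.append({
                "type": "function_call_output",
                "call_id": call.call_id,
                "output": json.dumps(result),
            })

    raise RuntimeError("Agent exceeded max turns")
```

Note the termination condition: instead of checking a stop reason, the loop simply exits when the output contains no function_call items.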

Quick Reference: API Differences

| Feature | OpenAI (Responses API) | Anthropic (Messages API) |
|---|---|---|
| Schema field | parameters | input_schema |
| Tool call type | function_call item | tool_use content block |
| Result type | function_call_output item | tool_result content block |
| ID field | call_id | id / tool_use_id |
| Strict schema | "strict": True on tool | "strict": true on tool |
| Force specific tool | tool_choice: {"type": "function", "name": "..."} | tool_choice: {"type": "tool", "name": "..."} |
| Force any tool | tool_choice: "required" | tool_choice: {"type": "any"} |
| Disable parallel | parallel_tool_calls=False | "disable_parallel_tool_use": true in tool_choice |
| Stop signal | no dedicated stop reason | stop_reason: "tool_use" |
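The renames above are mechanical enough to script. A sketch that covers only the structural differences, dropping provider-specific flags like strict (the helper itself is hypothetical):

```python
def openai_tool_to_anthropic(tool: dict) -> dict:
    """Translate a Responses-API function tool into Anthropic's shape.

    Handles only the structural renames (parameters -> input_schema,
    no "type"/"strict" keys); anything provider-specific is dropped.
    """
    return {
        "name": tool["name"],
        "description": tool.get("description", ""),
        "input_schema": tool["parameters"],
    }
```

Since the JSON Schema body itself is identical on both sides, only the wrapper needs translating; this is what makes maintaining one tool catalog for both providers practical.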

The core pattern is the same across both providers: define schemas, parse structured calls from the response, execute your functions, and feed results back. The differences are mostly in field names and message structure. Pick whichever API fits your stack, and refer to this table when porting code between them.