Some tools use versioned names (for example web_search_20250305, text_editor_20250124) to ensure compatibility across model versions.

With client tools, the workflow is:

1. Provide Claude with tools and a user prompt. Define the tools in your API request along with the user's question.
2. Claude decides to use a tool. The API response has a stop_reason of tool_use, signaling Claude's intent.
3. Execute the tool and return results. Extract the tool name and input from Claude's response, run the tool in your own code, and continue the conversation with a new user message containing a tool_result content block.
4. Claude uses the tool result to formulate a response. Claude analyzes the tool result and produces its final answer to the original prompt.
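For concreteness, the follow-up message in step 3 is an ordinary user message whose content is a list containing a tool_result block. A minimal sketch of its shape in Python (the tool_use_id and result value are illustrative):

```python
# Shape of the step 3 follow-up message. The tool_use_id must match the id of
# the tool_use block from Claude's previous response; the content is whatever
# your tool returned, as a string (or a list of content blocks).
tool_result_message = {
    "role": "user",
    "content": [
        {
            "type": "tool_result",
            "tool_use_id": "toolu_01A09q90qw90lq917835lq9",  # illustrative id
            "content": "59°F (15°C), mostly cloudy",          # illustrative result
        }
    ],
}
```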
With server tools, the workflow is simpler:

1. Provide Claude with tools and a user prompt. Define the server tools in your API request along with the user's question.
2. Claude executes the server tool. The tool runs on Anthropic's servers, so there is no extra round trip through your code.
3. Claude uses the server tool result to formulate a response. Claude returns its final answer directly, incorporating the tool's output.
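For comparison, here is a sketch of a server tool request using the web search tool, assuming it is supported by your chosen model (the model ID is a placeholder):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute a current model ID
    max_tokens=1024,
    tools=[{
        "type": "web_search_20250305",  # versioned server tool type
        "name": "web_search",
        "max_uses": 3,                  # optional cap on searches per request
    }],
    messages=[{"role": "user", "content": "What is the weather forecast for San Francisco this week?"}],
)

# The search runs on Anthropic's servers; the final answer arrives in this same
# response, with no tool_result round trip through your code.
for block in response.content:
    print(block)
```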
Single tool example

In this example, we give Claude a get_weather tool and ask about the weather in San Francisco. When Claude responds with a tool_use block, you extract the input, call the actual get_weather function with the provided input, and return the result in a new user message:
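A sketch of that full round trip with the Python SDK might look like this (the get_weather implementation, schema, and model ID are stand-ins):

```python
import anthropic

client = anthropic.Anthropic()

weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather in a given location",
    "input_schema": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City and state, e.g. San Francisco, CA"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}

def get_weather(location: str, unit: str = "fahrenheit") -> str:
    # Stand-in for a real weather lookup.
    return "59°F (15°C), mostly cloudy"

messages = [{"role": "user", "content": "What is the weather like in San Francisco?"}]

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute a current model ID
    max_tokens=1024,
    tools=[weather_tool],
    messages=messages,
)

if response.stop_reason == "tool_use":
    tool_use = next(b for b in response.content if b.type == "tool_use")
    result = get_weather(**tool_use.input)

    # Return the result in a new user message containing a tool_result block.
    messages.append({"role": "assistant", "content": response.content})
    messages.append({
        "role": "user",
        "content": [{"type": "tool_result", "tool_use_id": tool_use.id, "content": result}],
    })

    final = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        tools=[weather_tool],
        messages=messages,
    )
    print(final.content[0].text)
```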
Parallel tool use

Claude can call multiple tools in parallel within a single response. When it does, all tool_use blocks are included in a single assistant message, and all corresponding tool_result blocks must be provided in the subsequent user message.
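A sketch of handling such a turn, assuming response and messages come from a request like the one above and run_tool is a hypothetical dispatcher that maps tool names to your local implementations:

```python
def run_tool(name: str, tool_input: dict) -> str:
    # Hypothetical dispatcher: route each requested tool to local code.
    if name == "get_weather":
        return "59°F (15°C), mostly cloudy"
    if name == "get_time":
        return "2:30 PM PST"
    raise ValueError(f"Unknown tool: {name}")

# Collect every tool_use block from the single assistant message...
tool_results = [
    {
        "type": "tool_result",
        "tool_use_id": block.id,
        "content": run_tool(block.name, block.input),
    }
    for block in response.content
    if block.type == "tool_use"
]

# ...and send all corresponding tool_result blocks back in one user message.
messages.append({"role": "assistant", "content": response.content})
messages.append({"role": "user", "content": tool_results})
```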
Multiple tool example

You can provide Claude with more than one tool to choose from in a single request. Here we give Claude both a get_weather and a get_time tool, along with a user query that asks for both:
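A sketch of such a request (tool schemas, query wording, and model ID are illustrative):

```python
import anthropic

client = anthropic.Anthropic()

tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather in a given location",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state, e.g. New York, NY"},
            },
            "required": ["location"],
        },
    },
    {
        "name": "get_time",
        "description": "Get the current time in a given time zone",
        "input_schema": {
            "type": "object",
            "properties": {
                "timezone": {"type": "string", "description": "IANA time zone, e.g. America/New_York"},
            },
            "required": ["timezone"],
        },
    },
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute a current model ID
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What is the weather in New York, and what time is it there?"}],
)
```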
In this case, Claude may use the tools sequentially, calling get_weather first, then get_time after receiving the weather result, or it may output multiple tool_use blocks in a single response when the operations are independent. When Claude makes parallel tool calls, you must return all the results in a single user message, with each result in its own tool_result block.

Missing information
If the user's prompt doesn't include enough information to fill all of a tool's required parameters, Claude may ask for the missing details, but it may also infer a value and proceed. For example, using the get_weather tool above, if you ask Claude “What’s the weather?” without specifying a location, Claude, particularly Claude Sonnet, may make a guess about tool inputs:
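For illustration only, the guessed tool call in Claude's response content might look something like this, with a location the user never provided:

```python
# Illustrative shape of a guessed tool_use block in the assistant's response.
{
    "type": "tool_use",
    "id": "toolu_01A09q90qw90lq917835lq9",  # illustrative id
    "name": "get_weather",
    "input": {"location": "New York, NY", "unit": "fahrenheit"},  # guessed values
}
```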
Sequential tools

Some tasks require calling multiple tools in sequence, using the output of one tool as the input to another. In such cases, Claude calls one tool at a time. Here's an example of using a get_location tool to get the user's location, then passing that location to the get_weather tool:
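A sketch of the initial request (schemas and model ID are illustrative):

```python
import anthropic

client = anthropic.Anthropic()

tools = [
    {
        "name": "get_location",
        "description": "Get the user's current location based on their IP address. Takes no inputs.",
        "input_schema": {"type": "object", "properties": {}},
    },
    {
        "name": "get_weather",
        "description": "Get the current weather in a given location",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state, e.g. San Francisco, CA"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute a current model ID
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather like where I am?"}],
)
```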
In this case, Claude would first call the get_location tool to get the user's location. After you return the location in a tool_result, Claude would then call get_weather with that location to get the final answer. The full conversation might look like:

Role | Content |
---|---|
User | What’s the weather like where I am? |
Assistant | I’ll find your current location first, then check the weather there. [Tool use for get_location] |
User | [Tool result for get_location with matching id and result of San Francisco, CA] |
Assistant | [Tool use for get_weather with the following input] { "location": "San Francisco, CA", "unit": "fahrenheit" } |
User | [Tool result for get_weather with matching id and result of "59°F (15°C), mostly cloudy"] |
Assistant | Based on your current location in San Francisco, CA, the weather right now is 59°F (15°C) and mostly cloudy. It’s a fairly cool and overcast day in the city. You may want to bring a light jacket if you’re heading outside. |
This example demonstrates how Claude can chain together multiple tool calls to answer a question that requires data from multiple sources. The key steps are:

1. Claude first realizes it needs the user's location to answer the weather question, so it calls the get_location tool.
2. Your application executes the actual get_location function and returns the result "San Francisco, CA" in a tool_result block.
3. With the location now known, Claude calls the get_weather tool, passing in "San Francisco, CA" as the location parameter (as well as a guessed unit parameter, as unit is not a required parameter).
4. Your application executes the actual get_weather function with the provided arguments and returns the weather data in another tool_result block.
5. Finally, Claude incorporates the weather data into a natural language response to the original question.

Chain of thought tool use

If you would like Claude to show its reasoning before calling a tool, you can prompt it to do so. For example:
Answer the user's request using relevant tools (if they are available). Before calling a tool, do some analysis. First, think about which of the provided tools is the relevant tool to answer the user's request. Second, go through each of the required parameters of the relevant tool and determine if the user has directly provided or given enough information to infer a value. When deciding if the parameter can be inferred, carefully consider all the context to see if it supports a specific value. If all of the required parameters are present or can be reasonably inferred, proceed with the tool call. BUT, if one of the values for a required parameter is missing, DO NOT invoke the function (not even with fillers for the missing params) and instead, ask the user to provide the missing parameters. DO NOT ask for more information on optional parameters if it is not provided.
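One way to apply this guidance is to pass it as the system prompt on the request; a sketch (the prompt is abbreviated from the full text above, and the tool schema and model ID are illustrative):

```python
import anthropic

client = anthropic.Anthropic()

# Abbreviated tool-reasoning guidance, passed as the system prompt.
tool_reasoning_prompt = (
    "Answer the user's request using relevant tools (if they are available). "
    "Before calling a tool, do some analysis. First, think about which of the "
    "provided tools is the relevant tool to answer the user's request. Second, "
    "go through each of the required parameters of the relevant tool and "
    "determine if the user has directly provided or given enough information "
    "to infer a value. If all of the required parameters are present or can be "
    "reasonably inferred, proceed with the tool call. BUT, if one of the values "
    "for a required parameter is missing, DO NOT invoke the function and "
    "instead, ask the user to provide the missing parameters."
)

weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather in a given location",
    "input_schema": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute a current model ID
    max_tokens=1024,
    system=tool_reasoning_prompt,
    tools=[weather_tool],
    messages=[{"role": "user", "content": "What's the weather?"}],
)
```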
JSON mode

You can use tools to get Claude to produce JSON output that follows a schema, even if you have no intention of running that output through an actual tool or function. When using tools this way:

- You usually want to provide a single tool.
- You should set tool_choice (see Forcing tool use) to instruct the model to explicitly use that tool.
- Remember that the model will pass the input to the tool, so the name of the tool and description should be from the model's perspective.

The following uses a record_summary tool to describe an image following a particular format:
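A sketch of that pattern with the Python SDK follows; the schema fields, image URL, and model ID are illustrative rather than the exact example from the API reference:

```python
import base64

import anthropic
import httpx

client = anthropic.Anthropic()

# Illustrative output schema: adapt the fields to whatever structure you need.
record_summary_tool = {
    "name": "record_summary",
    "description": "Record a structured summary of an image using well-structured JSON.",
    "input_schema": {
        "type": "object",
        "properties": {
            "description": {"type": "string", "description": "One to two sentence description of the image"},
            "key_colors": {
                "type": "array",
                "items": {"type": "string"},
                "description": "Human-readable names of the dominant colors",
            },
        },
        "required": ["description", "key_colors"],
    },
}

image_url = "https://example.com/some-image.jpg"  # placeholder URL
image_data = base64.standard_b64encode(httpx.get(image_url).content).decode("utf-8")

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute a current model ID
    max_tokens=1024,
    tools=[record_summary_tool],
    tool_choice={"type": "tool", "name": "record_summary"},  # force this tool
    messages=[{
        "role": "user",
        "content": [
            {"type": "image", "source": {"type": "base64", "media_type": "image/jpeg", "data": image_data}},
            {"type": "text", "text": "Describe this image."},
        ],
    }],
)

# The structured JSON lives in the forced tool_use block's input.
summary = next(b for b in response.content if b.type == "tool_use").input
print(summary)
```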
Pricing

Tool use requests are priced the same as any other Claude API request, based on the total number of input tokens sent to the model (including in the tools parameter) and the number of output tokens generated. The additional tokens from tool use come from:

- The tools parameter in API requests (tool names, descriptions, and schemas)
- tool_use content blocks in API requests and responses
- tool_result content blocks in API requests

When you send a prompt that includes tools, we also automatically include a special system prompt for the model which enables tool use. The number of tool use tokens required for each model is listed below (excluding the additional tokens listed above). Note that the table assumes at least one tool is provided. If no tools are provided, then a tool choice of none uses 0 additional system prompt tokens.

Model | Tool choice | Tool use system prompt token count |
---|---|---|
Claude Opus 4.1 | auto, none | 346 tokens |
Claude Opus 4.1 | any, tool | 313 tokens |
Claude Opus 4 | auto, none | 346 tokens |
Claude Opus 4 | any, tool | 313 tokens |
Claude Sonnet 4 | auto, none | 346 tokens |
Claude Sonnet 4 | any, tool | 313 tokens |
Claude Sonnet 3.7 | auto, none | 346 tokens |
Claude Sonnet 3.7 | any, tool | 313 tokens |
Claude Sonnet 3.5 (Oct) (deprecated) | auto, none | 346 tokens |
Claude Sonnet 3.5 (Oct) (deprecated) | any, tool | 313 tokens |
Claude Sonnet 3.5 (June) (deprecated) | auto, none | 294 tokens |
Claude Sonnet 3.5 (June) (deprecated) | any, tool | 261 tokens |
Claude Haiku 3.5 | auto, none | 264 tokens |
Claude Haiku 3.5 | any, tool | 340 tokens |
Claude Opus 3 (deprecated) | auto, none | 530 tokens |
Claude Opus 3 (deprecated) | any, tool | 281 tokens |
Claude Sonnet 3 | auto, none | 159 tokens |
Claude Sonnet 3 | any, tool | 235 tokens |
Claude Haiku 3 | auto, none | 264 tokens |
Claude Haiku 3 | any, tool | 340 tokens |

When you send a tool use prompt, just like any other API request, the response reports both input and output token counts as part of its usage metrics.
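For example, with the Python SDK you can read these counts directly from the response (model ID and tool schema are illustrative):

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute a current model ID
    max_tokens=1024,
    tools=[{
        "name": "get_weather",
        "description": "Get the current weather in a given location",
        "input_schema": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    }],
    messages=[{"role": "user", "content": "What is the weather like in San Francisco?"}],
)

# Input tokens include your tool definitions and the automatic tool use system prompt.
print(response.usage.input_tokens, response.usage.output_tokens)
```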