Count Message tokens
Count the number of tokens in a Message.
The Token Count API can be used to count the number of tokens in a Message, including tools, images, and documents, without creating it.
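For example, a minimal token-counting request might look like the following Python sketch using the requests library; the endpoint path and headers follow standard Messages API conventions, the model name is a placeholder, and the response is assumed to carry the count in an input_tokens field:

import os
import requests

# Count tokens for a single user message without creating a Message.
response = requests.post(
    "https://api.anthropic.com/v1/messages/count_tokens",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-5-sonnet-20241022",  # placeholder model name
        "messages": [{"role": "user", "content": "Hello, Claude"}],
    },
)
response.raise_for_status()
print(response.json())  # expected shape: {"input_tokens": <count>}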
Headers
anthropic-beta
Optional header to specify the beta version(s) you want to use. To use multiple betas, use a comma-separated list like beta1,beta2, or specify the header multiple times, once per beta.
anthropic-version
The version of the Anthropic API you want to use. Read more about versioning and our version history here.
Body
messages
Input messages.
Our models are trained to operate on alternating user and assistant conversational turns. When creating a new Message, you specify the prior conversational turns with the messages parameter, and the model then generates the next Message in the conversation. Consecutive user or assistant turns in your request will be combined into a single turn.
Each input message must be an object with a role and content. You can specify a single user-role message, or you can include multiple user and assistant messages.
If the final message uses the assistant role, the response content will continue immediately from the content in that message. This can be used to constrain part of the model's response.
Example with a single user message:
[{"role": "user", "content": "Hello, Claude"}]
Example with multiple conversational turns:
[
  {"role": "user", "content": "Hello there."},
  {"role": "assistant", "content": "Hi, I'm Claude. How can I help you?"},
  {"role": "user", "content": "Can you explain LLMs in plain English?"}
]
Example with a partially-filled response from Claude:
[
  {"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
  {"role": "assistant", "content": "The best answer is ("}
]
Each input message content may be either a single string or an array of content blocks, where each block has a specific type. Using a string for content is shorthand for an array of one content block of type "text". The following input messages are equivalent:
{"role": "user", "content": "Hello, Claude"}
{"role": "user", "content": [{"type": "text", "text": "Hello, Claude"}]}
Starting with Claude 3 models, you can also send image content blocks:
{"role": "user", "content": [
{
"type": "image",
"source": {
"type": "base64",
"media_type": "image/jpeg",
"data": "/9j/4AAQSkZJRg...",
}
},
{"type": "text", "text": "What is in this image?"}
]}
We currently support the base64 source type for images, and the image/jpeg, image/png, image/gif, and image/webp media types.
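As a sketch of how an image file might be prepared for token counting, the snippet below base64-encodes a local JPEG; the file path and model name are placeholders, and the endpoint and headers are the same as in the earlier sketch:

import base64
import os
import requests

# Encode a local JPEG (hypothetical path) into a base64 image content block.
with open("photo.jpg", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

payload = {
    "model": "claude-3-5-sonnet-20241022",  # placeholder model name
    "messages": [{
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/jpeg",
                    "data": image_data,
                },
            },
            {"type": "text", "text": "What is in this image?"},
        ],
    }],
}

response = requests.post(
    "https://api.anthropic.com/v1/messages/count_tokens",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json=payload,
)
print(response.json())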
See examples for more input examples.
Note that if you want to include a system prompt, you can use the top-level system parameter; there is no "system" role for input messages in the Messages API.
model
The model that will complete your prompt. See models for additional details and options.
Required string length: 1 - 256
tool_choice
How the model should use the provided tools. The model can use a specific tool, any available tool, or decide by itself.
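In the Messages API, these three modes map to tool_choice objects along the lines of the following Python dicts (shapes summarized from the tools documentation rather than this page; the tool name is illustrative):

# Let the model decide whether and which tool to use.
tool_choice = {"type": "auto"}

# Require the model to use one of the provided tools.
tool_choice = {"type": "any"}

# Require the model to use one specific tool by name.
tool_choice = {"type": "tool", "name": "get_stock_price"}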
tools
Definitions of tools that the model may use.
If you include tools in your API request, the model may return tool_use content blocks that represent the model's use of those tools. You can then run those tools using the tool input generated by the model and then optionally return results back to the model using tool_result content blocks.
Each tool definition includes:
name: Name of the tool.
description: Optional, but strongly-recommended description of the tool.
input_schema: JSON schema for the tool input shape that the model will produce in tool_use output content blocks.
For example, if you defined tools as:
[
  {
    "name": "get_stock_price",
    "description": "Get the current stock price for a given ticker symbol.",
    "input_schema": {
      "type": "object",
      "properties": {
        "ticker": {
          "type": "string",
          "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
        }
      },
      "required": ["ticker"]
    }
  }
]
And then asked the model "What's the S&P 500 at today?", the model might produce tool_use content blocks in the response like this:
[
  {
    "type": "tool_use",
    "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "name": "get_stock_price",
    "input": { "ticker": "^GSPC" }
  }
]
You might then run your get_stock_price tool with {"ticker": "^GSPC"} as an input, and return the following back to the model in a subsequent user message:
[
  {
    "type": "tool_result",
    "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "content": "259.75 USD"
  }
]
Tools can be used for workflows that include running client-side tools and functions, or more generally whenever you want the model to produce a particular JSON structure of output.
See our guide for more details.
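Because tool definitions count toward the total, they can be passed straight to the Token Count API; here is a sketch reusing the get_stock_price definition above, with the same endpoint and headers as the earlier sketch and a placeholder model name:

import os
import requests

# Count tokens for a request that includes a tool definition alongside the messages.
payload = {
    "model": "claude-3-5-sonnet-20241022",  # placeholder model name
    "tools": [{
        "name": "get_stock_price",
        "description": "Get the current stock price for a given ticker symbol.",
        "input_schema": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "The stock ticker symbol, e.g. AAPL for Apple Inc.",
                }
            },
            "required": ["ticker"],
        },
    }],
    "messages": [{"role": "user", "content": "What's the S&P 500 at today?"}],
}

response = requests.post(
    "https://api.anthropic.com/v1/messages/count_tokens",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json=payload,
)
print(response.json())  # the reported count includes the tool definitions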
system
System prompt.
A system prompt is a way of providing context and instructions to Claude, such as specifying a particular goal or role. See our guide to system prompts.
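When a system prompt should be included in the count, it goes in the top-level system field of the same request body; a fragment in the style of the sketches above, with an illustrative prompt and placeholder model name:

payload = {
    "model": "claude-3-5-sonnet-20241022",  # placeholder model name
    "system": "You are a concise assistant that answers in one sentence.",  # illustrative prompt
    "messages": [{"role": "user", "content": "Hello, Claude"}],
}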
Response
The total number of tokens across the provided list of messages, system prompt, and tools.
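With any of the sketches above, the count can then be read from the response body, which is a small JSON object; the input_tokens field name is taken from the API reference for this endpoint rather than shown on this page:

data = response.json()
print(data["input_tokens"])  # total tokens across messages, system prompt, and tools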