Token counting is in beta

To access this feature, include the anthropic-beta: token-counting-2024-11-01 header in your API requests, or use client.beta.messages.count_tokens in your SDK calls.

We’ll be iterating on this open beta over the coming weeks, so we appreciate your feedback. Please share your ideas and suggestions using this form.

Token counting enables you to determine the number of tokens in a message before sending it to Claude, helping you make informed decisions about your prompts and usage. With token counting, you can:

  • Proactively manage rate limits and costs
  • Make smart model routing decisions
  • Optimize prompts to be a specific length

How to count message tokens

The token counting endpoint accepts the same structured list of inputs as creating a message, including support for system prompts, tools, images, and PDFs. The response contains the total number of input tokens.

The token count should be considered an estimate. In some cases, the actual number of input tokens used when creating a message may differ by a small amount.

Supported models

The token counting endpoint supports the following models:

  • Claude 3.5 Sonnet
  • Claude 3.5 Haiku
  • Claude 3 Haiku
  • Claude 3 Opus

Count tokens in basic messages

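Since the endpoint takes the same inputs as message creation, a basic request can be sketched as a payload of a model plus a messages list (the model name here is illustrative). Passing these fields to `client.beta.messages.count_tokens` would produce a response like the JSON shown below.

```python
# Minimal count_tokens request payload (model name is illustrative).
payload = {
    "model": "claude-3-5-sonnet-20241022",
    "messages": [
        {"role": "user", "content": "Hello, Claude"},
    ],
}

# With the Python SDK, this payload would be sent as:
#   client.beta.messages.count_tokens(**payload)
# The response body contains a single input_tokens field.
```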
JSON
{ "input_tokens": 14 }

Count tokens in messages with tools

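Tool definitions are counted too, so a request that includes tools can be sketched as follows (`get_weather` and its schema are hypothetical, for illustration only). The tool's name, description, and input schema all contribute to the token count returned below.

```python
# Hypothetical tool definition; real tools follow the same shape.
tool = {
    "name": "get_weather",
    "description": "Get the current weather in a given location",
    "input_schema": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA",
            },
        },
        "required": ["location"],
    },
}

# The tools list rides alongside the messages in the same payload.
payload = {
    "model": "claude-3-5-sonnet-20241022",
    "tools": [tool],
    "messages": [
        {"role": "user", "content": "What's the weather like in San Francisco?"},
    ],
}
```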
JSON
{ "input_tokens": 403 }

Count tokens in messages with images

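Images are supplied as base64-encoded content blocks, the same shape used when creating a message. A sketch of such a request, using placeholder bytes where a real image file would be read and encoded:

```python
import base64

# Placeholder bytes; in practice, read and base64-encode your image file.
image_data = base64.standard_b64encode(b"<raw image bytes>").decode("utf-8")

payload = {
    "model": "claude-3-5-sonnet-20241022",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_data,
                    },
                },
                {"type": "text", "text": "Describe this image"},
            ],
        }
    ],
}
```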
JSON
{ "input_tokens": 1551 }

Count tokens in messages with PDFs

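PDFs use a `document` content block with a base64-encoded source, again mirroring the Messages API. A sketch, with placeholder bytes standing in for a real PDF file:

```python
import base64

# Placeholder bytes; in practice, read and base64-encode your PDF file.
pdf_base64 = base64.standard_b64encode(b"<raw PDF bytes>").decode("utf-8")

payload = {
    "model": "claude-3-5-sonnet-20241022",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "document",
                    "source": {
                        "type": "base64",
                        "media_type": "application/pdf",
                        "data": pdf_base64,
                    },
                },
                {"type": "text", "text": "Please summarize this document."},
            ],
        }
    ],
}
```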
JSON
{ "input_tokens": 2188 }

The Token Count API supports PDFs with the same limitations as the Messages API.


Pricing and rate limits

Token counting is free to use but subject to requests per minute rate limits based on your usage tier. If you need higher limits, contact sales through the Anthropic Console.

Usage tier    Requests per minute (RPM)
1             100
2             2,000
3             4,000
4             8,000

Token counting and message creation have independent rate limits; usage of one does not count against the limits of the other.