Message Batches
The Message Batches API is a powerful, cost-effective way to asynchronously process large volumes of Messages requests. This approach is well-suited to tasks that do not require immediate responses, reducing costs by 50% while increasing throughput.
You can explore the API reference directly, in addition to this guide.
How the Message Batches API works
When you send a request to the Message Batches API:
- The system creates a new Message Batch with the provided Messages requests.
- The batch is then processed asynchronously, with each request handled independently.
- You can poll for the status of the batch and retrieve results when processing has ended for all requests.
This is especially useful for bulk operations that don’t require immediate results, such as:
- Large-scale evaluations: Process thousands of test cases efficiently.
- Content moderation: Analyze large volumes of user-generated content asynchronously.
- Data analysis: Generate insights or summaries for large datasets.
- Bulk content generation: Create large amounts of text for various purposes (e.g., product descriptions, article summaries).
Batch limitations
- A Message Batch is limited to either 100,000 Message requests or 256 MB in size, whichever is reached first.
- A batch may take up to 24 hours to process, though it often finishes sooner. Results for your batch are not available until processing of the entire batch ends. Batches will expire if processing does not complete within 24 hours.
- Batch results are available for 29 days after creation. After that, you may still view the Batch, but its results will no longer be available for download.
- Batches are scoped to a Workspace. You may view all batches—and their results—that were created within the Workspace that your API key belongs to.
- Rate limits apply to both Batches API HTTP requests and the number of requests within a batch waiting to be processed. See Message Batches API rate limits. Additionally, we may slow down processing based on current demand and your request volume. In that case, you may see more requests expiring after 24 hours.
- Due to high throughput and concurrent processing, batches may go slightly over your Workspace’s configured spend limit.
Supported models
The Message Batches API currently supports:
- Claude 3.5 Sonnet (`claude-3-5-sonnet-20240620` and `claude-3-5-sonnet-20241022`)
- Claude 3.5 Haiku (`claude-3-5-haiku-20241022`)
- Claude 3 Haiku (`claude-3-haiku-20240307`)
- Claude 3 Opus (`claude-3-opus-20240229`)
What can be batched
Any request that you can make to the Messages API can be included in a batch. This includes:
- Vision
- Tool use
- System messages
- Multi-turn conversations
- Any beta features
Since each request in the batch is processed independently, you can mix different types of requests within a single batch.
Pricing
The Batches API offers significant cost savings. All usage is charged at 50% of the standard API prices.
| Model | Batch Input | Batch Output |
|---|---|---|
| Claude 3.5 Sonnet | $1.50 / MTok | $7.50 / MTok |
| Claude 3 Opus | $7.50 / MTok | $37.50 / MTok |
| Claude 3 Haiku | $0.125 / MTok | $0.625 / MTok |
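For example, a batch of 10,000 requests to Claude 3.5 Sonnet averaging 1,000 input tokens and 500 output tokens per request comes to (10 MTok × $1.50) + (5 MTok × $7.50) = $52.50, versus $105.00 for the same workload at the standard rates of $3 / MTok input and $15 / MTok output.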
How to use the Message Batches API
Prepare and create your batch
A Message Batch is composed of a list of requests to create a Message. An individual request comprises:
- A unique `custom_id` for identifying the Messages request
- A `params` object with the standard Messages API parameters
You can create a batch by passing this list into the `requests` parameter:
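A minimal sketch using the `anthropic` Python SDK (method paths may differ by SDK version; in early releases the batch endpoints lived under a beta namespace):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message_batch = client.messages.batches.create(
    requests=[
        {
            "custom_id": "my-first-request",
            "params": {
                "model": "claude-3-5-sonnet-20241022",
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": "Hello, world"}],
            },
        },
        {
            "custom_id": "my-second-request",
            "params": {
                "model": "claude-3-5-sonnet-20241022",
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": "Hi again, friend"}],
            },
        },
    ]
)
print(message_batch)
```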
In this example, two separate requests are batched together for asynchronous processing. Each request has a unique `custom_id` and contains the standard parameters you'd use for a Messages API call.
Test your batch requests with the Messages API
Validation of the `params` object for each message request is performed asynchronously, and validation errors are returned when processing of the entire batch has ended. You can ensure that you are building your input correctly by verifying your request shape with the Messages API first.
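One way to do this (a sketch; the `params` dict shown is just an example) is to send a single request's `params` through the synchronous Messages API, which validates the shape immediately:

```python
import anthropic

client = anthropic.Anthropic()

# The same params dict you would place in a batch request
params = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello, world"}],
}

# The synchronous Messages API raises immediately on an invalid shape,
# instead of surfacing the error only after the whole batch ends
message = client.messages.create(**params)
```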
When a batch is first created, the response will have a processing status of `in_progress`.
Tracking your batch
The Message Batch's `processing_status` field indicates the stage of processing the batch is in. It starts as `in_progress`, then updates to `ended` once all the requests in the batch have finished processing and results are ready. You can monitor the state of your batch by visiting the Console, or by using the retrieval endpoint:
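A sketch using the Python SDK (the batch ID is a placeholder, and the 60-second poll interval is an arbitrary choice):

```python
import time

import anthropic

client = anthropic.Anthropic()

batch_id = "msgbatch_..."  # placeholder: use the id returned when you created the batch

# Re-fetch the batch until every request in it has finished processing
while True:
    message_batch = client.messages.batches.retrieve(batch_id)
    if message_batch.processing_status == "ended":
        break
    time.sleep(60)

print(message_batch.request_counts)  # tallies of succeeded/errored/canceled/expired
```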
You can poll this endpoint to know when processing has ended.
Retrieving batch results
Once batch processing has ended, each Messages request in the batch will have a result. There are 4 result types:
| Result Type | Description |
|---|---|
| `succeeded` | Request was successful. Includes the message result. |
| `errored` | Request encountered an error and a message was not created. Possible errors include invalid requests and internal server errors. You will not be billed for these requests. |
| `canceled` | User canceled the batch before this request could be sent to the model. You will not be billed for these requests. |
| `expired` | Batch reached its 24-hour expiration before this request could be sent to the model. You will not be billed for these requests. |
You will see an overview of your results with the batch's `request_counts`, which shows how many requests reached each of these four states.
Results of the batch are available for download both in the Console and at the `results_url` on the Message Batch. Because of the potentially large size of the results, it's recommended to stream results back rather than download them all at once.
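For example, the Python SDK exposes a `results()` helper that streams and decodes results one at a time rather than loading the whole file (a sketch; the batch ID is a placeholder):

```python
import anthropic

client = anthropic.Anthropic()

# Stream results one at a time instead of downloading the whole file
for entry in client.messages.batches.results("msgbatch_..."):  # placeholder id
    if entry.result.type == "succeeded":
        print(f"{entry.custom_id}: succeeded")
    elif entry.result.type == "errored":
        print(f"{entry.custom_id}: errored: {entry.result.error}")
    elif entry.result.type == "expired":
        print(f"{entry.custom_id}: expired before it reached the model")
    elif entry.result.type == "canceled":
        print(f"{entry.custom_id}: canceled")
```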
The results will be in `.jsonl` format, where each line is a valid JSON object representing the result of a single request in the Message Batch. For each streamed result, you can do something different depending on its `custom_id` and result type. Here is an example set of results:
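An illustrative pair of result lines (the message IDs, token counts, and text are hypothetical; note that the second request's result arrives first):

```json
{"custom_id":"my-second-request","result":{"type":"succeeded","message":{"id":"msg_abc123","type":"message","role":"assistant","model":"claude-3-5-sonnet-20241022","content":[{"type":"text","text":"Hi again! How can I help you today?"}],"stop_reason":"end_turn","stop_sequence":null,"usage":{"input_tokens":11,"output_tokens":12}}}}
{"custom_id":"my-first-request","result":{"type":"succeeded","message":{"id":"msg_def456","type":"message","role":"assistant","model":"claude-3-5-sonnet-20241022","content":[{"type":"text","text":"Hello! How can I assist you?"}],"stop_reason":"end_turn","stop_sequence":null,"usage":{"input_tokens":10,"output_tokens":9}}}}
```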
If your result has an error, its `result.error` will be set to our standard error shape.
Batch results may not match input order
Batch results can be returned in any order, and may not match the ordering of requests when the batch was created. In the above example, the result for the second batch request is returned before the first. To correctly match results with their corresponding requests, always use the `custom_id` field.
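For instance, you might index results by `custom_id` as you stream them (a sketch; the batch ID is a placeholder):

```python
import anthropic

client = anthropic.Anthropic()

# Index results by custom_id so each one can be matched back to its
# original request regardless of the order it was returned in
results_by_id = {
    entry.custom_id: entry.result
    for entry in client.messages.batches.results("msgbatch_...")  # placeholder
}
```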
Best practices for effective batching
To get the most out of the Batches API:
- Monitor batch processing status regularly and implement appropriate retry logic for failed requests (see the sketch after this list).
- Use meaningful `custom_id` values to easily match results with requests, since order is not guaranteed.
- Consider breaking very large datasets into multiple batches for better manageability.
- Dry run a single request shape with the Messages API to avoid validation errors.
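A retry sketch under stated assumptions: the batch ID is a placeholder, and `params_by_id` stands in for whatever bookkeeping you keep from batch creation.

```python
import anthropic

client = anthropic.Anthropic()

batch_id = "msgbatch_..."  # placeholder: id of the batch that just ended

# Hypothetical bookkeeping: params for each request, keyed by custom_id,
# saved when the original batch was created
params_by_id: dict[str, dict] = {}

# Collect errored requests and resubmit them as a fresh batch;
# expired requests can be resubmitted the same way
retry_requests = [
    {"custom_id": entry.custom_id, "params": params_by_id[entry.custom_id]}
    for entry in client.messages.batches.results(batch_id)
    if entry.result.type == "errored"
]

if retry_requests:
    retry_batch = client.messages.batches.create(requests=retry_requests)
    print(f"Resubmitted {len(retry_requests)} requests as {retry_batch.id}")
```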
Troubleshooting common issues
If you're experiencing unexpected behavior:
- Verify that the total batch request size doesn't exceed 256 MB. If the request size is too large, you may get a 413 `request_too_large` error.
- Check that you're using supported models for all requests in the batch.
- Ensure each request in the batch has a unique `custom_id`.
- Ensure that it has been less than 29 days since the batch's `created_at` (not processing `ended_at`) time. If over 29 days have passed, results will no longer be viewable.
- Confirm that the batch has not been canceled.
Note that the failure of one request in a batch does not affect the processing of other requests.
Batch storage and privacy
- Workspace isolation: Batches are isolated within the Workspace they are created in. They can only be accessed by API keys associated with that Workspace, or by users with permission to view Workspace batches in the Console.
- Result availability: Batch results are available for 29 days after the batch is created, allowing ample time for retrieval and processing.