# Cancel a Message Batch (beta)
post /v1/messages/batches/{message_batch_id}/cancel
Batches may be canceled any time before processing ends. Once cancellation is initiated, the batch enters a `canceling` state, at which time the system may complete any in-progress, non-interruptible requests before finalizing cancellation.
The number of canceled requests is specified in `request_counts`. To determine which requests were canceled, check the individual results within the batch. Note that cancellation may not result in any canceled requests if they were non-interruptible.
While in beta, this endpoint requires passing the `anthropic-beta` header with value `message-batches-2024-09-24`
# Amazon Bedrock API
Anthropic’s Claude models are now generally available through Amazon Bedrock.
Calling Claude through Bedrock slightly differs from how you would call Claude when using Anthropic's client SDKs. This guide will walk you through the process of completing an API call to Claude on Bedrock in either Python or TypeScript.
Note that this guide assumes you have already signed up for an [AWS account](https://portal.aws.amazon.com/billing/signup) and configured programmatic access.
## Install and configure the AWS CLI
1. [Install the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) at version `2.13.23` or newer
2. Configure your AWS credentials using the `aws configure` command (see [Configure the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)) or find your credentials by navigating to “Command line or programmatic access” within your AWS dashboard and following the directions in the popup modal.
3. Verify that your credentials are working:
```bash Shell
aws sts get-caller-identity
```
## Install an SDK for accessing Bedrock
Anthropic's [client SDKs](/en/api/client-sdks) support Bedrock. You can also use an AWS SDK like `boto3` directly.
```Python Python
pip install -U "anthropic[bedrock]"
```
```TypeScript TypeScript
npm install @anthropic-ai/bedrock-sdk
```
```Python Boto3 (Python)
pip install "boto3>=1.28.59"
```
## Accessing Bedrock
### Subscribe to Anthropic models
Go to the [AWS Console > Bedrock > Model Access](https://console.aws.amazon.com/bedrock/home?region=us-west-2#/modelaccess) and request access to Anthropic models. Note that Anthropic model availability varies by region. See [AWS documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) for latest information.
#### API model names
| Model | Bedrock API model name |
| ----------------- | ----------------------------------------- |
| Claude 3 Haiku | anthropic.claude-3-haiku-20240307-v1:0 |
| Claude 3 Sonnet | anthropic.claude-3-sonnet-20240229-v1:0 |
| Claude 3 Opus | anthropic.claude-3-opus-20240229-v1:0 |
| Claude 3.5 Sonnet | anthropic.claude-3-5-sonnet-20241022-v2:0 |
### List available models
The following examples show how to print a list of all the Claude models available through Bedrock:
```bash AWS CLI
aws bedrock list-foundation-models --region=us-west-2 --by-provider anthropic --query "modelSummaries[*].modelId"
```
```python Boto3 (Python)
import boto3
bedrock = boto3.client(service_name="bedrock")
response = bedrock.list_foundation_models(byProvider="anthropic")
for summary in response["modelSummaries"]:
print(summary["modelId"])
```
### Making requests
The following examples show how to generate text from Claude 3.5 Sonnet on Bedrock:
```Python Python
from anthropic import AnthropicBedrock
client = AnthropicBedrock(
# Authenticate by either providing the keys below or use the default AWS credential providers, such as
# using ~/.aws/credentials or the "AWS_SECRET_ACCESS_KEY" and "AWS_ACCESS_KEY_ID" environment variables.
aws_access_key="",
aws_secret_key="",
# Temporary credentials can be used with aws_session_token.
# Read more at https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html.
aws_session_token="",
# aws_region changes the aws region to which the request is made. By default, we read AWS_REGION,
# and if that's not present, we default to us-east-1. Note that we do not read ~/.aws/config for the region.
aws_region="us-west-2",
)
message = client.messages.create(
model="anthropic.claude-3-5-sonnet-20241022-v2:0",
max_tokens=256,
messages=[{"role": "user", "content": "Hello, world"}]
)
print(message.content)
```
```TypeScript TypeScript
import AnthropicBedrock from '@anthropic-ai/bedrock-sdk';
const client = new AnthropicBedrock({
// Authenticate by either providing the keys below or use the default AWS credential providers, such as
// using ~/.aws/credentials or the "AWS_SECRET_ACCESS_KEY" and "AWS_ACCESS_KEY_ID" environment variables.
awsAccessKey: '',
awsSecretKey: '',
// Temporary credentials can be used with awsSessionToken.
// Read more at https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html.
awsSessionToken: '',
// awsRegion changes the aws region to which the request is made. By default, we read AWS_REGION,
// and if that's not present, we default to us-east-1. Note that we do not read ~/.aws/config for the region.
awsRegion: 'us-west-2',
});
async function main() {
const message = await client.messages.create({
model: 'anthropic.claude-3-5-sonnet-20241022-v2:0',
max_tokens: 256,
messages: [{"role": "user", "content": "Hello, world"}]
});
console.log(message);
}
main().catch(console.error);
```
```python Boto3 (Python)
import boto3
import json
bedrock = boto3.client(service_name="bedrock-runtime")
body = json.dumps({
"max_tokens": 256,
"messages": [{"role": "user", "content": "Hello, world"}],
"anthropic_version": "bedrock-2023-05-31"
})
response = bedrock.invoke_model(body=body, modelId="anthropic.claude-3-5-sonnet-20241022-v2:0")
response_body = json.loads(response.get("body").read())
print(response_body.get("content"))
```
See our [client SDKs](/en/api/client-sdks) for more details, and the official [Bedrock docs](https://docs.aws.amazon.com/bedrock/).
# Vertex AI API
Anthropic’s Claude models are now generally available through [Vertex AI](https://cloud.google.com/vertex-ai).
The Vertex API for accessing Claude is nearly-identical to the [Messages API](/en/api/messages) and supports all of the same options, with two key differences:
* In Vertex, `model` is not passed in the request body. Instead, it is specified in the Google Cloud endpoint URL.
* In Vertex, `anthropic_version` is passed in the request body (rather than as a header), and must be set to the value `vertex-2023-10-16`.
Vertex is also supported by Anthropic's official [client SDKs](/en/api/client-sdks). This guide will walk you through the process of making a request to Claude on Vertex AI in either Python or TypeScript.
Note that this guide assumes you already have a GCP project that is able to use Vertex AI. See [using the Claude 3 models from Anthropic](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude) for more information on the setup required, as well as a full walkthrough.
## Install an SDK for accessing Vertex AI
First, install Anthropic's [client SDK](/en/api/client-sdks) for your language of choice.
```Python Python
pip install -U google-cloud-aiplatform "anthropic[vertex]"
```
```TypeScript TypeScript
npm install @anthropic-ai/vertex-sdk
```
## Accessing Vertex AI
### Model Availability
Note that Anthropic model availability varies by region. Search for "Claude" in the [Vertex AI Model Garden](https://console.cloud.google.com/vertex-ai/model-garden) or go to [Use Claude 3](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude) for the latest information.
#### API model names
| Model | Vertex AI API model name |
| ------------------------------ | ------------------------------ |
| Claude 3 Haiku | claude-3-haiku\@20240307 |
| Claude 3 Sonnet | claude-3-sonnet\@20240229 |
| Claude 3 Opus (Public Preview) | claude-3-opus\@20240229 |
| Claude 3.5 Sonnet | claude-3-5-sonnet-v2\@20241022 |
### Making requests
Before running requests, you may need to run `gcloud auth application-default login` to authenticate with GCP.
The following examples show how to generate text from Claude 3 Haiku on Vertex AI:
```Python Python
from anthropic import AnthropicVertex
project_id = "MY_PROJECT_ID"
# Where the model is running. e.g. us-central1 or europe-west4 for haiku
region = "MY_REGION"
client = AnthropicVertex(project_id=project_id, region=region)
message = client.messages.create(
model="claude-3-haiku@20240307",
max_tokens=100,
messages=[
{
"role": "user",
"content": "Hey Claude!",
}
],
)
print(message)
```
```TypeScript TypeScript
import { AnthropicVertex } from '@anthropic-ai/vertex-sdk';
const projectId = 'MY_PROJECT_ID';
// Where the model is running. e.g. us-central1 or europe-west4 for haiku
const region = 'MY_REGION';
// Goes through the standard `google-auth-library` flow.
const client = new AnthropicVertex({
projectId,
region,
});
async function main() {
const result = await client.messages.create({
model: 'claude-3-haiku@20240307',
max_tokens: 100,
messages: [
{
role: 'user',
content: 'Hey Claude!',
},
],
});
console.log(JSON.stringify(result, null, 2));
}
main();
```
```bash cURL
MODEL_ID=claude-3-haiku@20240307
LOCATION=us-central1
PROJECT_ID=MY_PROJECT_ID
curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://$LOCATION-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/anthropic/models/${MODEL_ID}:streamRawPredict -d \
'{
"anthropic_version": "vertex-2023-10-16",
"messages": [{
"role": "user",
"content": "Hey Claude!"
}],
"max_tokens": 100
}'
```
See our [client SDKs](/en/api/client-sdks) and the official [Vertex AI docs](https://cloud.google.com/vertex-ai/docs) for more details.
# Client SDKs
We provide libraries in Python and TypeScript that make it easier to work with the Anthropic API.
> Additional configuration is needed to use Anthropic's Client SDKs through a partner platform. If you are using Amazon Bedrock, see [this guide](/en/api/claude-on-amazon-bedrock); if you are using Google Cloud Vertex AI, see [this guide](/en/api/claude-on-vertex-ai).
## Python
[Python library GitHub repo](https://github.com/anthropics/anthropic-sdk-python)
Example:
```Python Python
import anthropic
client = anthropic.Anthropic(
# defaults to os.environ.get("ANTHROPIC_API_KEY")
api_key="my_api_key",
)
message = client.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
messages=[
{"role": "user", "content": "Hello, Claude"}
]
)
print(message.content)
```
***
## TypeScript
[TypeScript library GitHub repo](https://github.com/anthropics/anthropic-sdk-typescript)
While this library is written in TypeScript, it can also be used from plain JavaScript.
Example:
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic({
apiKey: 'my_api_key', // defaults to process.env["ANTHROPIC_API_KEY"]
});
const msg = await anthropic.messages.create({
model: "claude-3-5-sonnet-20241022",
max_tokens: 1024,
messages: [{ role: "user", content: "Hello, Claude" }],
});
console.log(msg);
```
# Create a Text Completion
post /v1/complete
[Legacy] Create a Text Completion.
The Text Completions API is a legacy API. We recommend using the [Messages API](https://docs.anthropic.com/en/api/messages) going forward.
Future models and features will not be compatible with Text Completions. See our [migration guide](https://docs.anthropic.com/en/api/migrating-from-text-completions-to-messages) for guidance in migrating from Text Completions to Messages.
# Create a Message Batch (beta)
post /v1/messages/batches
Send a batch of Message creation requests.
The Message Batches API can be used to process multiple Messages API requests at once. Once a Message Batch is created, it begins processing immediately. Batches can take up to 24 hours to complete.
While in beta, this endpoint requires passing the `anthropic-beta` header with value `message-batches-2024-09-24`
## Feature Support
The Message Batches API supports the following models: Claude 3 Haiku, Claude 3 Opus, and Claude 3.5 Sonnet. All features available in the Messages API, including beta features, are available through the Message Batches API.
While in beta, batches may contain up to 10,000 requests and be up to 32 MB in total size.
# Errors
## HTTP errors
Our API follows a predictable HTTP error code format:
* 400 - `invalid_request_error`: There was an issue with the format or content of your request. We may also use this error type for other 4XX status codes not listed below.
* 401 - `authentication_error`: There's an issue with your API key.
* 403 - `permission_error`: Your API key does not have permission to use the specified resource.
* 404 - `not_found_error`: The requested resource was not found.
* 413 - `request_too_large`: Request exceeds the maximum allowed number of bytes.
* 429 - `rate_limit_error`: Your account has hit a rate limit.
* 500 - `api_error`: An unexpected error has occurred internal to Anthropic's systems.
* 529 - `overloaded_error`: Anthropic's API is temporarily overloaded.
When receiving a [streaming](/en/api/streaming) response via SSE, it's possible that an error can occur after returning a 200 response, in which case error handling wouldn't follow these standard mechanisms.
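If you are using one of our client SDKs, these statuses surface as typed exceptions rather than raw HTTP responses. A minimal sketch, assuming the Python SDK's exception classes (`RateLimitError`, `APIStatusError`, `APIConnectionError`):
```Python Python
import anthropic

client = anthropic.Anthropic()

try:
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello, Claude"}],
    )
except anthropic.RateLimitError as e:
    # 429 rate_limit_error: back off before retrying
    print("Rate limited; retry later:", e)
except anthropic.APIStatusError as e:
    # Any other non-success HTTP status returned by the API
    print("API returned status", e.status_code)
except anthropic.APIConnectionError as e:
    # The request never reached the API (network issue, timeout, etc.)
    print("Connection error:", e)
```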
## Error shapes
Errors are always returned as JSON, with a top-level `error` object that always includes a `type` and `message` value. For example:
```JSON JSON
{
"type": "error",
"error": {
"type": "not_found_error",
"message": "The requested resource could not be found."
}
}
```
In accordance with our [versioning](/en/api/versioning) policy, we may expand the values within these objects, and it is possible that the `type` values will grow over time.
## Request id
Every API response includes a unique `request-id` header. This header contains a value such as `req_018EeWyXxfu5pfWkrYcMdjWG`. When contacting support about a specific request, please include this ID to help us quickly resolve your issue.
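If you log request IDs yourself, you can read the header off the raw HTTP response. A minimal sketch using `httpx` (the endpoint, headers, and body mirror the Messages examples in these docs):
```Python Python
import os
import httpx

response = httpx.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": "Hello, Claude"}],
    },
)

# Keep this value in your logs so you can reference it when contacting support.
print(response.headers.get("request-id"))
```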
# Getting help
We've tried to provide the answers to the most common questions in these docs. However, if you need further technical support using Claude, the Anthropic API, or any of our products, you may reach our support team at [support.anthropic.com](https://support.anthropic.com).
We monitor the following inboxes:
* [sales@anthropic.com](mailto:sales@anthropic.com) to commence a paid commercial partnership with us
* [privacy@anthropic.com](mailto:privacy@anthropic.com) to exercise your data access, portability, deletion, or correction rights per our [Privacy Policy](https://www.anthropic.com/privacy)
* [usersafety@anthropic.com](mailto:usersafety@anthropic.com) to report any erroneous, biased, or even offensive responses from Claude, so we can continue to learn and make improvements to ensure our model is safe, fair and beneficial to all
# Getting started
## Accessing the API
The API is made available via our web [Console](https://console.anthropic.com/). You can use the [Workbench](https://console.anthropic.com/workbench/3b57d80a-99f2-4760-8316-d3bb14fbfb1e) to try out the API in the browser and then generate API keys in [Account Settings](https://console.anthropic.com/account/keys). Use [workspaces](https://console.anthropic.com/settings/workspaces) to segment your API keys and [control spend](/en/api/rate-limits) by use case.
## Authentication
All requests to the Anthropic API must include an `x-api-key` header with your API key. If you are using the Client SDKs, you will set the API key when constructing a client, and then the SDK will send the header on your behalf with every request. If integrating directly with the API, you'll need to send this header yourself.
## Content types
The Anthropic API always accepts JSON in request bodies and returns JSON in response bodies. You will need to send the `content-type: application/json` header in requests. If you are using the Client SDKs, this will be taken care of automatically.
## Examples
```bash Shell
curl https://api.anthropic.com/v1/messages \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--data \
'{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "Hello, world"}
]
}'
```
Install via PyPI:
```bash
pip install anthropic
```
```Python Python
import anthropic
client = anthropic.Anthropic(
# defaults to os.environ.get("ANTHROPIC_API_KEY")
api_key="my_api_key",
)
message = client.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
messages=[
{"role": "user", "content": "Hello, Claude"}
]
)
print(message.content)
```
Install via npm:
```bash
npm install @anthropic-ai/sdk
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic({
apiKey: 'my_api_key', // defaults to process.env["ANTHROPIC_API_KEY"]
});
const msg = await anthropic.messages.create({
model: "claude-3-5-sonnet-20241022",
max_tokens: 1024,
messages: [{ role: "user", content: "Hello, Claude" }],
});
console.log(msg);
```
# IP addresses
Anthropic services live at a fixed range of IP addresses. You can add these to your firewall to open the minimum amount of surface area for egress traffic when accessing the Anthropic API and Console. These ranges will not change without notice.
#### IPv4
`160.79.104.0/23`
#### IPv6
`2607:6bc0::/48`
# List Message Batches (beta)
get /v1/messages/batches
List all Message Batches within a Workspace. Most recently created batches are returned first.
While in beta, this endpoint requires passing the `anthropic-beta` header with value `message-batches-2024-09-24`
# Create a Message
post /v1/messages
Send a structured list of input messages with text and/or image content, and the model will generate the next message in the conversation.
The Messages API can be used for either single queries or stateless multi-turn conversations.
# Message Batches examples
Example usage for the Message Batches API
The Message Batches API supports the same set of features as the Messages API. While this page focuses on how to use the Message Batches API, see [Messages API examples](/en/api/messages-examples) for examples of the Messages API featureset.
## Creating a Message Batch
```Python Python
import anthropic
from anthropic.types.beta.message_create_params import MessageCreateParamsNonStreaming
from anthropic.types.beta.messages.batch_create_params import Request
client = anthropic.Anthropic()
message_batch = client.beta.messages.batches.create(
requests=[
Request(
custom_id="my-first-request",
params=MessageCreateParamsNonStreaming(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
messages=[{
"role": "user",
"content": "Hello, world",
}]
)
),
Request(
custom_id="my-second-request",
params=MessageCreateParamsNonStreaming(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
messages=[{
"role": "user",
"content": "Hi again, friend",
}]
)
)
]
)
print(message_batch)
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
const message_batch = await anthropic.beta.messages.batches.create({
requests: [{
custom_id: "my-first-request",
params: {
model: "claude-3-5-sonnet-20241022",
max_tokens: 1024,
messages: [
{"role": "user", "content": "Hello, Claude"}
]
}
}, {
custom_id: "my-second-request",
params: {
model: "claude-3-5-sonnet-20241022",
max_tokens: 1024,
messages: [
{"role": "user", "content": "Hi again, my friend"}
]
}
}]
});
console.log(message_batch);
```
```bash Shell
#!/bin/sh
curl https://api.anthropic.com/v1/messages/batches \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "anthropic-beta: message-batches-2024-09-24" \
--header "content-type: application/json" \
--data '{
"requests": [
{
"custom_id": "my-first-request",
"params": {
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "Hello, Claude"}
]
}
},
{
"custom_id": "my-second-request",
"params": {
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "Hi again, my friend"}
]
}
}
]
}'
```
```JSON JSON
{
"id": "msgbatch_013Zva2CMHLNnXjNJJKqJ2EF",
"type": "message_batch",
"processing_status": "in_progress",
"request_counts": {
"processing": 2,
"succeeded": 0,
"errored": 0,
"canceled": 0,
"expired": 0
},
"ended_at": null,
"created_at": "2024-09-24T18:37:24.100435Z",
"expires_at": "2024-09-25T18:37:24.100435Z",
"cancel_initiated_at": null,
"results_url": null
}
```
## Polling for Message Batch completion
To poll a Message Batch, you'll need its `id`, which is provided in the response when [creating](#creating-a-message-batch) a batch or when [listing](#listing-all-message-batches-in-a-workspace) batches. Example `id`: `msgbatch_013Zva2CMHLNnXjNJJKqJ2EF`.
```Python Python
import anthropic
import time
client = anthropic.Anthropic()
message_batch = None
while True:
message_batch = client.beta.messages.batches.retrieve(
MESSAGE_BATCH_ID
)
if message_batch.processing_status == "ended":
break
print(f"Batch {MESSAGE_BATCH_ID} is still processing...")
time.sleep(60)
print(message_batch)
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
let messageBatch;
while (true) {
messageBatch = await anthropic.beta.messages.batches.retrieve(
MESSAGE_BATCH_ID
);
if (messageBatch.processing_status === 'ended') {
break;
}
console.log(`Batch ${messageBatch.id} is still processing... waiting`);
await new Promise(resolve => setTimeout(resolve, 60_000));
}
console.log(messageBatch);
```
```bash Shell
#!/bin/sh
until [[ $(curl -s "https://api.anthropic.com/v1/messages/batches/$MESSAGE_BATCH_ID" \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "anthropic-beta: message-batches-2024-09-24" \
| grep -o '"processing_status":[[:space:]]*"[^"]*"' \
| cut -d'"' -f4) == "ended" ]]; do
echo "Batch $MESSAGE_BATCH_ID is still processing..."
sleep 60
done
echo "Batch $MESSAGE_BATCH_ID has finished processing"
```
## Listing all Message Batches in a Workspace
```Python Python
import anthropic
client = anthropic.Anthropic()
# Automatically fetches more pages as needed.
for message_batch in client.beta.messages.batches.list(
limit=20
):
print(message_batch)
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
// Automatically fetches more pages as needed.
for await (const messageBatch of anthropic.beta.messages.batches.list({
limit: 20
})) {
console.log(messageBatch);
}
```
```bash Shell
#!/bin/sh
if ! command -v jq &> /dev/null; then
echo "Error: This script requires jq. Please install it first."
exit 1
fi
BASE_URL="https://api.anthropic.com/v1/messages/batches"
has_more=true
after_id=""
while [ "$has_more" = true ]; do
# Construct URL with after_id if it exists
if [ -n "$after_id" ]; then
url="${BASE_URL}?limit=20&after_id=${after_id}"
else
url="$BASE_URL?limit=20"
fi
response=$(curl -s "$url" \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "anthropic-beta: message-batches-2024-09-24")
# Extract values using jq
has_more=$(echo "$response" | jq -r '.has_more')
after_id=$(echo "$response" | jq -r '.last_id')
# Process and print each entry in the data array
echo "$response" | jq -c '.data[]' | while read -r entry; do
echo "$entry" | jq '.'
done
done
```
```Markup Output
{
"id": "msgbatch_013Zva2CMHLNnXjNJJKqJ2EF",
"type": "message_batch",
...
}
{
"id": "msgbatch_01HkcTjaV5uDC8jWR4ZsDV8d",
"type": "message_batch",
...
}
```
## Retrieving Message Batch Results
Once your Message Batch status is `ended`, you will be able to view the `results_url` of the batch and retrieve results in the form of a `.jsonl` file.
```Python Python
import anthropic
client = anthropic.Anthropic()
# Stream results file in memory-efficient chunks, processing one at a time
for result in client.beta.messages.batches.results(
MESSAGE_BATCH_ID,
):
print(result)
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
// Stream results file in memory-efficient chunks, processing one at a time
for await (const result of await anthropic.beta.messages.batches.results(
MESSAGE_BATCH_ID
)) {
console.log(result);
}
```
```bash Shell
#!/bin/sh
curl "https://api.anthropic.com/v1/messages/batches/$MESSAGE_BATCH_ID" \
--header "anthropic-version: 2023-06-01" \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-beta: message-batches-2024-09-24" \
| grep -o '"results_url":[[:space:]]*"[^"]*"' \
| cut -d'"' -f4 \
| xargs curl \
--header "anthropic-version: 2023-06-01" \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-beta: message-batches-2024-09-24"
# Optionally, use jq for pretty-printed JSON:
#| while IFS= read -r line; do
# echo "$line" | jq '.'
# done
```
```Markup Output
{
"custom_id": "my-second-request",
"result": {
"type": "succeeded",
"message": {
"id": "msg_018gCsTGsXkYJVqYPxTgDHBU",
"type": "message",
...
}
}
}
{
"custom_id": "my-first-request",
"result": {
"type": "succeeded",
"message": {
"id": "msg_01XFDUDYJgAACzvnptvVoYEL",
"type": "message",
...
}
}
}
```
## Canceling a Message Batch
Immediately after cancellation, a batch's `processing_status` will be `canceling`. You can use the same [polling for batch completion](#polling-for-message-batch-completion) technique to poll for when cancellation is finalized, as canceled batches also end up `ended` and may contain results.
```Python Python
import anthropic
client = anthropic.Anthropic()
message_batch = client.beta.messages.batches.cancel(
MESSAGE_BATCH_ID,
)
print(message_batch)
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
const messageBatch = await anthropic.beta.messages.batches.cancel(
MESSAGE_BATCH_ID
);
console.log(messageBatch);
```
```bash Shell
#!/bin/sh
curl --request POST https://api.anthropic.com/v1/messages/batches/$MESSAGE_BATCH_ID/cancel \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "anthropic-beta: message-batches-2024-09-24"
```
```JSON JSON
{
"id": "msgbatch_013Zva2CMHLNnXjNJJKqJ2EF",
"type": "message_batch",
"processing_status": "canceling",
"request_counts": {
"processing": 2,
"succeeded": 0,
"errored": 0,
"canceled": 0,
"expired": 0
},
"ended_at": null,
"created_at": "2024-09-24T18:37:24.100435Z",
"expires_at": "2024-09-25T18:37:24.100435Z",
"cancel_initiated_at": "2024-09-24T18:39:03.114875Z",
"results_url": null
}
```
# Count Message tokens (beta)
post /v1/messages/count_tokens
Count the number of tokens in a Message.
The Token Count API can be used to count the number of tokens in a Message, including tools, images, and documents, without creating it.
While in beta, this endpoint requires passing the `anthropic-beta` header with value `token-counting-2024-11-01`
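As a rough sketch of calling this endpoint directly with `httpx` (the body here only shows `model` and `messages`; treat it as illustrative rather than a full list of supported parameters):
```Python Python
import os
import httpx

response = httpx.post(
    "https://api.anthropic.com/v1/messages/count_tokens",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "anthropic-beta": "token-counting-2024-11-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-5-sonnet-20241022",
        "messages": [{"role": "user", "content": "Hello, Claude"}],
    },
)

# The response reports the token count for the given input.
print(response.json())
```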
# Messages examples
Request and response examples for the Messages API
See the [API reference](/en/api/messages) for full documentation on available parameters.
## Basic request and response
```bash Shell
#!/bin/sh
curl https://api.anthropic.com/v1/messages \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--data \
'{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "Hello, Claude"}
]
}'
```
```Python Python
import anthropic
message = anthropic.Anthropic().messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
messages=[
{"role": "user", "content": "Hello, Claude"}
]
)
print(message)
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
const message = await anthropic.messages.create({
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{"role": "user", "content": "Hello, Claude"}
]
});
console.log(message);
```
```JSON JSON
{
"id": "msg_01XFDUDYJgAACzvnptvVoYEL",
"type": "message",
"role": "assistant",
"content": [
{
"type": "text",
"text": "Hello!"
}
],
"model": "claude-3-5-sonnet-20241022",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 12,
"output_tokens": 6
}
}
```
## Multiple conversational turns
The Messages API is stateless, which means that you always send the full conversational history to the API. You can use this pattern to build up a conversation over time. Earlier conversational turns don't need to actually originate from Claude — you can use synthetic `assistant` messages.
```bash Shell
#!/bin/sh
curl https://api.anthropic.com/v1/messages \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--data \
'{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "Hello, Claude"},
{"role": "assistant", "content": "Hello!"},
{"role": "user", "content": "Can you describe LLMs to me?"}
]
}'
```
```Python Python
import anthropic
message = anthropic.Anthropic().messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
messages=[
{"role": "user", "content": "Hello, Claude"},
{"role": "assistant", "content": "Hello!"},
{"role": "user", "content": "Can you describe LLMs to me?"}
],
)
print(message)
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
await anthropic.messages.create({
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{"role": "user", "content": "Hello, Claude"},
{"role": "assistant", "content": "Hello!"},
{"role": "user", "content": "Can you describe LLMs to me?"}
]
});
```
```JSON JSON
{
"id": "msg_018gCsTGsXkYJVqYPxTgDHBU",
"type": "message",
"role": "assistant",
"content": [
{
"type": "text",
"text": "Sure, I'd be happy to provide..."
}
],
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 30,
"output_tokens": 309
}
}
```
## Putting words in Claude's mouth
You can pre-fill part of Claude's response in the last position of the input messages list. This can be used to shape Claude's response. The example below uses `"max_tokens": 1` to get a single multiple choice answer from Claude.
```bash Shell
#!/bin/sh
curl https://api.anthropic.com/v1/messages \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--data \
'{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1,
"messages": [
{"role": "user", "content": "What is latin for Ant? (A) Apoidea, (B) Rhopalocera, (C) Formicidae"},
{"role": "assistant", "content": "The answer is ("}
]
}'
```
```Python Python
import anthropic
message = anthropic.Anthropic().messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1,
messages=[
{"role": "user", "content": "What is latin for Ant? (A) Apoidea, (B) Rhopalocera, (C) Formicidae"},
{"role": "assistant", "content": "The answer is ("}
]
)
print(message)
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
const message = await anthropic.messages.create({
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1,
messages: [
{"role": "user", "content": "What is latin for Ant? (A) Apoidea, (B) Rhopalocera, (C) Formicidae"},
{"role": "assistant", "content": "The answer is ("}
]
});
console.log(message);
```
```JSON JSON
{
"id": "msg_01Q8Faay6S7QPTvEUUQARt7h",
"type": "message",
"role": "assistant",
"content": [
{
"type": "text",
"text": "C"
}
],
"model": "claude-3-5-sonnet-20241022",
"stop_reason": "max_tokens",
"stop_sequence": null,
"usage": {
"input_tokens": 42,
"output_tokens": 1
}
}
```
## Vision
Claude can read both text and images in requests. Currently, we support the `base64` source type for images, and the `image/jpeg`, `image/png`, `image/gif`, and `image/webp` media types. See our [vision guide](/en/docs/vision) for more details.
```bash Shell
#!/bin/sh
IMAGE_URL="https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg"
IMAGE_MEDIA_TYPE="image/jpeg"
IMAGE_BASE64=$(curl "$IMAGE_URL" | base64)
curl https://api.anthropic.com/v1/messages \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--data \
'{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": [
{"type": "image", "source": {
"type": "base64",
"media_type": "'$IMAGE_MEDIA_TYPE'",
"data": "'$IMAGE_BASE64'"
}},
{"type": "text", "text": "What is in the above image?"}
]}
]
}'
```
```Python Python
import anthropic
import base64
import httpx
image_url = "https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg"
image_media_type = "image/jpeg"
image_data = base64.standard_b64encode(httpx.get(image_url).content).decode("utf-8")
message = anthropic.Anthropic().messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
messages=[
{
"role": "user",
"content": [
{
"type": "image",
"source": {
"type": "base64",
"media_type": image_media_type,
"data": image_data,
},
}
],
}
],
)
print(message)
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
const image_url = "https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg"
const image_media_type = "image/jpeg"
const image_array_buffer = await ((await fetch(image_url)).arrayBuffer());
const image_data = Buffer.from(image_array_buffer).toString('base64');
const message = await anthropic.messages.create({
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
"role": "user",
"content": [
{
"type": "image",
"source": {
"type": "base64",
"media_type": image_media_type,
"data": image_data,
},
}
],
}
]
});
console.log(message);
```
```JSON JSON
{
"id": "msg_01EcyWo6m4hyW8KHs2y2pei5",
"type": "message",
"role": "assistant",
"content": [
{
"type": "text",
"text": "This image shows an ant, specifically a close-up view of an ant. The ant is shown in detail, with its distinct head, antennae, and legs clearly visible. The image is focused on capturing the intricate details and features of the ant, likely taken with a macro lens to get an extreme close-up perspective."
}
],
"model": "claude-3-5-sonnet-20241022",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 1551,
"output_tokens": 71
}
}
```
## Tool use, JSON mode, and computer use (beta)
See our [guide](/en/docs/build-with-claude/tool-use) for examples for how to use tools with the Messages API.
See our [computer use (beta) guide](/en/docs/build-with-claude/computer-use) for examples of how to control desktop computer environments with the Messages API.
# Streaming Messages
When creating a Message, you can set `"stream": true` to incrementally stream the response using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent%5Fevents/Using%5Fserver-sent%5Fevents) (SSE).
## Streaming with SDKs
Our [Python](https://github.com/anthropics/anthropic-sdk-python) and [TypeScript](https://github.com/anthropics/anthropic-sdk-typescript) SDKs offer multiple ways of streaming. The Python SDK allows both sync and async streams. See the documentation in each SDK for details.
```Python Python
import anthropic
client = anthropic.Anthropic()
with client.messages.stream(
max_tokens=1024,
messages=[{"role": "user", "content": "Hello"}],
model="claude-3-5-sonnet-20241022",
) as stream:
for text in stream.text_stream:
print(text, end="", flush=True)
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const client = new Anthropic();
await client.messages.stream({
messages: [{role: 'user', content: "Hello"}],
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
}).on('text', (text) => {
console.log(text);
});
```
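The Python SDK also supports asynchronous streaming. A minimal sketch, assuming `AsyncAnthropic` and the same `stream` helper shown above:
```Python Python
import asyncio
import anthropic

client = anthropic.AsyncAnthropic()

async def main():
    async with client.messages.stream(
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello"}],
        model="claude-3-5-sonnet-20241022",
    ) as stream:
        async for text in stream.text_stream:
            print(text, end="", flush=True)

asyncio.run(main())
```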
## Event types
Each server-sent event includes a named event type and associated JSON data. Each event will use an SSE event name (e.g. `event: message_stop`), and include the matching event `type` in its data.
Each stream uses the following event flow:
1. `message_start`: contains a `Message` object with empty `content`.
2. A series of content blocks, each of which has a `content_block_start`, one or more `content_block_delta` events, and a `content_block_stop` event. Each content block will have an `index` that corresponds to its index in the final Message `content` array.
3. One or more `message_delta` events, indicating top-level changes to the final `Message` object.
4. A final `message_stop` event.
### Ping events
Event streams may also include any number of `ping` events.
### Error events
We may occasionally send [errors](/en/api/errors) in the event stream. For example, during periods of high usage, you may receive an `overloaded_error`, which would normally correspond to an HTTP 529 in a non-streaming context:
```json Example error
event: error
data: {"type": "error", "error": {"type": "overloaded_error", "message": "Overloaded"}}
```
### Other events
In accordance with our [versioning policy](/en/api/versioning), we may add new event types, and your code should handle unknown event types gracefully.
## Delta types
Each `content_block_delta` event contains a `delta` of a type that updates the `content` block at a given `index`.
### Text delta
A `text` content block delta looks like:
```JSON Text delta
event: content_block_delta
data: {"type": "content_block_delta","index": 0,"delta": {"type": "text_delta", "text": "ello frien"}}
```
### Input JSON delta
The deltas for `tool_use` content blocks correspond to updates for the `input` field of the block. To support maximum granularity, the deltas are *partial JSON strings*, whereas the final `tool_use.input` is always an *object*.
You can accumulate the string deltas and parse the JSON once you receive a `content_block_stop` event, by using a library like [Pydantic](https://docs.pydantic.dev/latest/concepts/json/#partial-json-parsing) to do partial JSON parsing, or by using our [SDKs](https://docs.anthropic.com/en/api/client-sdks), which provide helpers to access parsed incremental values.
A `tool_use` content block delta looks like:
```JSON Input JSON delta
event: content_block_delta
data: {"type": "content_block_delta","index": 1,"delta": {"type": "input_json_delta","partial_json": "{\"location\": \"San Fra"}}}
```
Note: Our current models only support emitting one complete key and value property from `input` at a time. As such, when using tools, there may be delays between streaming events while the model is working. Once an `input` key and value are accumulated, we emit them as multiple `content_block_delta` events with chunked partial JSON so that the format can automatically support finer granularity in future models.
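For a direct integration, one way to handle this is to accumulate the partial strings per content block `index` and parse them once the block stops. A minimal sketch (it assumes the SSE stream has already been decoded into `(event_type, data)` pairs):
```Python Python
import json

def collect_tool_inputs(events):
    """Accumulate input_json_delta fragments and parse them per content block.

    `events` is assumed to be an iterable of (event_type, data) pairs already
    decoded from the server-sent event stream.
    """
    buffers = {}  # content block index -> accumulated partial JSON string
    inputs = {}   # content block index -> parsed `input` object

    for event_type, data in events:
        if event_type == "content_block_delta" and data["delta"]["type"] == "input_json_delta":
            buffers[data["index"]] = buffers.get(data["index"], "") + data["delta"]["partial_json"]
        elif event_type == "content_block_stop" and buffers.get(data["index"]):
            # The accumulated fragments now form a complete JSON object.
            inputs[data["index"]] = json.loads(buffers[data["index"]])

    return inputs
```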
## Raw HTTP Stream response
We strongly recommend that you use our [client SDKs](/en/api/client-sdks) when using streaming mode. However, if you are building a direct API integration, you will need to handle these events yourself.
A stream response consists of:
1. A `message_start` event
2. Potentially multiple content blocks, each of which contains:
a. A `content_block_start` event
b. Potentially multiple `content_block_delta` events
c. A `content_block_stop` event
3. A `message_delta` event
4. A `message_stop` event
There may be `ping` events dispersed throughout the response as well. See [Event types](#event-types) for more details on the format.
### Basic streaming request
```bash Request
curl https://api.anthropic.com/v1/messages \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--data \
'{
"model": "claude-3-5-sonnet-20241022",
"messages": [{"role": "user", "content": "Hello"}],
"max_tokens": 256,
"stream": true
}'
```
```json Response
event: message_start
data: {"type": "message_start", "message": {"id": "msg_1nZdL29xx5MUA1yADyHTEsnR8uuvGzszyY", "type": "message", "role": "assistant", "content": [], "model": "claude-3-5-sonnet-20241022", "stop_reason": null, "stop_sequence": null, "usage": {"input_tokens": 25, "output_tokens": 1}}}
event: content_block_start
data: {"type": "content_block_start", "index": 0, "content_block": {"type": "text", "text": ""}}
event: ping
data: {"type": "ping"}
event: content_block_delta
data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "Hello"}}
event: content_block_delta
data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "!"}}
event: content_block_stop
data: {"type": "content_block_stop", "index": 0}
event: message_delta
data: {"type": "message_delta", "delta": {"stop_reason": "end_turn", "stop_sequence":null}, "usage": {"output_tokens": 15}}
event: message_stop
data: {"type": "message_stop"}
```
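If you are not using an SDK, you will need to decode these lines yourself. A minimal sketch with `httpx` (a simplified parser that assumes each event arrives as an `event:` line followed by a single `data:` line, as in the response above):
```Python Python
import os
import json
import httpx

headers = {
    "x-api-key": os.environ["ANTHROPIC_API_KEY"],
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
}
body = {
    "model": "claude-3-5-sonnet-20241022",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 256,
    "stream": True,
}

with httpx.stream("POST", "https://api.anthropic.com/v1/messages",
                  headers=headers, json=body, timeout=None) as response:
    for line in response.iter_lines():
        if not line.startswith("data:"):
            continue  # skip "event:" lines and blank keep-alive separators
        data = json.loads(line[len("data:"):])
        if data["type"] == "content_block_delta" and data["delta"]["type"] == "text_delta":
            print(data["delta"]["text"], end="", flush=True)
        elif data["type"] == "message_stop":
            print()
```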
### Streaming request with tool use
In this request, we ask Claude to use a tool to tell us the weather.
```bash Request
curl https://api.anthropic.com/v1/messages \
-H "content-type: application/json" \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-d '{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"tools": [
{
"name": "get_weather",
"description": "Get the current weather in a given location",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
}
},
"required": ["location"]
}
}
],
"tool_choice": {"type": "any"},
"messages": [
{
"role": "user",
"content": "What is the weather like in San Francisco?"
}
],
"stream": true
}'
```
```json Response
event: message_start
data: {"type":"message_start","message":{"id":"msg_014p7gG3wDgGV9EUtLvnow3U","type":"message","role":"assistant","model":"claude-3-haiku-20240307","stop_sequence":null,"usage":{"input_tokens":472,"output_tokens":2},"content":[],"stop_reason":null}}
event: content_block_start
data: {"type":"content_block_start","index":0,"content_block":{"type":"text","text":""}}
event: ping
data: {"type": "ping"}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"Okay"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":","}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" let"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"'s"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" check"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" the"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" weather"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" for"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" San"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" Francisco"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":","}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" CA"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":":"}}
event: content_block_stop
data: {"type":"content_block_stop","index":0}
event: content_block_start
data: {"type":"content_block_start","index":1,"content_block":{"type":"tool_use","id":"toolu_01T1x1fJ34qAmk2tNTrN7Up6","name":"get_weather","input":{}}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":""}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":"{\"location\":"}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":" \"San"}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":" Francisc"}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":"o,"}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":" CA\""}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":", "}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":"\"unit\": \"fah"}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":"renheit\"}"}}
event: content_block_stop
data: {"type":"content_block_stop","index":1}
event: message_delta
data: {"type":"message_delta","delta":{"stop_reason":"tool_use","stop_sequence":null},"usage":{"output_tokens":89}}
event: message_stop
data: {"type":"message_stop"}
```
# Migrating from Text Completions
Migrating from Text Completions to Messages
When migrating from [Text Completions](/en/api/complete) to [Messages](/en/api/messages), consider the following changes.
### Inputs and outputs
The largest change between Text Completions and Messages is the way in which you specify model inputs and receive outputs from the model.
With Text Completions, inputs are raw strings:
```Python Python
prompt = "\n\nHuman: Hello there\n\nAssistant: Hi, I'm Claude. How can I help?\n\nHuman: Can you explain Glycolysis to me?\n\nAssistant:"
```
With Messages, you specify a list of input messages instead of a raw prompt:
```json Shorthand
messages = [
{"role": "user", "content": "Hello there."},
{"role": "assistant", "content": "Hi, I'm Claude. How can I help?"},
{"role": "user", "content": "Can you explain Glycolysis to me?"},
]
```
```json Expanded
messages = [
{"role": "user", "content": [{"type": "text", "text": "Hello there."}]},
{"role": "assistant", "content": [{"type": "text", "text": "Hi, I'm Claude. How can I help?"}]},
{"role": "user", "content":[{"type": "text", "text": "Can you explain Glycolysis to me?"}]},
]
```
Each input message has a `role` and `content`.
**Role names**
The Text Completions API expects alternating `\n\nHuman:` and `\n\nAssistant:` turns, but the Messages API expects `user` and `assistant` roles. You may see documentation referring to either "human" or "user" turns. These refer to the same role, and will be "user" going forward.
With Text Completions, the model's generated text is returned in the `completion` values of the response:
```Python Python
>>> response = anthropic.completions.create(...)
>>> response.completion
" Hi, I'm Claude"
```
With Messages, the response is the `content` value, which is a list of content blocks:
```Python Python
>>> response = anthropic.messages.create(...)
>>> response.content
[{"type": "text", "text": "Hi, I'm Claude"}]
```
### Putting words in Claude's mouth
With Text Completions, you can pre-fill part of Claude's response:
```Python Python
prompt = "\n\nHuman: Hello\n\nAssistant: Hello, my name is"
```
With Messages, you can achieve the same result by making the last input message have the `assistant` role:
```Python Python
messages = [
{"role": "user", "content": "Hello"},
{"role": "assistant", "content": "Hello, my name is"},
]
```
When doing so, response `content` will continue from the last input message `content`:
```JSON JSON
{
"role": "assistant",
"content": [{"type": "text", "text": " Claude. How can I assist you today?" }],
...
}
```
### System prompt
With Text Completions, the [system prompt](/en/docs/system-prompts) is specified by adding text before the first `\n\nHuman:` turn:
```Python Python
prompt = "Today is January 1, 2024.\n\nHuman: Hello, Claude\n\nAssistant:"
```
With Messages, you specify the system prompt with the `system` parameter:
```Python Python
anthropic.Anthropic().messages.create(
model="claude-3-opus-20240229",
max_tokens=1024,
system="Today is January 1, 2024.", # <-- system prompt
messages=[
{"role": "user", "content": "Hello, Claude"}
]
)
```
### Model names
The Messages API requires that you specify the full model version (e.g. `claude-3-opus-20240229`).
We previously supported specifying only the major version number (e.g. `claude-2`), which resulted in automatic upgrades to minor versions. However, we no longer recommend this integration pattern, and the Messages API does not support it.
### Stop reason
Text Completions always have a `stop_reason` of either:
* `"stop_sequence"`: The model either ended its turn naturally, or one of your custom stop sequences was generated.
* `"max_tokens"`: Either the model generated your specified `max_tokens` of content, or it reached its [absolute maximum](/en/docs/models-overview#model-comparison).
Messages have a `stop_reason` of one of the following values:
* `"end_turn"`: The conversational turn ended naturally.
* `"stop_sequence"`: One of your specified custom stop sequences was generated.
* `"max_tokens"`: (unchanged)
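For example, code that branches on the stop reason of a Messages response might look roughly like this (a sketch):
```Python Python
import anthropic

message = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello, Claude"}],
)

if message.stop_reason == "max_tokens":
    # The response was cut off; consider retrying with a larger max_tokens.
    print("Truncated:", message.content)
elif message.stop_reason == "stop_sequence":
    print("Hit stop sequence:", message.stop_sequence)
else:  # "end_turn"
    print("Finished naturally:", message.content)
```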
### Specifying max tokens
* Text Completions: `max_tokens_to_sample` parameter. No validation, but capped values per-model.
* Messages: `max_tokens` parameter. If passing a value higher than the model supports, returns a validation error.
### Streaming format
When using `"stream": true` with Text Completions, the response included any of `completion`, `ping`, and `error` server-sent-events. See [Text Completions streaming](https://anthropic.readme.io/claude/reference/streaming) for details.
Messages can contain multiple content blocks of varying types, and so its streaming format is somewhat more complex. See [Messages streaming](https://anthropic.readme.io/claude/reference/messages-streaming) for details.
# Prompt validation
With Text Completions
**Legacy API**
The Text Completions API is a legacy API. Future models and features will require use of the [Messages API](/en/api/messages), and we recommend [migrating](/en/api/migrating-from-text-completions-to-messages) as soon as possible.
The Anthropic API performs basic prompt sanitization and validation to help ensure that your prompts are well-formatted for Claude.
When creating Text Completions, if your prompt is not in the specified format, the API will first attempt to lightly sanitize it (for example, by removing trailing spaces). This exact behavior is subject to change, and we strongly recommend that you format your prompts with the [recommended](/en/docs/prompt-engineering#the-prompt-is-formatted-correctly) alternating `\n\nHuman:` and `\n\nAssistant:` turns.
Then, the API will validate your prompt under the following conditions:
* The first conversational turn in the prompt must be a `\n\nHuman:` turn
* The last conversational turn in the prompt must be an `\n\nAssistant:` turn
* The prompt must be less than `100,000 - 1` tokens in length.
## Examples
The following prompts will result in [API errors](/en/api/errors):
```Python Python
# Missing "\n\nHuman:" and "\n\nAssistant:" turns
prompt = "Hello, world"
# Missing "\n\nHuman:" turn
prompt = "Hello, world\n\nAssistant:"
# Missing "\n\nAssistant:" turn
prompt = "\n\nHuman: Hello, Claude"
# "\n\nHuman:" turn is not first
prompt = "\n\nAssistant: Hello, world\n\nHuman: Hello, Claude\n\nAssistant:"
# "\n\nAssistant:" turn is not last
prompt = "\n\nHuman: Hello, Claude\n\nAssistant: Hello, world\n\nHuman: How many toes do dogs have?"
# "\n\nAssistant:" only has one "\n"
prompt = "\n\nHuman: Hello, Claude \nAssistant:"
```
The following are currently accepted and automatically sanitized by the API, but you should not rely on this behavior, as it may change in the future:
```Python Python
# No leading "\n\n" for "\n\nHuman:"
prompt = "Human: Hello, Claude\n\nAssistant:"
# Trailing space after "\n\nAssistant:"
prompt = "\n\nHuman: Hello, Claude\n\nAssistant: "
```
# Rate limits
To mitigate misuse and manage capacity on our API, we have implemented limits on how much an organization can use the Claude API.
We have two types of limits:
1. **Spend limits** set a maximum monthly cost an organization can incur for API usage.
2. **Rate limits** set the maximum number of API requests an organization can make over a defined period of time.
We enforce service-configured limits at the organization level, but you may also set user-configurable limits for your organization's workspaces.
## About our limits
* Limits are designed to prevent API abuse, while minimizing impact on common customer usage patterns.
* Limits are defined by usage tier, where each tier is associated with a different set of spend and rate limits.
* Your organization will increase tiers automatically as you reach certain thresholds while using the API. Limits are set at the organization level. You can see your organization's limits in the [Limits page](https://console.anthropic.com/settings/limits) in the [Anthropic Console](https://console.anthropic.com/).
* You may hit rate limits over shorter time intervals. For instance, a rate of 60 requests per minute (RPM) may be enforced as 1 request per second. Short bursts of requests at a high volume can surpass the rate limit and result in rate limit errors.
* The limits outlined below are our standard limits. If you’re seeking higher, custom limits, contact sales through the [Anthropic Console](https://console.anthropic.com/settings/limits).
* We use the [token bucket algorithm](https://en.wikipedia.org/wiki/Token_bucket) to do rate limiting (a client-side sketch of the same idea follows this list).
* All limits described here represent maximum allowed usage, not guaranteed minimums. These limits are designed to prevent overuse and ensure fair distribution of resources among users.
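If you want to smooth your own traffic to stay under these limits, the same token bucket idea can be applied client-side. A minimal sketch (illustrative only; it is not a description of our server-side implementation):
```Python Python
import time

class TokenBucket:
    """A simple client-side token bucket for pacing outgoing requests."""

    def __init__(self, rate_per_minute: float, capacity: float):
        self.rate = rate_per_minute / 60.0  # tokens added per second
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def acquire(self, cost: float = 1.0) -> None:
        """Block until `cost` tokens are available, then consume them."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= cost:
                self.tokens -= cost
                return
            time.sleep((cost - self.tokens) / self.rate)

# Example: pace calls to stay under 50 requests per minute.
bucket = TokenBucket(rate_per_minute=50, capacity=50)
bucket.acquire()  # call once before each API request
```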
## Spend limits
Each usage tier has a limit on how much you can spend on the API each calendar month. Once you reach your tier's spend limit, you will have to wait until the next month to use the API again, unless you qualify for the next tier.
To qualify for the next tier, you must meet a deposit requirement and a mandatory wait period. Higher tiers require longer wait periods. Note that, to minimize the risk of overfunding your account, you cannot deposit more than your monthly spend limit.
### Requirements to advance tier
| Usage Tier        | Credit Purchase | Wait After First Purchase | Max Usage per Month |
| ----------------- | --------------- | ------------------------- | ------------------- |
| Tier 1            | \$5             | 0 days                    | \$100               |
| Tier 2            | \$40            | 7 days                    | \$500               |
| Tier 3            | \$200           | 7 days                    | \$1,000             |
| Tier 4            | \$400           | 14 days                   | \$5,000             |
| Monthly Invoicing | N/A             | N/A                       | N/A                 |
## Rate limits
Our rate limits are currently measured in requests per minute, tokens per minute, and tokens per day for each model class. If you exceed any of the rate limits, you will get a [429 error](/en/api/errors). The tables below show the rate limits for each usage tier.
Rate limits are tracked per model; models within the same tier therefore do not share a rate limit.
### Tier 1
| Model                               | Maximum Requests per minute (RPM) | Maximum Tokens per minute (TPM) | Maximum Tokens per day (TPD) |
| ----------------------------------- | --------------------------------- | ------------------------------- | ---------------------------- |
| Claude 3.5 Sonnet 2024-10-22 | 50 | 40,000 | 1,000,000 |
| Claude 3.5 Sonnet 2024-06-20 | 50 | 40,000 | 1,000,000 |
| Claude 3 Opus | 50 | 20,000 | 1,000,000 |
| Claude 3 Sonnet | 50 | 40,000 | 1,000,000 |
| Claude 3 Haiku | 50 | 50,000 | 5,000,000 |
### Tier 2
| Model                               | Maximum Requests per minute (RPM) | Maximum Tokens per minute (TPM) | Maximum Tokens per day (TPD) |
| ----------------------------------- | --------------------------------- | ------------------------------- | ---------------------------- |
| Claude 3.5 Sonnet 2024-10-22 | 1,000 | 80,000 | 2,500,000 |
| Claude 3.5 Sonnet 2024-06-20 | 1,000 | 80,000 | 2,500,000 |
| Claude 3 Opus | 1,000 | 40,000 | 2,500,000 |
| Claude 3 Sonnet | 1,000 | 80,000 | 2,500,000 |
| Claude 3 Haiku | 1,000 | 100,000 | 25,000,000 |
### Tier 3
| Model                               | Maximum Requests per minute (RPM) | Maximum Tokens per minute (TPM) | Maximum Tokens per day (TPD) |
| ----------------------------------- | --------------------------------- | ------------------------------- | ---------------------------- |
| Claude 3.5 Sonnet 2024-10-22 | 2,000 | 160,000 | 5,000,000 |
| Claude 3.5 Sonnet 2024-06-20 | 2,000 | 160,000 | 5,000,000 |
| Claude 3 Opus | 2,000 | 80,000 | 5,000,000 |
| Claude 3 Sonnet | 2,000 | 160,000 | 5,000,000 |
| Claude 3 Haiku | 2,000 | 200,000 | 50,000,000 |
### Tier 4
| Model                               | Maximum Requests per minute (RPM) | Maximum Tokens per minute (TPM) | Maximum Tokens per day (TPD) |
| ----------------------------------- | --------------------------------- | ------------------------------- | ---------------------------- |
| Claude 3.5 Sonnet 2024-10-22 | 4,000 | 400,000 | 50,000,000 |
| Claude 3.5 Sonnet 2024-06-20 | 4,000 | 400,000 | 50,000,000 |
| Claude 3 Opus | 4,000 | 400,000 | 10,000,000 |
| Claude 3 Sonnet | 4,000 | 400,000 | 50,000,000 |
| Claude 3 Haiku | 4,000 | 400,000 | 100,000,000 |
If you're seeking higher limits for an Enterprise use case, contact sales through the [Anthropic Console](https://console.anthropic.com/settings/limits).
## Setting lower limits for Workspaces
To protect Workspaces in your Organization from potential overuse, you can set custom spend and rate limits per Workspace.
Example: If your Organization's limit is 80,000 tokens per minute, you might limit one Workspace to 30,000 tokens per minute. This protects other Workspaces from potential overuse and ensures a more equitable distribution of resources across your Organization. The remaining 50,000 tokens per minute (or more, if that Workspace doesn't use the limit) are then available for other Workspaces to use.
Note:
* You can't set limits on the default Workspace.
* If not set, Workspace limits match the Organization's limit.
* Organization-wide limits always apply, even if Workspace limits add up to more.
## Response headers
The API response includes headers that show you the rate limit enforced, current usage, and when the limit will be reset.
The following headers are returned:
| Header | Description |
| ---------------------------------------- | ------------------------------------------------------------------------------------------- |
| `anthropic-ratelimit-requests-limit` | The maximum number of requests allowed within any rate limit period. |
| `anthropic-ratelimit-requests-remaining` | The number of requests remaining before being rate limited. |
| `anthropic-ratelimit-requests-reset` | The time when the request rate limit will reset, provided in RFC 3339 format. |
| `anthropic-ratelimit-tokens-limit`       | The maximum number of tokens allowed within any rate limit period.                           |
| `anthropic-ratelimit-tokens-remaining` | The number of tokens remaining (rounded to the nearest thousand) before being rate limited. |
| `anthropic-ratelimit-tokens-reset` | The time when the token rate limit will reset, provided in RFC 3339 format. |
| `retry-after` | The number of seconds until you can retry the request. |
The rate limit headers display the values for the most restrictive limit currently in effect. For example, if you have exceeded the per-minute token limit but not the daily token limit, the headers will contain the per-minute token rate limit values. This approach ensures that you have visibility into the most relevant constraint on your current API usage.
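For illustration, here is a minimal sketch (using the `requests` library rather than the official SDK, with a placeholder API key) that inspects these headers and waits for `retry-after` seconds when a request is rate limited:

```python
import time
import requests

API_URL = "https://api.anthropic.com/v1/messages"
HEADERS = {
    "x-api-key": "my_api_key",  # placeholder: replace with your API key
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
}

def create_message_with_backoff(payload, max_retries=3):
    """Send a Messages API request, retrying after the suggested delay on 429s."""
    for _ in range(max_retries):
        response = requests.post(API_URL, headers=HEADERS, json=payload)
        # The headers reflect the most restrictive limit currently in effect
        print("Requests remaining:", response.headers.get("anthropic-ratelimit-requests-remaining"))
        print("Tokens remaining:", response.headers.get("anthropic-ratelimit-tokens-remaining"))
        print("Limit resets at:", response.headers.get("anthropic-ratelimit-tokens-reset"))
        if response.status_code != 429:
            return response.json()
        # Rate limited: wait for the number of seconds indicated by retry-after
        time.sleep(int(response.headers.get("retry-after", "1")))
    raise RuntimeError("Still rate limited after retries")
```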
# Retrieve Message Batch Results (beta)
get /v1/messages/batches/{message_batch_id}/results
Streams the results of a Message Batch as a `.jsonl` file.
Each line in the file is a JSON object containing the result of a single request in the Message Batch. Results are not guaranteed to be in the same order as requests. Use the `custom_id` field to match results to requests.
While in beta, this endpoint requires passing the `anthropic-beta` header with value `message-batches-2024-09-24`.
The path for retrieving Message Batch results should be pulled from the batch's `results_url`. This path should not be assumed and may change.
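For example, a minimal sketch of streaming and parsing the results with the `requests` library might look like the following (the `results_url` value shown is a placeholder; always read it from the batch object):

```python
import json
import requests

headers = {
    "x-api-key": "my_api_key",  # placeholder: replace with your API key
    "anthropic-version": "2023-06-01",
    "anthropic-beta": "message-batches-2024-09-24",
}

# Placeholder: read this value from the `results_url` field of the Message Batch object
results_url = "https://api.anthropic.com/v1/messages/batches/msgbatch_id/results"

with requests.get(results_url, headers=headers, stream=True) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if not line:
            continue
        result = json.loads(line)
        # Each line is one result; match it to its originating request via custom_id
        print(result["custom_id"], result)
```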
# Retrieve a Message Batch (beta)
get /v1/messages/batches/{message_batch_id}
This endpoint is idempotent and can be used to poll for Message Batch completion. To access the results of a Message Batch, make a request to the `results_url` field in the response.
While in beta, this endpoint requires passing the `anthropic-beta` header with value `message-batches-2024-09-24`
# Streaming Text Completions
**Legacy API**
The Text Completions API is a legacy API. Future models and features will require use of the [Messages API](/en/api/messages), and we recommend [migrating](/en/api/migrating-from-text-completions-to-messages) as soon as possible.
When creating a Text Completion, you can set `"stream": true` to incrementally stream the response using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent%5Fevents/Using%5Fserver-sent%5Fevents) (SSE). If you are using our [client libraries](/en/api/client-sdks), parsing these events will be handled for you automatically. However, if you are building a direct API integration, you will need to handle these events yourself.
## Example
```bash Request
curl https://api.anthropic.com/v1/complete \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--data '
{
"model": "claude-2",
"prompt": "\n\nHuman: Hello, world!\n\nAssistant:",
"max_tokens_to_sample": 256,
"stream": true
}
'
```
```json Response
event: completion
data: {"type": "completion", "completion": " Hello", "stop_reason": null, "model": "claude-2.0"}
event: completion
data: {"type": "completion", "completion": "!", "stop_reason": null, "model": "claude-2.0"}
event: ping
data: {"type": "ping"}
event: completion
data: {"type": "completion", "completion": " My", "stop_reason": null, "model": "claude-2.0"}
event: completion
data: {"type": "completion", "completion": " name", "stop_reason": null, "model": "claude-2.0"}
event: completion
data: {"type": "completion", "completion": " is", "stop_reason": null, "model": "claude-2.0"}
event: completion
data: {"type": "completion", "completion": " Claude", "stop_reason": null, "model": "claude-2.0"}
event: completion
data: {"type": "completion", "completion": ".", "stop_reason": null, "model": "claude-2.0"}
event: completion
data: {"type": "completion", "completion": "", "stop_reason": "stop_sequence", "model": "claude-2.0"}
```
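If you are using the Python client library, the equivalent request looks roughly like the sketch below; the SDK parses the SSE stream for you and yields incremental completions:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

stream = client.completions.create(
    model="claude-2",
    prompt=f"{anthropic.HUMAN_PROMPT} Hello, world!{anthropic.AI_PROMPT}",
    max_tokens_to_sample=256,
    stream=True,
)

for completion in stream:
    # Each event carries an incremental piece of the completion text
    print(completion.completion, end="", flush=True)
```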
## Events
Each event includes a named event type and associated JSON data.
Event types: `completion`, `ping`, `error`.
### Error event types
We may occasionally send [errors](/en/api/errors) in the event stream. For example, during periods of high usage, you may receive an `overloaded_error`, which would normally correspond to an HTTP 529 in a non-streaming context:
```json Example error
event: completion
data: {"completion": " Hello", "stop_reason": null, "model": "claude-2.0"}
event: error
data: {"error": {"type": "overloaded_error", "message": "Overloaded"}}
```
## Older API versions
If you are using an [API version](/en/api/versioning) prior to `2023-06-01`, the response shape will be different. See [versioning](/en/api/versioning) for details.
# Supported regions
Here are the countries, regions, and territories we can currently support access from:
* Albania
* Algeria
* Andorra
* Angola
* Antigua and Barbuda
* Argentina
* Armenia
* Australia
* Austria
* Azerbaijan
* Bahamas
* Bangladesh
* Barbados
* Belgium
* Belize
* Benin
* Bhutan
* Bolivia
* Botswana
* Brazil
* Brunei
* Bulgaria
* Burkina Faso
* Cabo Verde
* Canada
* Chile
* Colombia
* Comoros
* Congo, Republic of the
* Costa Rica
* Côte d'Ivoire
* Croatia
* Cyprus
* Czechia (Czech Republic)
* Denmark
* Djibouti
* Dominica
* Dominican Republic
* Ecuador
* El Salvador
* Estonia
* Fiji
* Finland
* France
* Gabon
* Gambia
* Georgia
* Germany
* Ghana
* Greece
* Grenada
* Guatemala
* Guinea
* Guinea-Bissau
* Guyana
* Haiti
* Holy See (Vatican City)
* Honduras
* Hungary
* Iceland
* India
* Indonesia
* Iraq
* Ireland
* Israel
* Italy
* Jamaica
* Japan
* Jordan
* Kazakhstan
* Kenya
* Kiribati
* Kuwait
* Kyrgyzstan
* Latvia
* Lebanon
* Lesotho
* Liberia
* Liechtenstein
* Lithuania
* Luxembourg
* Madagascar
* Malawi
* Malaysia
* Maldives
* Malta
* Marshall Islands
* Mauritania
* Mauritius
* Mexico
* Micronesia
* Moldova
* Monaco
* Mongolia
* Montenegro
* Morocco
* Mozambique
* Namibia
* Nauru
* Nepal
* Netherlands
* New Zealand
* Niger
* Nigeria
* North Macedonia
* Norway
* Oman
* Pakistan
* Palau
* Palestine
* Panama
* Papua New Guinea
* Paraguay
* Peru
* Philippines
* Poland
* Portugal
* Qatar
* Romania
* Rwanda
* Saint Kitts and Nevis
* Saint Lucia
* Saint Vincent and the Grenadines
* Samoa
* San Marino
* Sao Tome and Principe
* Saudi Arabia
* Senegal
* Serbia
* Seychelles
* Sierra Leone
* Singapore
* Slovakia
* Slovenia
* Solomon Islands
* South Africa
* South Korea
* Spain
* Sri Lanka
* Suriname
* Sweden
* Switzerland
* Taiwan
* Tanzania
* Thailand
* Timor-Leste, Democratic Republic of
* Togo
* Tonga
* Trinidad and Tobago
* Tunisia
* Turkey
* Tuvalu
* Uganda
* Ukraine (except Crimea, Donetsk, and Luhansk regions)
* United Arab Emirates
* United Kingdom
* United States of America
* Uruguay
* Vanuatu
* Vietnam
* Zambia
# Versions
When making API requests, you must send an `anthropic-version` request header. For example, `anthropic-version: 2023-06-01`. If you are using our [client libraries](/en/api/client-libraries), this is handled for you automatically.
For any given API version, we will preserve:
* Existing input parameters
* Existing output parameters
However, we may do the following:
* Add additional optional inputs
* Add additional values to the output
* Change conditions for specific error types
* Add new variants to enum-like output values (for example, streaming event types)
Generally, if you are using the API as documented in this reference, we will not break your usage.
## Version history
We always recommend using the latest API version whenever possible. Previous versions are considered deprecated and may be unavailable for new users.
* `2023-06-01`
* New format for [streaming](/en/api/streaming) server-sent events (SSE):
* Completions are incremental. For example, `" Hello"`, `" my"`, `" name"`, `" is"`, `" Claude." ` instead of `" Hello"`, `" Hello my"`, `" Hello my name"`, `" Hello my name is"`, `" Hello my name is Claude."`.
* All events are [named events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent%5Fevents/Using%5Fserver-sent%5Fevents#named%5Fevents), rather than [data-only events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent%5Fevents/Using%5Fserver-sent%5Fevents#data-only%5Fmessages).
* Removed unnecessary `data: [DONE]` event.
* Removed legacy `exception` and `truncated` values in responses.
* `2023-01-01`: Initial release.
# Models
Claude is a family of state-of-the-art large language models developed by Anthropic. This guide introduces our models and compares their performance with legacy models.
* **Claude 3.5 Haiku**: Our fastest model. Text input, text output, 200K context window.
* **Claude 3.5 Sonnet**: Our most intelligent model. Text and image input, text output, 200K context window.
***
## Model names
| Model | Anthropic API | AWS Bedrock | GCP Vertex AI |
| ----------------- | --------------------------------------------------------- | ------------------------------------------- | ------------------------------- |
| Claude 3.5 Sonnet | `claude-3-5-sonnet-20241022` (`claude-3-5-sonnet-latest`) | `anthropic.claude-3-5-sonnet-20241022-v2:0` | `claude-3-5-sonnet-v2@20241022` |
| Claude 3.5 Haiku | `claude-3-5-haiku-20241022` (`claude-3-5-haiku-latest`) | `anthropic.claude-3-5-haiku-20241022-v1:0` | `claude-3-5-haiku@20241022` |
| Model | Anthropic API | AWS Bedrock | GCP Vertex AI |
| --------------- | ------------------------------------------------- | ----------------------------------------- | -------------------------- |
| Claude 3 Opus | `claude-3-opus-20240229` (`claude-3-opus-latest`) | `anthropic.claude-3-opus-20240229-v1:0` | `claude-3-opus@20240229` |
| Claude 3 Sonnet | `claude-3-sonnet-20240229` | `anthropic.claude-3-sonnet-20240229-v1:0` | `claude-3-sonnet@20240229` |
| Claude 3 Haiku | `claude-3-haiku-20240307` | `anthropic.claude-3-haiku-20240307-v1:0` | `claude-3-haiku@20240307` |
Models with the same snapshot date (e.g., 20240620) are identical across all platforms and do not change. The snapshot date in the model name ensures consistency and allows developers to rely on stable performance across different environments.
For convenience during development and testing, we offer "`-latest`" aliases for our models (e.g., `claude-3-5-sonnet-latest`). These aliases automatically point to the most recent snapshot of a given model. While useful for experimentation, we recommend using specific model versions (e.g., `claude-3-5-sonnet-20241022`) in production applications to ensure consistent behavior. When we release new model snapshots, we'll migrate the `-latest` alias to point to the new version (typically within a week of the new release). The `-latest` alias is subject to the same rate limits and pricing as the underlying model version it references.
### Model comparison table
To help you choose the right model for your needs, we've compiled a table comparing the key features and capabilities of each model in the Claude family:
| | Claude 3.5 Sonnet | Claude 3.5 Haiku | Claude 3 Opus | Claude 3 Sonnet | Claude 3 Haiku |
| :----------------------------------------------------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------------------------------- | :-------------------------------------------------------------------------- | :-------------------------------------------------------------------------- | :-------------------------------------------------------------------------- |
| **Description** | Our most intelligent model | Our fastest model | Powerful model for highly complex tasks | Balance of intelligence and speed | Fastest and most compact model for near-instant responsiveness |
| **Strengths** | Highest level of intelligence and capability | Intelligence at blazing speeds | Top-level intelligence, fluency, and understanding | Strong utility, balanced for scaled deployments | Quick and accurate targeted performance |
| **Multilingual** | Yes | Yes | Yes | Yes | Yes |
| **Vision** | Yes | No | Yes | Yes | Yes |
| **Message Batches API** | Yes | Yes | Yes | No | Yes |
| **API model name** | Upgraded version: `claude-3-5-sonnet-20241022`<br />Previous version: `claude-3-5-sonnet-20240620` | `claude-3-5-haiku-20241022` | `claude-3-opus-20240229` | `claude-3-sonnet-20240229` | `claude-3-haiku-20240307` |
| **Comparative latency** | Fast | Fastest | Moderately fast | Fast | Fastest |
| **Context window** | 200K | 200K | 200K | 200K | 200K |
| **Max output** | 8192 tokens | 8192 tokens | 4096 tokens | 4096 tokens | 4096 tokens |
| **Cost (Input / Output per MTok)** | \$3.00 / \$15.00 | \$1.00 / \$5.00 | \$15.00 / \$75.00 | \$3.00 / \$15.00 | \$0.25 / \$1.25 |
| **Training data cut-off** | Apr 2024 | July 2024 | Aug 2023 | Aug 2023 | Aug 2023 |
## Prompt and output performance
The Claude 3.5 family excels in:
* **Benchmark performance**: Top-tier results in reasoning, coding, multilingual tasks, long-context handling, honesty, and image processing. See the [Claude 3 model card](https://assets.anthropic.com/m/61e7d27f8c8f5919/original/Claude-3-Model-Card.pdf) for more information.
* **Engaging responses**: Claude 3 models are ideal for applications that require rich, human-like interactions.
* If you prefer more concise responses, you can adjust your prompts to guide the model toward the desired output length. Refer to our [prompt engineering guides](/en/docs/build-with-claude/prompt-engineering) for details.
* **Output quality**: When migrating from previous model generations to the Claude 3 family, you may notice larger improvements in overall performance.
***
## Legacy models
We recommend migrating to the Claude 3 family of models. However, we understand that some users may need time to transition from our legacy models:
* **Claude Instant 1.2**: A fast and efficient model predecessor of Claude Haiku.
* **Claude 2.0**: The strong-performing predecessor to Claude 3.
* **Claude 2.1**: An updated version of Claude 2 with improved accuracy and consistency.
These models do not have the vision capabilities of the Claude 3 family and are generally slower, less performant, and less capable.
The [model deprecation page](/en/docs/resources/model-deprecations) contains information on when legacy models will be deprecated.
***
## Legacy model comparison
To help you choose the right model for your needs, this table compares key features and capabilities.
| | Claude 2.1 | Claude 2 | Claude Instant 1.2 |
| :----------------------------------------------------------------------------- | :-------------------------------------------------------------------------- | :-------------------------------------------------------------------------- | :-------------------------------------------------------------------------- |
| **Description** | Updated version of Claude 2 with improved accuracy | Predecessor to Claude 3, offering strong all-round performance | Our cheapest small and fast model, a predecessor of Claude Haiku |
| **Strengths** | Legacy model - performs less well than Claude 3 models | Legacy model - performs less well than Claude 3 models | Legacy model - performs less well than Claude 3 models |
| **Multilingual** | Yes, with less coverage, understanding, and skill than Claude 3 | Yes, with less coverage, understanding, and skill than Claude 3 | Yes, with less coverage, understanding, and skill than Claude 3 |
| **Vision** | No | No | No |
| **API model name** | claude-2.1 | claude-2.0 | claude-instant-1.2 |
| **API format** | Messages & Text Completions API | Messages & Text Completions API | Messages & Text Completions API |
| **Comparative latency** | Slower than Claude 3 model of similar intelligence | Slower than Claude 3 model of similar intelligence | Slower than Claude 3 model of similar intelligence |
| **Context window** | 200K | 100K | 100K |
| **Max output** | 4096 tokens | 4096 tokens | 4096 tokens |
| **Cost (Input / Output per MTok)** | \$8.00 / \$24.00 | \$8.00 / \$24.00 | \$0.80 / \$2.40 |
| **Training data cut-off** | Early 2023 | Early 2023 | Early 2023 |
## Get started with Claude
If you're ready to start exploring what Claude can do for you, let's dive in! Whether you're a developer looking to integrate Claude into your applications or a user wanting to experience the power of AI firsthand, we've got you covered.
Looking to chat with Claude? Visit [claude.ai](http://www.claude.ai)!
* Explore Claude's capabilities and development flow.
* Learn how to make your first API call in minutes.
* Craft and test powerful prompts directly in your browser.
If you have any questions or need assistance, don't hesitate to reach out to our [support team](https://support.anthropic.com/) or consult the [Discord community](https://www.anthropic.com/discord).
# Security and compliance
# Content moderation
Content moderation is a critical aspect of maintaining a safe, respectful, and productive environment in digital applications. In this guide, we'll discuss how Claude can be used to moderate content within your digital application.
> Visit our [content moderation cookbook](https://github.com/anthropics/anthropic-cookbook/blob/main/misc/building%5Fmoderation%5Ffilter.ipynb) to see an example content moderation implementation using Claude.
This guide is focused on moderating user-generated content within your application. If you're looking for guidance on moderating interactions with Claude, please refer to our [guardrails guide](https://docs.anthropic.com/en/docs/test-and-evaluate/strengthen-guardrails/reduce-hallucinations).
## Before building with Claude
### Decide whether to use Claude for content moderation
Here are some key indicators that you should use an LLM like Claude instead of a traditional ML or rules-based approach for content moderation:
* **Cost and speed of implementation**: Traditional ML methods require significant engineering resources, ML expertise, and infrastructure costs. Human moderation systems incur even higher costs. With Claude, you can have a sophisticated moderation system up and running in a fraction of the time for a fraction of the price.
* **Semantic understanding with fast decisions**: Traditional ML approaches, such as bag-of-words models or simple pattern matching, often struggle to understand the tone, intent, and context of the content. While human moderation systems excel at understanding semantic meaning, they require time for content to be reviewed. Claude bridges the gap by combining semantic understanding with the ability to deliver moderation decisions quickly.
* **Consistent policy application**: By leveraging its advanced reasoning capabilities, Claude can interpret and apply complex moderation guidelines uniformly. This consistency helps ensure fair treatment of all content, reducing the risk of inconsistent or biased moderation decisions that can undermine user trust.
* **Adaptability to changing policies**: Once a traditional ML approach has been established, changing it is a laborious and data-intensive undertaking. On the other hand, as your product or customer needs evolve, Claude can easily adapt to changes or additions to moderation policies without extensive relabeling of training data.
* **Explainable decisions**: If you wish to provide users or regulators with clear explanations behind moderation decisions, Claude can generate detailed and coherent justifications. This transparency is important for building trust and ensuring accountability in content moderation practices.
* **Multilingual support**: Traditional ML approaches typically require separate models or extensive translation processes for each supported language. Human moderation requires hiring a workforce fluent in each supported language. Claude's multilingual capabilities allow it to classify content in various languages without the need for separate models or extensive translation processes, streamlining moderation for global customer bases.
* **Multimodal support**: Claude's multimodal capabilities allow it to analyze and interpret content across both text and images. This makes it a versatile tool for comprehensive content moderation in environments where different media types need to be evaluated together.

Anthropic has trained all Claude models to be honest, helpful and harmless. This may result in Claude moderating content deemed particularly dangerous (in line with our [Acceptable Use Policy](https://www.anthropic.com/legal/aup)), regardless of the prompt used. For example, an adult website that wants to allow users to post explicit sexual content may find that Claude still flags explicit content as requiring moderation, even if they specify in their prompt not to moderate explicit sexual content. We recommend reviewing our AUP in advance of building a moderation solution.
### Generate examples of content to moderate
Before developing a content moderation solution, first create examples of content that should be flagged and content that should not be flagged. Ensure that you include edge cases and challenging scenarios that may be difficult for a content moderation system to handle effectively. Afterwards, review your examples to create a well-defined list of moderation categories.
For instance, the examples generated by a social media platform might include the following:
```python
allowed_user_comments = [
'This movie was great, I really enjoyed it. The main actor really killed it!',
'I hate Mondays.',
'It is a great time to invest in gold!'
]
disallowed_user_comments = [
'Delete this post now or you better hide. I am coming after you and your family.',
'Stay away from the 5G cellphones!! They are using 5G to control you.',
'Congratulations! You have won a $1,000 gift card. Click here to claim your prize!'
]
# Sample user comments to test the content moderation
user_comments = allowed_user_comments + disallowed_user_comments
# List of categories considered unsafe for content moderation
unsafe_categories = [
'Child Exploitation',
'Conspiracy Theories',
'Hate',
'Indiscriminate Weapons',
'Intellectual Property',
'Non-Violent Crimes',
'Privacy',
'Self-Harm',
'Sex Crimes',
'Sexual Content',
'Specialized Advice',
'Violent Crimes'
]
```
Effectively moderating these examples requires a nuanced understanding of language. In the comment, `This movie was great, I really enjoyed it. The main actor really killed it!`, the content moderation system needs to recognize that "killed it" is a metaphor, not an indication of actual violence. Conversely, despite the lack of explicit mentions of violence, the comment `Delete this post now or you better hide. I am coming after you and your family.` should be flagged by the content moderation system.
The `unsafe_categories` list can be customized to fit your specific needs. For example, if you wish to prevent minors from creating content on your website, you could append "Underage Posting" to the list.
***
## How to moderate content using Claude
### Select the right Claude model
When selecting a model, it’s important to consider the size of your data. If costs are a concern, a smaller model like Claude 3 Haiku is an excellent choice due to its cost-effectiveness. Below is an estimate of the cost to moderate text for a social media platform that receives one billion posts per month:
* **Content size**
* Posts per month: 1bn
* Characters per post: 100
* Total characters: 100bn
* **Estimated tokens**
* Input tokens: 28.6bn (assuming 1 token per 3.5 characters)
* Percentage of messages flagged: 3%
* Output tokens per flagged message: 50
* Total output tokens: 1.5bn
* **Claude 3 Haiku estimated cost**
* Input token cost: 28,600 MTok \* \$0.25/MTok = \$7,150
* Output token cost: 1,500 MTok \* \$1.25/MTok = \$1,875
* Monthly cost: \$7,150 + \$1,875 = \$9,025
* **Claude 3.5 Sonnet estimated cost**
* Input token cost: 28,600 MTok \* \$3.00/MTok = \$85,800
* Output token cost: 1,500 MTok \* \$15.00/MTok = \$22,500
* Monthly cost: \$85,800 + \$22,500 = \$108,300
Actual costs may differ from these estimates. These estimates are based on the prompt highlighted in the section on [batch processing](#consider-batch-processing). Output tokens can be reduced even further by removing the `explanation` field from the response.
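To sanity-check these numbers, the arithmetic can be reproduced directly from the stated assumptions (prices are taken from the model comparison table earlier in this document):

```python
# Reproduce the moderation cost estimate from the assumptions above
posts_per_month = 1_000_000_000
chars_per_post = 100
chars_per_token = 3.5

input_tokens = posts_per_month * chars_per_post / chars_per_token  # ~28.6 billion
output_tokens = posts_per_month * 0.03 * 50                        # 1.5 billion

def monthly_cost(input_price_per_mtok, output_price_per_mtok):
    """Prices are in USD per million tokens (MTok)."""
    return (input_tokens / 1e6) * input_price_per_mtok + (output_tokens / 1e6) * output_price_per_mtok

print(f"Claude 3 Haiku:    ${monthly_cost(0.25, 1.25):,.0f}")   # roughly $9,000 per month
print(f"Claude 3.5 Sonnet: ${monthly_cost(3.00, 15.00):,.0f}")  # roughly $108,000 per month
```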
### Build a strong prompt
In order to use Claude for content moderation, Claude must understand the moderation requirements of your application. Let’s start by writing a prompt that allows you to define your moderation needs:
```python
import anthropic
import json
# Initialize the Anthropic client
client = anthropic.Anthropic()
def moderate_message(message, unsafe_categories):
# Convert the list of unsafe categories into a string, with each category on a new line
unsafe_category_str = '\n'.join(unsafe_categories)
# Construct the prompt for Claude, including the message and unsafe categories
assessment_prompt = f"""
Determine whether the following message warrants moderation,
based on the unsafe categories outlined below.
Message:
{message}
Unsafe Categories:
{unsafe_category_str}
Respond with ONLY a JSON object, using the format below:
{{
"violation": ,
"categories": [Comma-separated list of violated categories],
"explanation": [Optional. Only include if there is a violation.]
}}"""
# Send the request to Claude for content moderation
response = client.messages.create(
model="claude-3-haiku-20240307", # Using the Haiku model for lower costs
max_tokens=200,
temperature=0, # Use 0 temperature for increased consistency
messages=[
{"role": "user", "content": assessment_prompt}
]
)
# Parse the JSON response from Claude
assessment = json.loads(response.content[0].text)
# Extract the violation status from the assessment
contains_violation = assessment['violation']
# If there's a violation, get the categories and explanation; otherwise, use empty defaults
violated_categories = assessment.get('categories', []) if contains_violation else []
explanation = assessment.get('explanation') if contains_violation else None
return contains_violation, violated_categories, explanation
# Process each comment and print the results
for comment in user_comments:
print(f"\nComment: {comment}")
violation, violated_categories, explanation = moderate_message(comment, unsafe_categories)
if violation:
print(f"Violated Categories: {', '.join(violated_categories)}")
print(f"Explanation: {explanation}")
else:
print("No issues detected.")
```
In this example, the `moderate_message` function contains an assessment prompt that includes the unsafe content categories and the message we wish to evaluate. The prompt asks Claude to assess whether the message should be moderated, based on the unsafe categories we defined.
The model's assessment is then parsed to determine if there is a violation. If there is a violation, Claude also returns a list of violated categories, as well as an explanation as to why the message is unsafe.
### Evaluate your prompt
Content moderation is a classification problem. Thus, you can use the same techniques outlined in our [classification cookbook](https://github.com/anthropics/anthropic-cookbook/blob/main/skills/classification/guide.ipynb) to determine the accuracy of your content moderation system.
One additional consideration is that instead of treating content moderation as a binary classification problem, you may instead create multiple categories to represent various risk levels. Creating multiple risk levels allows you to adjust the aggressiveness of your moderation. For example, you might want to automatically block user queries that are deemed high risk, while users with many medium risk queries are flagged for human review.
```python
import anthropic
import json
# Initialize the Anthropic client
client = anthropic.Anthropic()
def assess_risk_level(message, unsafe_categories):
# Convert the list of unsafe categories into a string, with each category on a new line
unsafe_category_str = '\n'.join(unsafe_categories)
# Construct the prompt for Claude, including the message, unsafe categories, and risk level definitions
assessment_prompt = f"""
Assess the risk level of the following message,
based on the unsafe categories listed below.
Message:
{message}
Unsafe Categories:
{unsafe_category_str}
Assign a risk level based on your confidence that the user's message should be moderated
based on the defined unsafe categories:
0 - No risk
1 - Low risk
2 - Medium risk
3 - High risk
Respond with ONLY a JSON object, using the format below:
{{
"risk_level": ,
"categories": [Comma-separated list of violated categories],
"explanation":
}}"""
# Send the request to Claude for risk assessment
response = client.messages.create(
model="claude-3-haiku-20240307", # Using the Haiku model for lower costs
max_tokens=200,
temperature=0, # Use 0 temperature for increased consistency
messages=[
{"role": "user", "content": assessment_prompt}
]
)
# Parse the JSON response from Claude
assessment = json.loads(response.content[0].text)
# Extract the risk level, violated categories, and explanation from the assessment
risk_level = assessment["risk_level"]
violated_categories = assessment["categories"]
explanation = assessment.get("explanation")
return risk_level, violated_categories, explanation
# Process each comment and print the results
for comment in user_comments:
print(f"\nComment: {comment}")
risk_level, violated_categories, explanation = assess_risk_level(comment, unsafe_categories)
print(f"Risk Level: {risk_level}")
if violated_categories:
print(f"Violated Categories: {', '.join(violated_categories)}")
if explanation:
print(f"Explanation: {explanation}")
```
This code implements an `assess_risk_level` function that uses Claude to evaluate the risk level of a message. The function accepts a message and a list of unsafe categories as inputs.
Within the function, a prompt is generated for Claude, including the message to be assessed, the unsafe categories, and specific instructions for evaluating the risk level. The prompt instructs Claude to respond with a JSON object that includes the risk level, the violated categories, and an optional explanation.
This approach enables flexible content moderation by assigning risk levels. It can be seamlessly integrated into a larger system to automate content filtering or flag comments for human review based on their assessed risk level. For instance, when executing this code, the comment `Delete this post now or you better hide. I am coming after you and your family.` is identified as high risk due to its dangerous threat. Conversely, the comment `Stay away from the 5G cellphones!! They are using 5G to control you.` is categorized as medium risk.
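As an illustration, here is a minimal sketch of how messages might be routed by their assessed risk level; the thresholds and action names are illustrative, not part of the guide's code:

```python
def route_comment(comment, unsafe_categories):
    """Route a comment based on its assessed risk level (illustrative thresholds)."""
    risk_level, violated_categories, explanation = assess_risk_level(comment, unsafe_categories)
    if risk_level >= 3:
        return "block"          # high risk: block automatically
    elif risk_level == 2:
        return "human_review"   # medium risk: queue for human review
    return "allow"              # low or no risk: publish as-is

for comment in user_comments:
    print(f"{route_comment(comment, unsafe_categories)}: {comment}")
```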
### Deploy your prompt
Once you are confident in the quality of your solution, it's time to deploy it to production. Here are some best practices to follow when using content moderation in production:
1. **Provide clear feedback to users:** When user input is blocked or a response is flagged due to content moderation, provide informative and constructive feedback to help users understand why their message was flagged and how they can rephrase it appropriately. In the coding examples above, this is done through the `explanation` tag in the Claude response.
2. **Analyze moderated content:** Keep track of the types of content being flagged by your moderation system to identify trends and potential areas for improvement.
3. **Continuously evaluate and improve:** Regularly assess the performance of your content moderation system using metrics such as precision and recall tracking. Use this data to iteratively refine your moderation prompts, keywords, and assessment criteria.
***
## Improve performance
In complex scenarios, it may be helpful to consider additional strategies to improve performance beyond standard [prompt engineering techniques](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview). Here are some advanced strategies:
### Define topics and provide examples
In addition to listing the unsafe categories in the prompt, further improvements can be made by providing definitions and phrases related to each category.
```python
import anthropic
import json
# Initialize the Anthropic client
client = anthropic.Anthropic()
# Dictionary of categories considered unsafe for content moderation, with their definitions
unsafe_category_definitions = {
'Child Exploitation': 'Content that depicts child nudity or that enables, encourages, excuses, or depicts the sexual abuse of children.',
'Conspiracy Theories': """Content that promotes or endorses unfounded, false, or misleading theories about events, situations, or individuals,
which can incite harm or undermine public trust in institutions or facts.""",
'Hate': """Content that is hateful toward people on the basis of their protected characteristics
(race, color, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease),
or content that perpetuates negative stereotypes.""",
'Indiscriminate Weapons': 'Content that enables the creation of weapons of mass destruction.',
'Intellectual Property': 'Content that violates the intellectual property rights of third parties.',
'Non-Violent Crimes': 'Content that enables, encourages, or excuses the commission of non-violent crimes.',
'Privacy': 'Content that contains sensitive, personal information about private individuals.',
'Self-Harm': 'Content that enables, encourages, or excuses acts of intentional self-harm.',
'Sex Crimes': 'Content that enables, encourages, or excuses the commission of sex crimes.',
'Sexual Content': 'Content that contains erotic descriptions or explicit depictions of sex acts.',
'Specialized Advice': """Content that contains financial, medical, or legal advice.
Financial advice includes guidance on investments, stocks, bonds, or any financial planning.""",
'Violent Crimes': 'Content that enables, encourages, or excuses the commission of violent crimes.',
}
def moderate_message_with_definitions(message, unsafe_category_definitions):
# Format unsafe categories string, with each category and its definition on a new line
unsafe_category_str = '\n'.join(f'{category}: {definition}'
for category, definition in unsafe_category_definitions.items()
)
# Construct the prompt for Claude, including the message and unsafe categories
assessment_prompt = f"""Determine whether the following message warrants moderation, based on the unsafe categories outlined below.
Message:
{message}
Unsafe Categories and Their Definitions:
{unsafe_category_str}
It's important that you remember all unsafe categories and their definitions.
Respond with ONLY a JSON object, using the format below:
{{
"violation": ,
"categories": [Comma-separated list of violated categories],
"explanation": [Optional. Only include if there is a violation.]
}}"""
# Send the request to Claude for content moderation
response = client.messages.create(
model="claude-3-haiku-20240307", # Using the Haiku model for lower costs
max_tokens=200,
temperature=0, # Use 0 temperature for increased consistency
messages=[
{"role": "user", "content": assessment_prompt}
]
)
# Parse the JSON response from Claude
assessment = json.loads(response.content[0].text)
# Extract the violation status from the assessment
contains_violation = assessment['violation']
# If there's a violation, get the categories and explanation; otherwise, use empty defaults
violated_categories = assessment.get('categories', []) if contains_violation else []
explanation = assessment.get('explanation') if contains_violation else None
return contains_violation, violated_categories, explanation
# Process each comment and print the results
for comment in user_comments:
print(f"\nComment: {comment}")
violation, violated_categories, explanation = moderate_message_with_definitions(comment, unsafe_category_definitions)
if violation:
print(f"Violated Categories: {', '.join(violated_categories)}")
print(f"Explanation: {explanation}")
else:
print("No issues detected.")
```
The `moderate_message_with_definitions` function expands upon the earlier `moderate_message` function by allowing each unsafe category to be paired with a detailed definition. This occurs in the code by replacing the `unsafe_categories` list from the original function with an `unsafe_category_definitions` dictionary. This dictionary maps each unsafe category to its corresponding definition. Both the category names and their definitions are included in the prompt.
Notably, the definition for the `Specialized Advice` category now specifies the types of financial advice that should be prohibited. As a result, the comment `It's a great time to invest in gold!`, which previously passed the `moderate_message` assessment, now triggers a violation.
### Consider batch processing
To reduce costs in situations where real-time moderation isn't necessary, consider moderating messages in batches. Include multiple messages within the prompt's context, and ask Claude to assess which messages should be moderated.
```python
import anthropic
import json
# Initialize the Anthropic client
client = anthropic.Anthropic()
def batch_moderate_messages(messages, unsafe_categories):
# Convert the list of unsafe categories into a string, with each category on a new line
unsafe_category_str = '\n'.join(unsafe_categories)
# Format messages string, with each message wrapped in XML-like tags and given an ID
messages_str = '\n'.join([f'<message id={idx}>{msg}</message>' for idx, msg in enumerate(messages)])
# Construct the prompt for Claude, including the messages and unsafe categories
assessment_prompt = f"""Determine the messages to moderate, based on the unsafe categories outlined below.
Messages:
{messages_str}
Unsafe Categories:
{unsafe_category_str}
Respond with ONLY a JSON object, using the format below:
{{
"violations": [
{{
"id": ,
"categories": [list of violated categories],
"explanation":
}},
...
]
}}
Important Notes:
- Remember to analyze every message for a violation.
- Select any number of violations that reasonably apply."""
# Send the request to Claude for content moderation
response = client.messages.create(
model="claude-3-haiku-20240307", # Using the Haiku model for lower costs
max_tokens=2048, # Increased max token count to handle batches
temperature=0, # Use 0 temperature for increased consistency
messages=[
{"role": "user", "content": assessment_prompt}
]
)
# Parse the JSON response from Claude
assessment = json.loads(response.content[0].text)
return assessment
# Process the batch of comments and get the response
response_obj = batch_moderate_messages(user_comments, unsafe_categories)
# Print the results for each detected violation
for violation in response_obj['violations']:
print(f"""Comment: {user_comments[violation['id']]}
Violated Categories: {', '.join(violation['categories'])}
Explanation: {violation['explanation']}
""")
```
In this example, the `batch_moderate_messages` function handles the moderation of an entire batch of messages with a single Claude API call.
Inside the function, a prompt is created that includes the list of messages to evaluate, the defined unsafe content categories, and their descriptions. The prompt directs Claude to return a JSON object listing all messages that contain violations. Each message in the response is identified by its id, which corresponds to the message's position in the input list.
Keep in mind that finding the optimal batch size for your specific needs may require some experimentation. While larger batch sizes can lower costs, they might also lead to a slight decrease in quality. Additionally, you may need to increase the `max_tokens` parameter in the Claude API call to accommodate longer responses. For details on the maximum number of tokens your chosen model can output, refer to the [model comparison page](https://docs.anthropic.com/en/docs/about-claude/models#model-comparison).
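If you want to experiment with batch size, one option is to split the comments into fixed-size chunks, as in the sketch below; the `moderate_in_chunks` helper is illustrative and not part of the guide's code:

```python
def moderate_in_chunks(comments, unsafe_categories, batch_size=25):
    """Moderate comments in fixed-size batches and collect all violations."""
    all_violations = []
    for start in range(0, len(comments), batch_size):
        chunk = comments[start:start + batch_size]
        assessment = batch_moderate_messages(chunk, unsafe_categories)
        for violation in assessment['violations']:
            # Map chunk-local ids back to positions in the full comment list
            all_violations.append({**violation, 'id': start + int(violation['id'])})
    return all_violations

violations = moderate_in_chunks(user_comments, unsafe_categories, batch_size=3)
for violation in violations:
    print(f"Comment: {user_comments[violation['id']]}")
```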
* View a fully implemented code-based example of how to use Claude for content moderation.
* Explore our guardrails guide for techniques to moderate interactions with Claude.
# Customer support agent
This guide walks through how to leverage Claude's advanced conversational capabilities to handle customer inquiries in real time, providing 24/7 support, reducing wait times, and managing high support volumes with accurate responses and positive interactions.
## Before building with Claude
### Decide whether to use Claude for support chat
Here are some key indicators that you should employ an LLM like Claude to automate portions of your customer support process:
* Claude excels at handling a large number of similar questions efficiently, freeing up human agents for more complex issues.
* Claude can quickly retrieve, process, and combine information from vast knowledge bases, while human agents may need time to research or consult multiple sources.
* Claude can provide round-the-clock support without fatigue, whereas staffing human agents for continuous coverage can be costly and challenging.
* Claude can handle sudden increases in query volume without the need for hiring and training additional staff.
* You can instruct Claude to consistently represent your brand's tone and values, whereas human agents may vary in their communication styles.
Some considerations for choosing Claude over other LLMs:
* You prioritize natural, nuanced conversation: Claude's sophisticated language understanding allows for more natural, context-aware conversations that feel more human-like than chats with other LLMs.
* You often receive complex and open-ended queries: Claude can handle a wide range of topics and inquiries without generating canned responses or requiring extensive programming of permutations of user utterances.
* You need scalable multilingual support: Claude's multilingual capabilities allow it to engage in conversations in over 200 languages without the need for separate chatbots or extensive translation processes for each supported language.
### Define your ideal chat interaction
Outline an ideal customer interaction to define how and when you expect the customer to interact with Claude. This outline will help to determine the technical requirements of your solution.
Here is an example chat interaction for car insurance customer support:
* **Customer**: Initiates support chat experience
* **Claude**: Warmly greets customer and initiates conversation
* **Customer**: Asks about insurance for their new electric car
* **Claude**: Provides relevant information about electric vehicle coverage
* **Customer**: Asks questions related to unique needs for electric vehicle insurances
* **Claude**: Responds with accurate and informative answers and provides links to the sources
* **Customer**: Asks off-topic questions unrelated to insurance or cars
* **Claude**: Clarifies it does not discuss unrelated topics and steers the user back to car insurance
* **Customer**: Expresses interest in an insurance quote
* **Claude**: Asks a set of questions to determine the appropriate quote, adapting to the customer's responses
* **Claude**: Sends a request to use the quote generation API tool along with necessary information collected from the user
* **Claude**: Receives the response information from the API tool use, synthesizes the information into a natural response, and presents the provided quote to the user
* **Customer**: Asks follow up questions
* **Claude**: Answers follow up questions as needed
* **Claude**: Guides the customer to the next steps in the insurance process and closes out the conversation
In the real example that you write for your own use case, you might find it useful to write out the actual words in this interaction so that you can also get a sense of the ideal tone, response length, and level of detail you want Claude to have.
### Break the interaction into unique tasks
Customer support chat is a collection of multiple different tasks, from question answering to information retrieval to taking action on requests, wrapped up in a single customer interaction. Before you start building, break down your ideal customer interaction into every task you want Claude to be able to perform. This ensures you can prompt and evaluate Claude for every task, and gives you a good sense of the range of interactions you need to account for when writing test cases.
Customers sometimes find it helpful to visualize this as an interaction flowchart of possible conversation inflection points depending on user requests.
Here are the key tasks associated with the example insurance interaction above:
1. Greeting and general guidance
* Warmly greet the customer and initiate conversation
* Provide general information about the company and interaction
2. Product Information
* Provide information about electric vehicle coverage
This will require that Claude have the necessary information in its context, and might imply that a [RAG integration](https://github.com/anthropics/anthropic-cookbook/blob/main/skills/retrieval_augmented_generation/guide.ipynb) is necessary.
* Answer questions related to unique electric vehicle insurance needs
* Answer follow-up questions about the quote or insurance details
* Offer links to sources when appropriate
3. Conversation Management
* Stay on topic (car insurance)
* Redirect off-topic questions back to relevant subjects
4. Quote Generation
* Ask appropriate questions to determine quote eligibility
* Adapt questions based on customer responses
* Submit collected information to quote generation API
* Present the provided quote to the customer
### Establish success criteria
Work with your support team to [define clear success criteria](https://docs.anthropic.com/en/docs/build-with-claude/define-success) and write [detailed evaluations](https://docs.anthropic.com/en/docs/build-with-claude/develop-tests) with measurable benchmarks and goals.
Here are criteria and benchmarks that can be used to evaluate how successfully Claude performs the defined tasks:
* **Query comprehension accuracy**: This metric evaluates how accurately Claude understands customer inquiries across various topics. Measure this by reviewing a sample of conversations and assessing whether Claude has the correct interpretation of customer intent, critical next steps, what successful resolution looks like, and more. Aim for a comprehension accuracy of 95% or higher.
* **Response relevance**: This assesses how well Claude's response addresses the customer's specific question or issue. Evaluate a set of conversations and rate the relevance of each response (using LLM-based grading for scale). Target a relevance score of 90% or above.
* **Response accuracy**: Assess the correctness of general company and product information provided to the user, based on the information provided to Claude in context. Target 100% accuracy in this introductory information.
* **Citation provision**: Track the frequency and relevance of links or sources offered. Target providing relevant sources in 80% of interactions where additional information could be beneficial.
* **Topic adherence**: Measure how well Claude stays on topic, such as the topic of car insurance in our example implementation. Aim for 95% of responses to be directly related to car insurance or the customer's specific query.
* **Content generation effectiveness**: Measure how successful Claude is at determining when to generate informational content and how relevant that content is. For example, in our implementation, we would be determining how well Claude understands when to generate a quote and how accurate that quote is. Target 100% accuracy, as this is vital information for a successful customer interaction.
* **Escalation accuracy**: This measures Claude's ability to recognize when a query needs human intervention and escalate appropriately. Track the percentage of correctly escalated conversations versus those that should have been escalated but weren't. Aim for an escalation accuracy of 95% or higher.
Here are criteria and benchmarks that can be used to evaluate the business impact of employing Claude for support:
* **Customer sentiment**: This assesses Claude's ability to maintain or improve customer sentiment throughout the conversation. Use sentiment analysis tools to measure sentiment at the beginning and end of each conversation. Aim for maintained or improved sentiment in 90% of interactions.
* **Deflection rate**: The percentage of customer inquiries successfully handled by the chatbot without human intervention. Typically aim for a 70-80% deflection rate, depending on the complexity of inquiries.
* **Customer satisfaction (CSAT)**: A measure of how satisfied customers are with their chatbot interaction. Usually done through post-interaction surveys. Aim for a CSAT score of 4 out of 5 or higher.
* **Average handle time (AHT)**: The average time it takes for the chatbot to resolve an inquiry. This varies widely based on the complexity of issues, but generally, aim for a lower AHT compared to human agents.
## How to implement Claude as a customer service agent
### Choose the right Claude model
The choice of model depends on the trade-offs between cost, accuracy, and response time.
For customer support chat, `claude-3-5-sonnet-20241022` strikes a good balance of intelligence, latency, and cost. However, for conversation flows that chain multiple prompts, including RAG, tool use, and/or long-context prompts, `claude-3-haiku-20240307` may be more suitable for optimizing latency.
### Build a strong prompt
Using Claude for customer support requires Claude having enough direction and context to respond appropriately, while having enough flexibility to handle a wide range of customer inquiries.
Let's start by writing the elements of a strong prompt, starting with a system prompt:
```python
IDENTITY = """You are Eva, a friendly and knowledgeable AI assistant for Acme Insurance
Company. Your role is to warmly welcome customers and provide information on
Acme's insurance offerings, which include car insurance and electric car
insurance. You can also help customers get quotes for their insurance needs."""
```
While you may be tempted to put all your information inside a system prompt as a way to separate instructions from the user conversation, Claude actually works best with the bulk of its prompt content written inside the first `User` turn (with the only exception being role prompting). Read more at [Giving Claude a role with a system prompt](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/system-prompts).
It's best to break down complex prompts into subsections and write one part at a time. For each task, you might find greater success by following a step-by-step process to define the parts of the prompt Claude needs to do the task well. For this car insurance customer support example, we'll write all the parts of the prompt piecemeal, starting with the "Greeting and general guidance" task. This also makes debugging your prompt easier, as you can more quickly adjust individual parts of the overall prompt.
We'll put all of these pieces in a file called `config.py`.
```python
STATIC_GREETINGS_AND_GENERAL = """
Acme Auto Insurance: Your Trusted Companion on the Road
About:
At Acme Insurance, we understand that your vehicle is more than just a mode of transportation—it's your ticket to life's adventures.
Since 1985, we've been crafting auto insurance policies that give drivers the confidence to explore, commute, and travel with peace of mind.
Whether you're navigating city streets or embarking on cross-country road trips, Acme is there to protect you and your vehicle.
Our innovative auto insurance policies are designed to adapt to your unique needs, covering everything from fender benders to major collisions.
With Acme's award-winning customer service and swift claim resolution, you can focus on the joy of driving while we handle the rest.
We're not just an insurance provider—we're your co-pilot in life's journeys.
Choose Acme Auto Insurance and experience the assurance that comes with superior coverage and genuine care. Because at Acme, we don't just
insure your car—we fuel your adventures on the open road.
Note: We also offer specialized coverage for electric vehicles, ensuring that drivers of all car types can benefit from our protection.
Acme Insurance offers the following products:
- Car insurance
- Electric car insurance
- Two-wheeler insurance
Business hours: Monday-Friday, 9 AM - 5 PM EST
Customer service number: 1-800-123-4567
"""
```
We'll then do the same for our car insurance and electric car insurance information.
```python
STATIC_CAR_INSURANCE="""
Car Insurance Coverage:
Acme's car insurance policies typically cover:
1. Liability coverage: Pays for bodily injury and property damage you cause to others.
2. Collision coverage: Pays for damage to your car in an accident.
3. Comprehensive coverage: Pays for damage to your car from non-collision incidents.
4. Medical payments coverage: Pays for medical expenses after an accident.
5. Uninsured/underinsured motorist coverage: Protects you if you're hit by a driver with insufficient insurance.
Optional coverages include:
- Rental car reimbursement
- Roadside assistance
- New car replacement
"""
STATIC_ELECTRIC_CAR_INSURANCE="""
Electric Car Insurance:
Our specialized electric car insurance goes beyond traditional auto coverage,
offering tailored protection for your high-tech investment. In addition to standard coverages, we offer:
- Safeguarding expensive battery systems
- Covering home charging equipment
- Adapting to the rapidly evolving world of electric vehicles
Whether you're driving a Rivian, Nissan Leaf, Chevrolet Bolt,
or any other electric vehicle, our knowledgeable agents are ready to craft a policy
that gives you the confidence to enjoy every electron-powered mile.
"""
```
Now that we have our static content, let's add at least 4-5 sample "good" interactions to guide Claude's responses. These examples should be representative of your ideal customer interaction and can include guardrails, tool calls, etc.
```python
EXAMPLES="""
Here are a few examples of how you can interact with customers:
H: Hi, do you offer commercial insurance for small businesses?
A: Ah! Unfortunately, we don't offer commercial insurance at this time.
However, we do provide a range of personal insurance products including car insurance and electric car insurance.
Would you like to know more about these offerings?
H: Hi, what are your business hours?
A: Our business hours are Monday to Friday, 9 AM to 5 PM EST.
If you have any questions or need assistance, feel free to reach out during those times.
How else can I assist you today?
H: I'm shopping for car insurance. What does Acme offer?
A: Great choice! Acme's car insurance policies typically include:
- Liability coverage: Protects you if you're at fault in an accident
- Collision coverage: Pays for damage to your car in an accident
- Comprehensive coverage: Covers non-collision incidents like theft or vandalism
- Medical payments coverage: Pays for medical expenses after an accident
- Uninsured/underinsured motorist coverage: Protects you from drivers with insufficient insurance
We also offer optional coverages like rental car reimbursement, roadside assistance, and new car replacement.
Would you like more details on any of these coverages, or do you have specific questions about insuring your vehicle?
H: I'd like to hear about your electric car insurance.
A: Certainly! Our electric car insurance is specially designed to meet the unique needs of electric vehicle owners.
In addition to all the standard coverages we offer for regular cars, our electric car policies include:
- Battery coverage: Protection against damage or degradation of your vehicle's expensive battery system
- Tech package: Covers repairs or replacement of advanced electronics and onboard computer systems
- Charging station coverage: Protects both home and portable charging equipment
- Cybersecurity protection: Coverage against hacking or digital threats
- Specialized roadside assistance: Includes flatbed towing to protect the vehicle's undercarriage
We cover a wide range of electric vehicles, including popular models from Tesla, Rivian, Nissan, Chevrolet, and more.
Is there a specific electric vehicle you're looking to insure, or would you like more details on any of these coverages?
H: I'd like to get a quote for my car insurance.
A: Certainly! I'd be happy to help you get a quote for your car insurance.
To provide you with an accurate quote, I'll need to collect some information about your vehicle and the primary driver.
Let's start with the basics:
1. What is the make and model of your vehicle?
2. What year was it manufactured?
3. Approximately how many miles have you driven?
4. What is the age of the primary driver?
Once you provide this information, I'll use our quoting tool to generate a personalized insurance quote for you.
"""
```
You will also want to include any important instructions outlining Do's and Don'ts for how Claude should interact with the customer.
This may draw from brand guardrails or support policies.
```python
ADDITIONAL_GUARDRAILS = """Please adhere to the following guardrails:
1. Only provide information about insurance types listed in our offerings.
2. If asked about an insurance type we don't offer, politely state
that we don't provide that service.
3. Do not speculate about future product offerings or company plans.
4. Do not make promises or enter into agreements you are not authorized to make.
You only provide information and guidance.
5. Do not mention any competitor's products or services.
"""
```
Now let’s combine all these sections into a single string to use as our prompt.
```python
TASK_SPECIFIC_INSTRUCTIONS = ' '.join([
STATIC_GREETINGS_AND_GENERAL,
STATIC_CAR_INSURANCE,
STATIC_ELECTRIC_CAR_INSURANCE,
EXAMPLES,
ADDITIONAL_GUARDRAILS,
])
```
### Add dynamic and agentic capabilities with tool use
Claude is capable of taking actions and retrieving information dynamically using client-side tool use functionality. Start by listing any external tools or APIs the prompt should utilize.
For this example, we will start with one tool for calculating the quote.
As a reminder, this tool definition does not perform the actual calculation; it simply signals to the application that a tool should be called with the specified arguments.
Example insurance quote calculator:
```python
import time

TOOLS = [{
    "name": "get_quote",
    "description": "Calculate the insurance quote based on user input. Returned value is per month premium.",
    "input_schema": {
        "type": "object",
        "properties": {
            "make": {"type": "string", "description": "The make of the vehicle."},
            "model": {"type": "string", "description": "The model of the vehicle."},
            "year": {"type": "integer", "description": "The year the vehicle was manufactured."},
            "mileage": {"type": "integer", "description": "The mileage on the vehicle."},
            "driver_age": {"type": "integer", "description": "The age of the primary driver."}
        },
        "required": ["make", "model", "year", "mileage", "driver_age"]
    }
}]

def get_quote(make, model, year, mileage, driver_age):
    """Returns the premium per month in USD."""
    # You can call an HTTP endpoint or a database to get the quote.
    # Here, we simulate a delay of 1 second and return a fixed quote of 100.
    time.sleep(1)
    return 100
```
### Deploy your prompts
It's hard to know how well your prompt works without deploying it in a test production setting and [running evaluations](https://docs.anthropic.com/en/docs/build-with-claude/develop-tests), so let's build a small application using our prompt, the Anthropic SDK, and Streamlit for a user interface.
In a file called `chatbot.py`, start by setting up the ChatBot class, which will encapsulate the interactions with the Anthropic SDK.
The class should have two main methods: `generate_message` and `process_user_input`.
```python
from anthropic import Anthropic
from config import IDENTITY, TOOLS, MODEL, get_quote
from dotenv import load_dotenv

load_dotenv()

class ChatBot:
    def __init__(self, session_state):
        self.anthropic = Anthropic()
        self.session_state = session_state

    def generate_message(
        self,
        messages,
        max_tokens,
    ):
        try:
            response = self.anthropic.messages.create(
                model=MODEL,
                system=IDENTITY,
                max_tokens=max_tokens,
                messages=messages,
                tools=TOOLS,
            )
            return response
        except Exception as e:
            return {"error": str(e)}

    def process_user_input(self, user_input):
        self.session_state.messages.append({"role": "user", "content": user_input})

        response_message = self.generate_message(
            messages=self.session_state.messages,
            max_tokens=2048,
        )

        if "error" in response_message:
            return f"An error occurred: {response_message['error']}"

        if response_message.content[-1].type == "tool_use":
            tool_use = response_message.content[-1]
            func_name = tool_use.name
            func_params = tool_use.input
            tool_use_id = tool_use.id

            result = self.handle_tool_use(func_name, func_params)
            self.session_state.messages.append(
                {"role": "assistant", "content": response_message.content}
            )
            self.session_state.messages.append({
                "role": "user",
                "content": [{
                    "type": "tool_result",
                    "tool_use_id": tool_use_id,
                    "content": f"{result}",
                }],
            })

            follow_up_response = self.generate_message(
                messages=self.session_state.messages,
                max_tokens=2048,
            )

            if "error" in follow_up_response:
                return f"An error occurred: {follow_up_response['error']}"

            response_text = follow_up_response.content[0].text
            self.session_state.messages.append(
                {"role": "assistant", "content": response_text}
            )
            return response_text

        elif response_message.content[0].type == "text":
            response_text = response_message.content[0].text
            self.session_state.messages.append(
                {"role": "assistant", "content": response_text}
            )
            return response_text

        else:
            raise Exception("An error occurred: Unexpected response type")

    def handle_tool_use(self, func_name, func_params):
        if func_name == "get_quote":
            premium = get_quote(**func_params)
            return f"Quote generated: ${premium:.2f} per month"

        raise Exception("An unexpected tool was used")
```
### Build your user interface
Deploy this code with Streamlit using a `main()` method that sets up a Streamlit-based chat interface.
We'll do this in a file called `app.py`.
```python
import streamlit as st
from chatbot import ChatBot
from config import TASK_SPECIFIC_INSTRUCTIONS

def main():
    st.title("Chat with Eva, Acme Insurance Company's Assistant🤖")

    if "messages" not in st.session_state:
        st.session_state.messages = [
            {"role": "user", "content": TASK_SPECIFIC_INSTRUCTIONS},
            {"role": "assistant", "content": "Understood"},
        ]

    chatbot = ChatBot(st.session_state)

    # Display user and assistant messages, skipping the first two
    for message in st.session_state.messages[2:]:
        # ignore tool use blocks
        if isinstance(message["content"], str):
            with st.chat_message(message["role"]):
                st.markdown(message["content"])

    if user_msg := st.chat_input("Type your message here..."):
        st.chat_message("user").markdown(user_msg)

        with st.chat_message("assistant"):
            with st.spinner("Eva is thinking..."):
                response_placeholder = st.empty()
                full_response = chatbot.process_user_input(user_msg)
                response_placeholder.markdown(full_response)

if __name__ == "__main__":
    main()
```
Run the program with:
```bash
streamlit run app.py
```
### Evaluate your prompts
Prompting often requires testing and optimization for it to be production ready. To determine the readiness of your solution, evaluate the chatbot performance using a systematic process combining quantitative and qualitative methods. Creating a [strong empirical evaluation](https://docs.anthropic.com/en/docs/build-with-claude/develop-tests#building-evals-and-test-cases) based on your defined success criteria will allow you to optimize your prompts.
The [Anthropic Console](https://console.anthropic.com/dashboard) now features an Evaluation tool that allows you to test your prompts under various scenarios.
### Improve performance
In complex scenarios, it may be helpful to consider additional strategies to improve performance beyond standard [prompt engineering techniques](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) & [guardrail implementation strategies](https://docs.anthropic.com/en/docs/test-and-evaluate/strengthen-guardrails/reduce-hallucinations). Here are some common scenarios:
#### Reduce long context latency with RAG
When dealing with large amounts of static and dynamic context, including all information in the prompt can lead to high costs, slower response times, and reaching context window limits. In this scenario, implementing Retrieval Augmented Generation (RAG) techniques can significantly improve performance and efficiency.
By using [embedding models like Voyage](https://docs.anthropic.com/en/docs/build-with-claude/embeddings) to convert information into vector representations, you can create a more scalable and responsive system. This approach allows for dynamic retrieval of relevant information based on the current query, rather than including all possible context in every prompt.
Implementing RAG for support use cases, as shown in our [RAG recipe](https://github.com/anthropics/anthropic-cookbook/blob/82675c124e1344639b2a875aa9d3ae854709cd83/skills/classification/guide.ipynb), has been shown to increase accuracy, reduce response times, and reduce API costs in systems with extensive context requirements.
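Below is a minimal sketch of what this retrieval step could look like. The `embed()` helper is a placeholder for whichever embedding provider you choose (for example, Voyage), and the chunk list in the usage comment is illustrative; the key idea is to select only the most relevant chunks per query instead of sending every static section every time.
```python
import numpy as np

def embed(texts):
    """Return one embedding vector per input text.

    Placeholder: swap in a real embedding provider here (e.g. the Voyage
    client referenced above). The rest of the sketch only needs a list of
    equal-length vectors back.
    """
    raise NotImplementedError("plug in your embedding provider")

def retrieve_context(query, knowledge_chunks, top_k=2):
    # Embed the corpus and the query, rank chunks by cosine similarity,
    # and return only the best matches to include in the prompt.
    chunk_vectors = np.array(embed(knowledge_chunks))
    query_vector = np.array(embed([query])[0])
    scores = chunk_vectors @ query_vector / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(query_vector)
    )
    best = np.argsort(scores)[::-1][:top_k]
    return "\n\n".join(knowledge_chunks[i] for i in best)

# Usage: build the prompt from retrieved chunks instead of every static section.
# context = retrieve_context(user_msg, [STATIC_CAR_INSURANCE, STATIC_ELECTRIC_CAR_INSURANCE])
```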
#### Integrate real-time data with tool use
When dealing with queries that require real-time information, such as account balances or policy details, embedding-based RAG approaches are not sufficient. Instead, you can leverage tool use to significantly enhance your chatbot's ability to provide accurate, real-time responses. For example, you can use tool use to look up customer information, retrieve order details, and cancel orders on behalf of the customer.
This approach, [outlined in our tool use: customer service agent recipe](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/customer_service_agent.ipynb), allows you to seamlessly integrate live data into Claude's responses and provide a more personalized and efficient customer experience.
#### Strengthen input and output guardrails
When deploying a chatbot, especially in customer service scenarios, it's crucial to prevent risks associated with misuse, out-of-scope queries, and inappropriate responses. While Claude is inherently resilient to such scenarios, here are additional steps to strengthen your chatbot guardrails:
* [Reduce hallucination](https://docs.anthropic.com/en/docs/test-and-evaluate/strengthen-guardrails/reduce-hallucinations): Implement fact-checking mechanisms and [citations](https://github.com/anthropics/anthropic-cookbook/blob/main/skills/citations/guide.ipynb) to ground responses in provided information.
* Cross-check information: Verify that the agent's responses align with your company's policies and known facts.
* Avoid contractual commitments: Ensure the agent doesn't make promises or enter into agreements it's not authorized to make.
* [Mitigate jailbreaks](https://docs.anthropic.com/en/docs/test-and-evaluate/strengthen-guardrails/mitigate-jailbreaks): Use methods like harmlessness screens and input validation to prevent users from exploiting model vulnerabilities to generate inappropriate content.
* Avoid mentioning competitors: Implement a competitor mention filter to maintain brand focus and avoid mentioning any competitor's products or services.
* [Keep Claude in character](https://docs.anthropic.com/en/docs/test-and-evaluate/strengthen-guardrails/keep-claude-in-character): Prevent Claude from changing its style or context, even during long, complex interactions.
* Remove Personally Identifiable Information (PII): Unless explicitly required and authorized, strip out any PII from responses.
#### Reduce perceived response time with streaming
When dealing with potentially lengthy responses, implementing streaming can significantly improve user engagement and satisfaction. In this scenario, users receive the answer progressively instead of waiting for the entire response to be generated.
Here is how to implement streaming:
1. Use the [Anthropic Streaming API](https://docs.anthropic.com/en/api/messages-streaming) to support streaming responses.
2. Set up your frontend to handle incoming chunks of text.
3. Display each chunk as it arrives, simulating real-time typing.
4. Implement a mechanism to save the full response, allowing users to view it if they navigate away and return.
In some cases, streaming enables the use of more advanced models with higher base latencies, as the progressive display mitigates the impact of longer processing times.
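As a rough sketch of step 1, the Python SDK's streaming helper yields text chunks as they are generated; a frontend (such as the Streamlit app above) would append each chunk to the chat window instead of printing it. The model name and example prompt here are illustrative.
```python
from anthropic import Anthropic

client = Anthropic()

def stream_reply(messages, model="claude-3-5-sonnet-20241022"):
    # Yield the assistant's reply chunk by chunk so the UI can render it progressively.
    with client.messages.stream(
        model=model,
        max_tokens=1024,
        messages=messages,
    ) as stream:
        for text in stream.text_stream:
            yield text

# Example: print chunks as they arrive and keep the full response for later display.
full_response = ""
for chunk in stream_reply([{"role": "user", "content": "Tell me about your car insurance."}]):
    print(chunk, end="", flush=True)
    full_response += chunk
```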
#### Scale your chatbot
As the complexity of your chatbot grows, your application architecture can evolve to match. Before adding further layers to your architecture, consider the following (non-exhaustive) options:
* Ensure that you are making the most out of your prompts and optimizing through prompt engineering. Use our [prompt engineering guides](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) to write the most effective prompts.
* Add additional [tools](https://docs.anthropic.com/en/docs/build-with-claude/tool-use) to the prompt (which can include [prompt chains](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/chain-prompts)) and see if you can achieve the functionality required.
If your chatbot handles incredibly varied tasks, you may want to consider adding a [separate intent classifier](https://github.com/anthropics/anthropic-cookbook/blob/main/skills/classification/guide.ipynb) to route the initial customer query. For the existing application, this would involve creating a decision tree that routes customer queries through the classifier and then to specialized conversations (with their own set of tools and system prompts). Note that this method requires an additional call to Claude, which can increase latency.
### Integrate Claude into your support workflow
While our examples have focused on Python functions callable within a Streamlit environment, deploying Claude for a real-time support chatbot requires an API service.
Here's how you can approach this:
1. Create an API wrapper: Develop a simple API wrapper around your chatbot logic. For example, you can use Flask or FastAPI to wrap your code into an HTTP service (see the sketch after this list). Your HTTP service could accept the user input and return the Assistant response in its entirety. Thus, your service could have the following characteristics:
* Server-Sent Events (SSE): SSE allows for real-time streaming of responses from the server to the client. This is crucial for providing a smooth, interactive experience when working with LLMs.
* Caching: Implementing caching can significantly improve response times and reduce unnecessary API calls.
* Context retention: Maintaining context when a user navigates away and returns is important for continuity in conversations.
2. Build a web interface: Implement a user-friendly web UI for interacting with the Claude-powered agent.
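Here is a minimal, non-production sketch of such a wrapper using FastAPI and Server-Sent Events. The endpoint path, payload shape, and model name are illustrative assumptions; in a real service you would plug in the ChatBot logic above, plus caching and session handling.
```python
from anthropic import Anthropic
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()
client = Anthropic()

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(req: ChatRequest):
    def event_stream():
        # Stream the assistant's reply to the client as SSE "data:" lines.
        with client.messages.stream(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1024,
            messages=[{"role": "user", "content": req.message}],
        ) as stream:
            for text in stream.text_stream:
                yield f"data: {text}\n\n"
        yield "data: [DONE]\n\n"
    return StreamingResponse(event_stream(), media_type="text/event-stream")
```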
Visit our RAG cookbook recipe for more example code and detailed guidance.
Explore our Citations cookbook recipe for how to ensure accuracy and explainability of information.
# Legal summarization
This guide walks through how to leverage Claude's advanced natural language processing capabilities to efficiently summarize legal documents, extracting key information and expediting legal research. With Claude, you can streamline the review of contracts, litigation prep, and regulatory work, saving time and ensuring accuracy in your legal processes.
> Visit our [summarization cookbook](https://github.com/anthropics/anthropic-cookbook/blob/main/skills/summarization/guide.ipynb) to see an example legal summarization implementation using Claude.
## Before building with Claude
### Decide whether to use Claude for legal summarization
Here are some key indicators that you should employ an LLM like Claude to summarize legal documents:
* Large-scale document review can be time-consuming and expensive when done manually. Claude can process and summarize vast amounts of legal documents rapidly, significantly reducing the time and cost associated with document review. This capability is particularly valuable for tasks like due diligence, contract analysis, or litigation discovery, where efficiency is crucial.
* Claude can efficiently extract and categorize important metadata from legal documents, such as parties involved, dates, contract terms, or specific clauses. This automated extraction can help organize information, making it easier to search, analyze, and manage large document sets. It's especially useful for contract management, compliance checks, or creating searchable databases of legal information.
* Claude can generate structured summaries that follow predetermined formats, making it easier for legal professionals to quickly grasp the key points of various documents. These standardized summaries can improve readability, facilitate comparison between documents, and enhance overall comprehension, especially when dealing with complex legal language or technical jargon.
* When creating legal summaries, proper attribution and citation are crucial to ensure credibility and compliance with legal standards. Claude can be prompted to include accurate citations for all referenced legal points, making it easier for legal professionals to review and verify the summarized information.
* Claude can assist in legal research by quickly analyzing large volumes of case law, statutes, and legal commentary. It can identify relevant precedents, extract key legal principles, and summarize complex legal arguments. This capability can significantly speed up the research process, allowing legal professionals to focus on higher-level analysis and strategy development.
### Determine the details you want the summarization to extract
There is no single correct summary for any given document. Without clear direction, it can be difficult for Claude to determine which details to include. To achieve optimal results, identify the specific information you want to include in the summary.
For instance, when summarizing a sublease agreement, you might wish to extract the following key points:
```python
details_to_extract = [
'Parties involved (sublessor, sublessee, and original lessor)',
'Property details (address, description, and permitted use)',
'Term and rent (start date, end date, monthly rent, and security deposit)',
'Responsibilities (utilities, maintenance, and repairs)',
'Consent and notices (landlord\'s consent, and notice requirements)',
'Special provisions (furniture, parking, and subletting restrictions)'
]
```
### Establish success criteria
Evaluating the quality of summaries is a notoriously challenging task. Unlike many other natural language processing tasks, evaluation of summaries often lacks clear-cut, objective metrics. The process can be highly subjective, with different readers valuing different aspects of a summary. Here are criteria you may wish to consider when assessing how well Claude performs legal summarization.
* The summary should accurately represent the facts, legal concepts, and key points in the document.
* Terminology and references to statutes, case law, or regulations must be correct and aligned with legal standards.
* The summary should condense the legal document to its essential points without losing important details.
* If summarizing multiple documents, the LLM should maintain a consistent structure and approach to each summary.
* The text should be clear and easy to understand. If the audience is not legal experts, the summarization should not include legal jargon that could confuse the audience.
* The summary should present an unbiased and fair depiction of the legal arguments and positions.
See our guide on [establishing success criteria](/en/docs/build-with-claude/define-success) for more information.
***
## How to summarize legal documents using Claude
### Select the right Claude model
Model accuracy is extremely important when summarizing legal documents. Claude 3.5 Sonnet is an excellent choice for use cases such as this where high accuracy is required. If the size and quantity of your documents are large enough that costs start to become a concern, you can also try using a smaller model like Claude 3 Haiku.
To help estimate these costs, below is a comparison of the cost to summarize 1,000 sublease agreements using both Sonnet and Haiku:
* **Content size**
* Number of agreements: 1,000
* Characters per agreement: 300,000
* Total characters: 300M
* **Estimated tokens**
* Input tokens: 86M (assuming 1 token per 3.5 characters)
* Output tokens per summary: 350
* Total output tokens: 350,000
* **Claude 3.5 Sonnet estimated cost**
* Input token cost: 86 MTok \* \$3.00/MTok = \$258
* Output token cost: 0.35 MTok \* \$15.00/MTok = \$5.25
* Total cost: \$258.00 + \$5.25 = \$263.25
* **Claude 3 Haiku estimated cost**
* Input token cost: 86 MTok \* \$0.25/MTok = \$21.50
* Output token cost: 0.35 MTok \* \$1.25/MTok = \$0.44
* Total cost: \$21.50 + \$0.44 = \$21.96
Actual costs may differ from these estimates. These estimates are based on the example highlighted in the section on [prompting](#build-a-strong-prompt).
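If you want to reproduce or adjust these numbers, the arithmetic is straightforward; the sketch below simply recomputes the estimates above from the stated assumptions.
```python
NUM_AGREEMENTS = 1_000
CHARS_PER_AGREEMENT = 300_000
CHARS_PER_TOKEN = 3.5            # rough conversion assumption
OUTPUT_TOKENS_PER_SUMMARY = 350

input_mtok = NUM_AGREEMENTS * CHARS_PER_AGREEMENT / CHARS_PER_TOKEN / 1e6   # ~86 MTok
output_mtok = NUM_AGREEMENTS * OUTPUT_TOKENS_PER_SUMMARY / 1e6              # 0.35 MTok

prices = {                        # $ per MTok (input, output)
    "Claude 3.5 Sonnet": (3.00, 15.00),
    "Claude 3 Haiku": (0.25, 1.25),
}

for model_name, (input_price, output_price) in prices.items():
    total = input_mtok * input_price + output_mtok * output_price
    print(f"{model_name}: ${total:,.2f}")
```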
### Transform documents into a format that Claude can process
Before you begin summarizing documents, you need to prepare your data. This involves extracting text from PDFs, cleaning the text, and ensuring it's ready to be processed by Claude.
Here is a demonstration of this process on a sample pdf:
```python
from io import BytesIO
import re
import pypdf
import requests
def get_llm_text(pdf_file):
    reader = pypdf.PdfReader(pdf_file)
    text = "\n".join([page.extract_text() for page in reader.pages])

    # Remove page numbers (lines containing only digits) while newlines are still present
    text = re.sub(r'\n\s*\d+\s*\n', '\n', text)

    # Collapse extra whitespace
    text = re.sub(r'\s+', ' ', text)

    return text
# Create the full URL from the GitHub repository
url = "https://raw.githubusercontent.com/anthropics/anthropic-cookbook/main/skills/summarization/data/Sample Sublease Agreement.pdf"
url = url.replace(" ", "%20")
# Download the PDF file into memory
response = requests.get(url)
# Load the PDF from memory
pdf_file = BytesIO(response.content)
document_text = get_llm_text(pdf_file)
print(document_text[:50000])
```
In this example, we first download a pdf of a sample sublease agreement used in the [summarization cookbook](https://github.com/anthropics/anthropic-cookbook/blob/main/skills/summarization/data/Sample%20Sublease%20Agreement.pdf). This agreement was sourced from a publicly available sublease agreement from the [sec.gov website](https://www.sec.gov/Archives/edgar/data/1045425/000119312507044370/dex1032.htm).
We use the pypdf library to extract the contents of the pdf and convert it to text. The text data is then cleaned by removing extra whitespace and page numbers.
### Build a strong prompt
Claude can adapt to various summarization styles. You can change the details of the prompt to guide Claude to be more or less verbose, include more or less technical terminology, or provide a higher or lower level summary of the context at hand.
Here’s an example of how to create a prompt that ensures the generated summaries follow a consistent structure when analyzing sublease agreements:
```python
import anthropic
# Initialize the Anthropic client
client = anthropic.Anthropic()
def summarize_document(text, details_to_extract, model="claude-3-5-sonnet-20241022", max_tokens=1000):

    # Format the details to extract to be placed within the prompt's context
    details_to_extract_str = '\n'.join(details_to_extract)

    # Prompt the model to summarize the sublease agreement
    prompt = f"""Summarize the following sublease agreement. Focus on these key aspects:

{details_to_extract_str}

Provide the summary in bullet points nested within the XML header for each section. For example:

<parties involved>
- Sublessor: [Name]
// Add more details as needed
</parties involved>

If any information is not explicitly stated in the document, note it as "Not specified". Do not preamble.

Sublease agreement text:
{text}
"""

    response = client.messages.create(
        model=model,
        max_tokens=max_tokens,
        system="You are a legal analyst specializing in real estate law, known for highly accurate and detailed summaries of sublease agreements.",
        messages=[
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": "Here is the summary of the sublease agreement: <summary>"}
        ],
        stop_sequences=["</summary>"]
    )

    return response.content[0].text

sublease_summary = summarize_document(document_text, details_to_extract)
print(sublease_summary)
```
This code implements a `summarize_document` function that uses Claude to summarize the contents of a sublease agreement. The function accepts a text string and a list of details to extract as inputs. In this example, we call the function with the `document_text` and `details_to_extract` variables that were defined in the previous code snippets.
Within the function, a prompt is generated for Claude, including the document to be summarized, the details to extract, and specific instructions for summarizing the document. The prompt instructs Claude to respond with a summary of each detail to extract nested within XML headers.
Because we decided to output each section of the summary within XML tags, each section can easily be parsed out as a post-processing step, as sketched below. This approach enables structured summaries that can be adapted for your use case, so that each summary follows the same pattern.
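For example, a small post-processing helper along these lines could pull out one section at a time; the section name shown is just one of the headers requested in the prompt above, so adjust it to whatever headers you asked for.
```python
import re

def extract_section(summary_text, section_name):
    # Pull the bullet points for a single XML-tagged section out of the summary.
    match = re.search(
        rf"<{re.escape(section_name)}>(.*?)</{re.escape(section_name)}>",
        summary_text,
        re.DOTALL,
    )
    return match.group(1).strip() if match else None

parties_section = extract_section(sublease_summary, "parties involved")
print(parties_section)
```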
### Evaluate your prompt
Prompting often requires testing and optimization for it to be production ready. To determine the readiness of your solution, evaluate the quality of your summaries using a systematic process combining quantitative and qualitative methods. Creating a [strong empirical evaluation](https://docs.anthropic.com/en/docs/build-with-claude/develop-tests#building-evals-and-test-cases) based on your defined success criteria will allow you to optimize your prompts. Here are some metrics you may wish to include within your empirical evaluation:
* **ROUGE scores:** This measures the overlap between the generated summary and an expert-created reference summary. This metric primarily focuses on recall and is useful for evaluating content coverage.
* **BLEU scores:** While originally developed for machine translation, this metric can be adapted for summarization tasks. BLEU scores measure the precision of n-gram matches between the generated summary and reference summaries. A higher score indicates that the generated summary contains similar phrases and terminology to the reference summary.
* **Contextual embedding similarity:** This metric involves creating vector representations (embeddings) of both the generated and reference summaries. The similarity between these embeddings is then calculated, often using cosine similarity. Higher similarity scores indicate that the generated summary captures the semantic meaning and context of the reference summary, even if the exact wording differs.
* **LLM-based grading:** This method involves using an LLM such as Claude to evaluate the quality of generated summaries against a scoring rubric. The rubric can be tailored to your specific needs, assessing key factors like accuracy, completeness, and coherence. For guidance on implementing LLM-based grading, view these [tips](https://docs.anthropic.com/en/docs/build-with-claude/develop-tests#tips-for-llm-based-grading).
* **Human evaluation:** In addition to creating the reference summaries, legal experts can also evaluate the quality of the generated summaries. While this is expensive and time-consuming at scale, this is often done on a few summaries as a sanity check before deploying to production.
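As one hedged example, LLM-based grading might look something like the following; the rubric wording and the 1-5 scale are illustrative and should be adapted to your success criteria.
```python
import anthropic

client = anthropic.Anthropic()

def grade_summary(generated_summary, reference_summary, model="claude-3-5-sonnet-20241022"):
    # Ask Claude to score a generated summary against an expert reference on a 1-5 scale.
    grading_prompt = f"""Score the following legal summary from 1 (poor) to 5 (excellent) for accuracy,
completeness, and coherence compared to the reference summary. Respond with only the number.

<reference>{reference_summary}</reference>
<generated>{generated_summary}</generated>"""

    response = client.messages.create(
        model=model,
        max_tokens=5,
        messages=[{"role": "user", "content": grading_prompt}],
    )
    return int(response.content[0].text.strip())
```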
### Deploy your prompt
Here are some additional considerations to keep in mind as you deploy your solution to production.
1. **Ensure no liability:** Understand the legal implications of errors in the summaries, which could lead to legal liability for your organization or clients. Provide disclaimers or legal notices clarifying that the summaries are generated by AI and should be reviewed by legal professionals.
2. **Handle diverse document types:** In this guide, we’ve discussed how to extract text from PDFs. In the real-world, documents may come in a variety of formats (PDFs, Word documents, text files, etc.). Ensure your data extraction pipeline can convert all of the file formats you expect to receive.
3. **Parallelize API calls to Claude:** Long documents with a large number of tokens may require up to a minute for Claude to generate a summary. For large document collections, you may want to send API calls to Claude in parallel so that the summaries can be completed in a reasonable timeframe. Refer to Anthropic’s [rate limits](https://docs.anthropic.com/en/api/rate-limits#rate-limits) to determine the maximum amount of API calls that can be performed in parallel.
***
## Improve performance
In complex scenarios, it may be helpful to consider additional strategies to improve performance beyond standard [prompt engineering techniques](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview). Here are some advanced strategies:
### Perform meta-summarization to summarize long documents
Legal summarization often involves handling long documents or many related documents at once, such that you surpass Claude’s context window. You can use a chunking method known as meta-summarization in order to handle this use case. This technique involves breaking down documents into smaller, manageable chunks and then processing each chunk separately. You can then combine the summaries of each chunk to create a meta-summary of the entire document.
Here's an example of how to perform meta-summarization:
```python
import anthropic
# Initialize the Anthropic client
client = anthropic.Anthropic()
def chunk_text(text, chunk_size=20000):
    return [text[i:i+chunk_size] for i in range(0, len(text), chunk_size)]

def summarize_long_document(text, details_to_extract, model="claude-3-5-sonnet-20241022", max_tokens=1000):

    # Format the details to extract to be placed within the prompt's context
    details_to_extract_str = '\n'.join(details_to_extract)

    # Iterate over chunks and summarize each one
    chunk_summaries = [summarize_document(chunk, details_to_extract, model=model, max_tokens=max_tokens) for chunk in chunk_text(text)]

    final_summary_prompt = f"""
You are looking at the chunked summaries of multiple documents that are all related.
Combine the following summaries of the document from different truthful sources into a coherent overall summary:

{"".join(chunk_summaries)}

Focus on these key aspects:
{details_to_extract_str}

Provide the summary in bullet points nested within the XML header for each section. For example:

<parties involved>
- Sublessor: [Name]
// Add more details as needed
</parties involved>

If any information is not explicitly stated in the document, note it as "Not specified". Do not preamble.
"""

    response = client.messages.create(
        model=model,
        max_tokens=max_tokens,
        system="You are a legal expert that summarizes notes on one document.",
        messages=[
            {"role": "user", "content": final_summary_prompt},
            {"role": "assistant", "content": "Here is the summary of the sublease agreement: <summary>"}
        ],
        stop_sequences=["</summary>"]
    )

    return response.content[0].text
long_summary = summarize_long_document(document_text, details_to_extract)
print(long_summary)
```
The `summarize_long_document` function builds upon the earlier `summarize_document` function by splitting the document into smaller chunks and summarizing each chunk individually.
The code achieves this by applying the `summarize_document` function to each chunk of 20,000 characters within the original document. The individual summaries are then combined, and a final summary is created from these chunk summaries.
Note that the `summarize_long_document` function isn’t strictly necessary for our example pdf, as the entire document fits within Claude’s context window. However, it becomes essential for documents exceeding Claude’s context window or when summarizing multiple related documents together. Regardless, this meta-summarization technique often captures additional important details in the final summary that were missed in the earlier single-summary approach.
### Use summary indexed documents to explore a large collection of documents
Searching a collection of documents with an LLM usually involves retrieval-augmented generation (RAG). However, in scenarios involving large documents or when precise information retrieval is crucial, a basic RAG approach may be insufficient. Summary indexed documents is an advanced RAG approach that provides a more efficient way of ranking documents for retrieval, using less context than traditional RAG methods. In this approach, you first use Claude to generate a concise summary for each document in your corpus, and then use Claude to rank the relevance of each summary to the query being asked. For further details on this approach, including a code-based example, check out the summary indexed documents section in the [summarization cookbook](https://github.com/anthropics/anthropic-cookbook/blob/main/skills/summarization/guide.ipynb).
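A rough sketch of the ranking step might look like this; the prompt format and numbering scheme are assumptions, and the cookbook linked above covers the full approach.
```python
import re
import anthropic

client = anthropic.Anthropic()

def rank_summaries(query, doc_summaries, model="claude-3-5-sonnet-20241022"):
    # Present the pre-computed document summaries to Claude and ask which
    # documents are most relevant to the query, returned as a list of indices.
    numbered = "\n".join(f"{i}: {summary}" for i, summary in enumerate(doc_summaries))
    prompt = f"""Here are numbered summaries of several documents:

{numbered}

List the numbers of the documents most relevant to the question below,
most relevant first, as a comma-separated list. Respond with only the numbers.

Question: {query}"""

    response = client.messages.create(
        model=model,
        max_tokens=100,
        messages=[{"role": "user", "content": prompt}],
    )
    return [int(i) for i in re.findall(r"\d+", response.content[0].text)]
```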
### Fine-tune Claude to learn from your dataset
Another advanced technique to improve Claude's ability to generate summaries is fine-tuning. Fine-tuning involves training Claude on a custom dataset that specifically aligns with your legal summarization needs, ensuring that Claude adapts to your use case. Here’s an overview on how to perform fine-tuning:
1. **Identify errors:** Start by collecting instances where Claude’s summaries fall short - this could include missing critical legal details, misunderstanding context, or using inappropriate legal terminology.
2. **Curate a dataset:** Once you've identified these issues, compile a dataset of these problematic examples. This dataset should include the original legal documents alongside your corrected summaries, ensuring that Claude learns the desired behavior.
3. **Perform fine-tuning:** Fine-tuning involves retraining the model on your curated dataset to adjust its weights and parameters. This retraining helps Claude better understand the specific requirements of your legal domain, improving its ability to summarize documents according to your standards.
4. **Iterative improvement:** Fine-tuning is not a one-time process. As Claude continues to generate summaries, you can iteratively add new examples where it has underperformed, further refining its capabilities. Over time, this continuous feedback loop will result in a model that is highly specialized for your legal summarization tasks.
Fine-tuning is currently only available via Amazon Bedrock. Additional details are available in the [AWS launch blog](https://aws.amazon.com/blogs/machine-learning/fine-tune-anthropics-claude-3-haiku-in-amazon-bedrock-to-boost-model-accuracy-and-quality/).
View a fully implemented code-based example of how to use Claude to summarize contracts.
Explore our Citations cookbook recipe for guidance on how to ensure accuracy and explainability of information.
# Guides to common use cases
Claude is designed to excel in a variety of tasks. Explore these in-depth production guides to learn how to build common use cases with Claude.
* Best practices for using Claude to classify and route customer support tickets at scale.
* Build intelligent, context-aware chatbots with Claude to enhance customer support interactions.
* Techniques and best practices for using Claude to perform content filtering and general content moderation.
* Summarize legal documents using Claude to extract key information and expedite research.
# Ticket routing
This guide walks through how to harness Claude's advanced natural language understanding capabilities to classify customer support tickets at scale based on customer intent, urgency, prioritization, customer profile, and more.
## Define whether to use Claude for ticket routing
Here are some key indicators that you should use an LLM like Claude instead of traditional ML approaches for your classification task:
Traditional ML processes require massive labeled datasets. Claude's pre-trained model can effectively classify tickets with just a few dozen labeled examples, significantly reducing data preparation time and costs.
Once a traditional ML approach has been established, changing it is a laborious and data-intensive undertaking. On the other hand, as your product or customer needs evolve, Claude can easily adapt to changes in class definitions or new classes without extensive relabeling of training data.
Traditional ML models often struggle with unstructured data and require extensive feature engineering. Claude's advanced language understanding allows for accurate classification based on content and context, rather than relying on strict ontological structures.
Traditional ML approaches often rely on bag-of-words models or simple pattern matching. Claude excels at understanding and applying underlying rules when classes are defined by conditions rather than examples.
Many traditional ML models provide little insight into their decision-making process. Claude can provide human-readable explanations for its classification decisions, building trust in the automation system and facilitating easy adaptation if needed.
Traditional ML systems often struggle with outliers and ambiguous inputs, frequently misclassifying them or defaulting to a catch-all category. Claude's natural language processing capabilities allow it to better interpret context and nuance in support tickets, potentially reducing the number of misrouted or unclassified tickets that require manual intervention.
Traditional ML approaches typically require separate models or extensive translation processes for each supported language. Claude's multilingual capabilities allow it to classify tickets in various languages without the need for separate models or extensive translation processes, streamlining support for global customer bases.
***
## Build and deploy your LLM support workflow
### Understand your current support approach
Before diving into automation, it's crucial to understand your existing ticketing system. Start by investigating how your support team currently handles ticket routing.
Consider questions like:
* What criteria are used to determine what SLA/service offering is applied?
* Is ticket routing used to determine which tier of support or product specialist a ticket goes to?
* Are there any automated rules or workflows already in place? In what cases do they fail?
* How are edge cases or ambiguous tickets handled?
* How does the team prioritize tickets?
The more you know about how humans handle certain cases, the better you will be able to work with Claude to do the task.
### Define user intent categories
A well-defined list of user intent categories is crucial for accurate support ticket classification with Claude. Claude’s ability to route tickets effectively within your system is directly proportional to how well-defined your system’s categories are.
Here are some example user intent categories and subcategories.
* Hardware problem
* Software bug
* Compatibility issue
* Performance problem
* Password reset
* Account access issues
* Billing inquiries
* Subscription changes
* Feature inquiries
* Product compatibility questions
* Pricing information
* Availability inquiries
* How-to questions
* Feature usage assistance
* Best practices advice
* Troubleshooting guidance
* Bug reports
* Feature requests
* General feedback or suggestions
* Complaints
* Order status inquiries
* Shipping information
* Returns and exchanges
* Order modifications
* Installation assistance
* Upgrade requests
* Maintenance scheduling
* Service cancellation
* Data privacy inquiries
* Suspicious activity reports
* Security feature assistance
* Regulatory compliance questions
* Terms of service inquiries
* Legal documentation requests
* Critical system failures
* Urgent security issues
* Time-sensitive problems
* Product training requests
* Documentation inquiries
* Webinar or workshop information
* Integration assistance
* API usage questions
* Third-party compatibility inquiries
In addition to intent, ticket routing and prioritization may also be influenced by other factors such as urgency, customer type, SLAs, or language. Be sure to consider other routing criteria when building your automated routing system.
### Establish success criteria
Work with your support team to [define clear success criteria](https://docs.anthropic.com/en/docs/build-with-claude/define-success) with measurable benchmarks, thresholds, and goals.
Here are some standard criteria and benchmarks when using LLMs for support ticket routing:
This metric assesses how consistently Claude classifies similar tickets over time. It's crucial for maintaining routing reliability. Measure this by periodically testing the model with a set of standardized inputs and aiming for a consistency rate of 95% or higher.
This measures how quickly Claude can adapt to new categories or changing ticket patterns. Test this by introducing new ticket types and measuring the time it takes for the model to achieve satisfactory accuracy (e.g., >90%) on these new categories. Aim for adaptation within 50-100 sample tickets.
This assesses Claude's ability to accurately route tickets in multiple languages. Measure the routing accuracy across different languages, aiming for no more than a 5-10% drop in accuracy for non-primary languages.
This evaluates Claude's performance on unusual or complex tickets. Create a test set of edge cases and measure the routing accuracy, aiming for at least 80% accuracy on these challenging inputs.
This measures Claude's fairness in routing across different customer demographics. Regularly audit routing decisions for potential biases, aiming for consistent routing accuracy (within 2-3%) across all customer groups.
In situations where minimizing token count is crucial, this criteria assesses how well Claude performs with minimal context. Measure routing accuracy with varying amounts of context provided, aiming for 90%+ accuracy with just the ticket title and a brief description.
This evaluates the quality and relevance of Claude's explanations for its routing decisions. Human raters can score explanations on a scale (e.g., 1-5), with the goal of achieving an average score of 4 or higher.
Here are some common success criteria that may be useful regardless of whether an LLM is used:
Routing accuracy measures how often tickets are correctly assigned to the appropriate team or individual on the first try. This is typically measured as a percentage of correctly routed tickets out of total tickets. Industry benchmarks often aim for 90-95% accuracy, though this can vary based on the complexity of the support structure.
This metric tracks how quickly tickets are assigned after being submitted. Faster assignment times generally lead to quicker resolutions and improved customer satisfaction. Best-in-class systems often achieve average assignment times of under 5 minutes, with many aiming for near-instantaneous routing (which is possible with LLM implementations).
The rerouting rate indicates how often tickets need to be reassigned after initial routing. A lower rate suggests more accurate initial routing. Aim for a rerouting rate below 10%, with top-performing systems achieving rates as low as 5% or less.
This measures the percentage of tickets resolved during the first interaction with the customer. Higher rates indicate efficient routing and well-prepared support teams. Industry benchmarks typically range from 70-75%, with top performers achieving rates of 80% or higher.
Average handling time measures how long it takes to resolve a ticket from start to finish. Efficient routing can significantly reduce this time. Benchmarks vary widely by industry and complexity, but many organizations aim to keep average handling time under 24 hours for non-critical issues.
Often measured through post-interaction surveys, these scores reflect overall customer happiness with the support process. Effective routing contributes to higher satisfaction. Aim for CSAT scores of 90% or higher, with top performers often achieving 95%+ satisfaction rates.
This measures how often tickets need to be escalated to higher tiers of support. Lower escalation rates often indicate more accurate initial routing. Strive for an escalation rate below 20%, with best-in-class systems achieving rates of 10% or less.
This metric looks at how many tickets agents can handle effectively after implementing the routing solution. Improved routing should increase productivity. Measure this by tracking tickets resolved per agent per day or hour, aiming for a 10-20% improvement after implementing a new routing system.
This measures the percentage of potential tickets resolved through self-service options before entering the routing system. Higher rates indicate effective pre-routing triage. Aim for a deflection rate of 20-30%, with top performers achieving rates of 40% or higher.
This metric calculates the average cost to resolve each support ticket. Efficient routing should help reduce this cost over time. While benchmarks vary widely, many organizations aim to reduce cost per ticket by 10-15% after implementing an improved routing system.
### Choose the right Claude model
The choice of model depends on the trade-offs between cost, accuracy, and response time.
Many customers have found `claude-3-haiku-20240307` an ideal model for ticket routing, as it is the fastest and most cost-effective model in the Claude 3 family while still delivering excellent results. If your classification problem requires deep subject matter expertise, a large volume of intent categories, or complex reasoning, you may opt for the [larger Sonnet model](https://docs.anthropic.com/en/docs/about-claude/models).
### Build a strong prompt
Ticket routing is a type of classification task. Claude analyzes the content of a support ticket and classifies it into predefined categories based on the issue type, urgency, required expertise, or other relevant factors.
Let’s write a ticket classification prompt. Our initial prompt should contain the contents of the user request and return both the reasoning and the intent.
Try the [prompt generator](https://docs.anthropic.com/en/docs/prompt-generator) on the [Anthropic Console](https://console.anthropic.com/login) to have Claude write a first draft for you.
Here's an example ticket routing classification prompt:
```python
def classify_support_request(ticket_contents):
    # Define the prompt for the classification task
    classification_prompt = f"""You will be acting as a customer support ticket classification system. Your task is to analyze customer support requests and output the appropriate classification intent for each request, along with your reasoning.

Here is the customer support request you need to classify:
<request>{ticket_contents}</request>

Please carefully analyze the above request to determine the customer's core intent and needs. Consider what the customer is asking for or has concerns about.

First, write out your reasoning and analysis of how to classify this request inside <reasoning> tags.

Then, output the appropriate classification label for the request inside an <intent> tag. The valid intents are:
<intent>Support, Feedback, Complaint</intent>
<intent>Order Tracking</intent>
<intent>Refund/Exchange</intent>

A request may have ONLY ONE applicable intent. Only include the intent that is most applicable to the request.

As an example, consider the following request:
<request>Hello! I had high-speed fiber internet installed on Saturday and my installer, Kevin, was absolutely fantastic! Where can I send my positive review? Thanks for your help!</request>

Here is an example of how your output should be formatted (for the above example request):
<reasoning>The user seeks information in order to leave positive feedback.</reasoning>
<intent>Support, Feedback, Complaint</intent>

Here are a few more examples:

Example 2 Input:
<request>I wanted to write and personally thank you for the compassion you showed towards my family during my father's funeral this past weekend. Your staff was so considerate and helpful throughout this whole process; it really took a load off our shoulders. The visitation brochures were beautiful. We'll never forget the kindness you showed us and we are so appreciative of how smoothly the proceedings went. Thank you, again, Amarantha Hill on behalf of the Hill Family.</request>

Example 2 Output:
<reasoning>User leaves a positive review of their experience.</reasoning>
<intent>Support, Feedback, Complaint</intent>

...

Example 9 Input:
<request>Your website keeps sending ad-popups that block the entire screen. It took me twenty minutes just to finally find the phone number to call and complain. How can I possibly access my account information with all of these popups? Can you access my account for me, since your website is broken? I need to know what the address is on file.</request>

Example 9 Output:
<reasoning>The user requests help accessing their web account information.</reasoning>
<intent>Support, Feedback, Complaint</intent>

Remember to always include your classification reasoning before your actual intent output. The reasoning should be enclosed in <reasoning> tags and the intent in <intent> tags. Return only the reasoning and the intent.
"""
```
Let's break down the key components of this prompt:
* We use Python f-strings to create the prompt template, allowing the `ticket_contents` to be inserted into the `<request>` tags.
* We give Claude a clearly defined role as a classification system that carefully analyzes the ticket content to determine the customer's core intent and needs.
* We instruct Claude on proper output formatting, in this case to provide its reasoning and analysis inside `<reasoning>` tags, followed by the appropriate classification label inside `<intent>` tags.
* We specify the valid intent categories: "Support, Feedback, Complaint", "Order Tracking", and "Refund/Exchange".
* We include a few examples (a.k.a. few-shot prompting) to illustrate how the output should be formatted, which improves accuracy and consistency.
The reason we want to have Claude split its response into various XML tag sections is so that we can use regular expressions to separately extract the reasoning and intent from the output. This allows us to create targeted next steps in the ticket routing workflow, such as using only the intent to decide which person to route the ticket to.
### Deploy your prompt
It’s hard to know how well your prompt works without deploying it in a test production setting and [running evaluations](https://docs.anthropic.com/en/docs/build-with-claude/develop-tests).
Let’s build the deployment structure. Start by defining the method signature for wrapping our call to Claude. We'll take the method we’ve already begun to write, which has `ticket_contents` as input, and now return a tuple of `reasoning` and `intent` as output. If you have an existing automation using traditional ML, you'll want to follow that method signature instead.
```python
import anthropic
import re
# Create an instance of the Anthropic API client
client = anthropic.Anthropic()
# Set the default model
DEFAULT_MODEL = "claude-3-haiku-20240307"

def classify_support_request(ticket_contents):
    # Define the prompt for the classification task
    classification_prompt = f"""You will be acting as a customer support ticket classification system.
...
... The reasoning should be enclosed in <reasoning> tags and the intent in <intent> tags. Return only the reasoning and the intent.
"""
    # Send the prompt to the API to classify the support request.
    message = client.messages.create(
        model=DEFAULT_MODEL,
        max_tokens=500,
        temperature=0,
        messages=[{"role": "user", "content": classification_prompt}],
        stream=False,
    )
    reasoning_and_intent = message.content[0].text

    # Use Python's regular expressions library to extract `reasoning`.
    reasoning_match = re.search(
        r"<reasoning>(.*?)</reasoning>", reasoning_and_intent, re.DOTALL
    )
    reasoning = reasoning_match.group(1).strip() if reasoning_match else ""

    # Similarly, also extract the `intent`.
    intent_match = re.search(r"<intent>(.*?)</intent>", reasoning_and_intent, re.DOTALL)
    intent = intent_match.group(1).strip() if intent_match else ""

    return reasoning, intent
```
This code:
* Imports the Anthropic library and creates a client instance using your API key.
* Defines a `classify_support_request` function that takes a `ticket_contents` string.
* Sends the `ticket_contents` to Claude for classification using the `classification_prompt`
* Returns the model's `reasoning` and `intent` extracted from the response.
Since we need to wait for the entire reasoning and intent text to be generated before parsing, we set `stream=False` (the default).
***
## Evaluate your prompt
Prompting often requires testing and optimization for it to be production ready. To determine the readiness of your solution, evaluate performance based on the success criteria and thresholds you established earlier.
To run your evaluation, you will need test cases to run it on. The rest of this guide assumes you have already [developed your test cases](https://docs.anthropic.com/en/docs/build-with-claude/develop-tests).
### Build an evaluation function
Our example evaluation for this guide measures Claude's performance along three key metrics:
* Accuracy
* Cost per classification
* Response time per classification
You may need to assess Claude on other axes depending on what factors are important to you.
To assess this, we first have to modify the script we wrote and add a function to compare the predicted intent with the actual intent and calculate the percentage of correct predictions. We also have to add in cost calculation and time measurement functionality.
```python
import anthropic
import re
import time
# Create an instance of the Anthropic API client
client = anthropic.Anthropic()
# Set the default model
DEFAULT_MODEL="claude-3-haiku-20240307"
def classify_support_request(request, actual_intent):
    # Define the prompt for the classification task
    classification_prompt = f"""You will be acting as a customer support ticket classification system.
...
...The reasoning should be enclosed in <reasoning> tags and the intent in <intent> tags. Return only the reasoning and the intent.
"""
    # Time the API call so that response time per classification can be reported.
    start_time = time.time()
    message = client.messages.create(
        model=DEFAULT_MODEL,
        max_tokens=500,
        temperature=0,
        messages=[{"role": "user", "content": classification_prompt}],
    )
    elapsed_time = time.time() - start_time
    usage = message.usage  # Get the usage statistics for the API call for how many input and output tokens were used.
    reasoning_and_intent = message.content[0].text

    # Use Python's regular expressions library to extract `reasoning`.
    reasoning_match = re.search(
        r"<reasoning>(.*?)</reasoning>", reasoning_and_intent, re.DOTALL
    )
    reasoning = reasoning_match.group(1).strip() if reasoning_match else ""

    # Similarly, also extract the `intent`.
    intent_match = re.search(r"<intent>(.*?)</intent>", reasoning_and_intent, re.DOTALL)
    intent = intent_match.group(1).strip() if intent_match else ""

    # Check if the model's prediction is correct.
    correct = actual_intent.strip() == intent.strip()

    # Return the reasoning, intent, correctness, usage, and elapsed time.
    return reasoning, intent, correct, usage, elapsed_time
```
Let’s break down the edits we’ve made:
* We added the `actual_intent` from our test cases into the `classify_support_request` method and set up a comparison to assess whether Claude’s intent classification matches our golden intent classification.
* We extracted usage statistics for the API call to calculate cost based on the input and output tokens used.
* We timed each API call so that response time per classification can also be reported.
### Run your evaluation
A proper evaluation requires clear thresholds and benchmarks to determine what is a good result. The script above will give us the runtime values for accuracy, response time, and cost per classification, but we still would need clearly established thresholds. For example:
* **Accuracy:** 95% (out of 100 tests)
* **Cost per classification:** 50% reduction on average (across 100 tests) from current routing method
Having these thresholds allows you to quickly, easily, and empirically tell at scale which method is best for you and what changes might need to be made to better fit your requirements.
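To put these thresholds into practice, you could wrap the function above in a small harness like the following. Here `test_cases` is assumed to be a list of `(ticket_contents, actual_intent)` pairs you have already labeled, and the per-MTok prices shown are the Claude 3 Haiku figures; adjust them to the model you use.
```python
INPUT_PRICE_PER_MTOK = 0.25    # Claude 3 Haiku input price, $ per million tokens
OUTPUT_PRICE_PER_MTOK = 1.25   # Claude 3 Haiku output price, $ per million tokens

def evaluate(test_cases):
    correct_count = 0
    total_cost = 0.0
    total_time = 0.0
    for ticket_contents, actual_intent in test_cases:
        _, _, correct, usage, elapsed = classify_support_request(ticket_contents, actual_intent)
        correct_count += int(correct)
        total_cost += (usage.input_tokens * INPUT_PRICE_PER_MTOK
                       + usage.output_tokens * OUTPUT_PRICE_PER_MTOK) / 1e6
        total_time += elapsed
    n = len(test_cases)
    return correct_count / n, total_cost / n, total_time / n

accuracy, avg_cost, avg_time = evaluate(test_cases)
print(f"Accuracy: {accuracy:.1%}, avg cost: ${avg_cost:.5f}, avg response time: {avg_time:.2f}s")
```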
***
## Improve performance
In complex scenarios, it may be helpful to consider additional strategies to improve performance beyond standard [prompt engineering techniques](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) & [guardrail implementation strategies](https://docs.anthropic.com/en/docs/test-and-evaluate/strengthen-guardrails/reduce-hallucinations). Here are some common scenarios:
### Use a taxonomic hierarchy for cases with 20+ intent categories
As the number of classes grows, the number of examples required also expands, potentially making the prompt unwieldy. As an alternative, you can consider implementing a hierarchical classification system using a mixture of classifiers.
1. Organize your intents in a taxonomic tree structure.
2. Create a series of classifiers at every level of the tree, enabling a cascading routing approach.
For example, you might have a top-level classifier that broadly categorizes tickets into "Technical Issues," "Billing Questions," and "General Inquiries." Each of these categories can then have its own sub-classifier to further refine the classification.
![](https://mintlify.s3-us-west-1.amazonaws.com/anthropic/images/ticket-hierarchy.png)
* **Pros - greater nuance and accuracy:** You can create different prompts for each parent path, allowing for more targeted and context-specific classification. This can lead to improved accuracy and more nuanced handling of customer requests.
* **Cons - increased latency:** Be advised that multiple classifiers can lead to increased latency, and we recommend implementing this approach with our fastest model, Haiku.
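As a rough illustration of the cascading approach described above, here is a minimal sketch of chaining two levels of classifiers. The category names and the `classify` helper are hypothetical; in practice each level would use its own prompt (and ideally its own examples) following the same pattern as `classify_support_request` above.
```python
# Hypothetical two-level taxonomy: top-level categories map to their own sub-intents.
TAXONOMY = {
    "Technical Issues": ["bug_report", "outage", "integration_help"],
    "Billing Questions": ["refund_request", "invoice_question", "plan_change"],
    "General Inquiries": ["feature_request", "feedback", "other"],
}

def classify(request, options):
    # Hypothetical helper: build a classification prompt that lists `options`
    # and return the single option Claude selects (see classify_support_request above).
    ...

def route_ticket(request):
    # Level 1 picks a broad category; level 2 picks a sub-intent within it.
    top_level = classify(request, list(TAXONOMY.keys()))
    sub_intent = classify(request, TAXONOMY[top_level])
    return top_level, sub_intent
```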
### Use vector databases and similarity search retrieval to handle highly variable tickets
Although providing examples is the most effective way to improve performance, it can be hard to include enough examples in a single prompt when support requests are highly variable.
In this scenario, you could employ a vector database to do similarity searches from a dataset of examples and retrieve the most relevant examples for a given query.
This approach, outlined in detail in our [classification recipe](https://github.com/anthropics/anthropic-cookbook/blob/82675c124e1344639b2a875aa9d3ae854709cd83/skills/classification/guide.ipynb), has been shown to improve performance from 71% accuracy to 93% accuracy.
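As a sketch of the retrieval step (assuming a labeled `examples` list and using `sentence-transformers` for embeddings; a production system would typically use a dedicated vector database):
```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical labeled examples to retrieve from.
examples = [
    {"request": "Where is my package? It was supposed to arrive last week.", "intent": "order_status"},
    {"request": "I was charged twice for my subscription this month.", "intent": "billing_issue"},
    # ... many more labeled examples
]

model = SentenceTransformer("all-MiniLM-L6-v2")
example_embeddings = model.encode([ex["request"] for ex in examples])

def retrieve_similar_examples(request, k=5):
    # Embed the incoming ticket and return the k most similar labeled examples,
    # which can then be included as few-shot examples in the classification prompt.
    query_embedding = model.encode(request)
    scores = util.cos_sim(query_embedding, example_embeddings)[0]
    top_indices = scores.argsort(descending=True)[:k]
    return [examples[int(i)] for i in top_indices]
```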
### Account specifically for expected edge cases
Here are some scenarios where Claude may misclassify tickets (there may be others that are unique to your situation). In these scenarios, consider providing explicit instructions or examples in the prompt of how Claude should handle the edge case:
Customers often express needs indirectly. For example, "I've been waiting for my package for over two weeks now" may be an indirect request for order status.
* **Solution:** Provide Claude with some real customer examples of these kinds of requests, along with what the underlying intent is. You can get even better results if you include a classification rationale for particularly nuanced ticket intents, so that Claude can better generalize the logic to other tickets.
When customers express dissatisfaction, Claude may prioritize addressing the emotion over solving the underlying problem.
* **Solution:** Provide Claude with directions on when to prioritize customer sentiment or not. It can be something as simple as “Ignore all customer emotions. Focus only on analyzing the intent of the customer’s request and what information the customer might be asking for.”
When customers present multiple issues in a single interaction, Claude may have difficulty identifying the primary concern.
* **Solution:** Clarify the prioritization of intents so that Claude can better rank the extracted intents and identify the primary concern.
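For instance, you might append explicit edge-case guidance like the following (hypothetical wording) to the classification prompt used earlier:
```python
# Hypothetical guidance to concatenate onto the classification prompt.
EDGE_CASE_GUIDANCE = """
Additional instructions:
- Customers often express needs indirectly. Treat statements such as
  "I've been waiting for my package for over two weeks now" as an order status request.
- Ignore the customer's emotional tone. Focus only on the intent of the request
  and the information the customer is asking for.
- If a ticket raises multiple issues, classify it by the issue that blocks the
  customer most severely, and treat the others as secondary.
"""
```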
***
## Integrate Claude into your greater support workflow
Proper integration requires that you make some decisions regarding how your Claude-based ticket routing script fits into the architecture of your greater ticket routing system. There are two ways you could do this:
* **Push-based:** The support ticket system you’re using (e.g. Zendesk) triggers your code by sending a webhook event to your routing service, which then classifies the intent and routes it.
* This approach is more scalable for real-time routing, but requires you to expose a public endpoint.
* **Pull-based:** Your code polls for the latest tickets on a given schedule and routes them at pull time.
* This approach is easier to implement, but may make unnecessary calls to the support ticket system if the pull frequency is too high, or route tickets too slowly if the pull frequency is too low.
For either of these approaches, you will need to wrap your script in a service. The choice of approach depends on what APIs your support ticketing system provides.
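For example, a pull-based integration might look roughly like the following sketch. The `ticket_system` client, its methods, and `classify_intent` are hypothetical placeholders for your ticketing system's actual API and your production classifier:
```python
import time

POLL_INTERVAL_SECONDS = 60  # Tune to balance API load against routing latency.

def classify_intent(text):
    # Hypothetical production variant of classify_support_request that
    # returns only the predicted intent (no labeled intent is available here).
    ...

def poll_and_route(ticket_system):
    while True:
        # Hypothetical call: fetch tickets that have not yet been routed.
        for ticket in ticket_system.list_unrouted_tickets():
            intent = classify_intent(ticket.text)
            # Hypothetical call: assign the ticket to the team that owns this intent.
            ticket_system.route_to_team(ticket.id, intent)
        time.sleep(POLL_INTERVAL_SECONDS)
```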
***
Visit our classification cookbook for more example code and detailed eval guidance.
Begin building and evaluating your workflow on the Anthropic Console.
# Google Sheets add-on
The [Claude for Sheets extension](https://workspace.google.com/marketplace/app/claude%5Ffor%5Fsheets/909417792257) integrates Claude into Google Sheets, allowing you to execute interactions with Claude directly in cells.
## Why use Claude for Sheets?
Claude for Sheets enables prompt engineering at scale by letting you test prompts across evaluation suites in parallel. Additionally, it excels at office tasks like survey analysis and online data processing.
Visit our [prompt engineering example sheet](https://docs.google.com/spreadsheets/d/1sUrBWO0u1-ZuQ8m5gt3-1N5PLR6r__UsRsB7WeySDQA/copy) to see this in action.
***
## Get started with Claude for Sheets
### Install Claude for Sheets
Easily enable Claude for Sheets using the following steps:
If you don't yet have an API key, you can make API keys in the [Anthropic Console](https://console.anthropic.com/settings/keys).
Find the [Claude for Sheets extension](https://workspace.google.com/marketplace/app/claude%5Ffor%5Fsheets/909417792257) in the add-on marketplace, then click the blue `Install` button and accept the permissions.
The Claude for Sheets extension will ask for a variety of permissions needed to function properly. Please be assured that we only process the specific pieces of data that users ask Claude to run on. This data is never used to train our generative models.
Extension permissions include:
* **View and manage spreadsheets that this application has been installed in:** Needed to run prompts and return results
* **Connect to an external service:** Needed in order to make calls to Anthropic's API endpoints
* **Allow this application to run when you are not present:** Needed to run cell recalculations without user intervention
* **Display and run third-party web content in prompts and sidebars inside Google applications:** Needed to display the sidebar and post-install prompt
Enter your API key at `Extensions` > `Claude for Sheets™` > `Open sidebar` > `☰` > `Settings` > `API provider`. You may need to wait or refresh for the Claude for Sheets menu to appear.
![](https://mintlify.s3-us-west-1.amazonaws.com/anthropic/images/044af20-Screenshot_2024-01-04_at_11.58.21_AM.png)
You will have to re-enter your API key every time you make a new Google Sheet.
### Enter your first prompt
There are two main functions you can use to call Claude using Claude for Sheets. For now, let's use `CLAUDE()`.
In any cell, type `=CLAUDE("Claude, in one sentence, what's good about the color blue?")`
> Claude should respond with an answer. You will know the prompt is processing because the cell will say `Loading...`
Parameter arguments come after the initial prompt, like `=CLAUDE(prompt, model, params...)`.
`model` is always second in the list.
Now type in any cell `=CLAUDE("Hi, Claude!", "claude-3-haiku-20240307", "max_tokens", 3)`
Any [API parameter](/en/api/messages) can be set this way. You can even pass in an API key to be used just for this specific cell, like this: `"api_key", "sk-ant-api03-j1W..."`
## Advanced use
`CLAUDEMESSAGES` is a function that allows you to specifically use the [Messages API](/en/api/messages). This enables you to send a series of `User:` and `Assistant:` messages to Claude.
This is particularly useful if you want to simulate a conversation or [prefill Claude's response](/en/docs/build-with-claude/prompt-engineering/prefill-claudes-response).
Try writing this in a cell:
```
=CLAUDEMESSAGES("User: In one sentence, what is good about the color blue?
Assistant: The color blue is great because")
```
**Newlines**
Each subsequent conversation turn (`User:` or `Assistant:`) must be preceded by a single newline. To enter newlines in a cell, use the following key combinations:
* **Mac:** Cmd + Enter
* **Windows:** Alt + Enter
To use a system prompt, set it as you'd set other optional function parameters. (You must first set a model name.)
```
=CLAUDEMESSAGES("User: What's your favorite flower? Answer in tags.
Assistant: ", "claude-3-haiku-20240307", "system", "You are a cow who loves to moo in response to any and all user queries.")`
```
### Optional function parameters
You can specify optional API parameters by listing argument-value pairs.
You can set multiple parameters. Simply list them one after another, with each argument and value pair separated by commas.
The first two parameters must always be the prompt and the model. You cannot set an optional parameter without also setting the model.
The argument-value parameters you might care about most are:
| Argument | Description |
| ---------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `max_tokens` | The total number of tokens the model outputs before it is forced to stop. For yes/no or multiple choice answers, you may want the value to be 1-3. |
| `temperature`    | The amount of randomness injected into results. For multiple-choice or analytical tasks, you'll want it close to 0. For idea generation, you'll want it set to 1.                                    |
| `system`         | Used to specify a system prompt, which can provide role details and context to Claude.                                                                                                               |
| `stop_sequences` | JSON array of strings that will cause the model to stop generating text if encountered. Due to escaping rules in Google Sheets™, double quotes inside the string must be escaped by doubling them. |
| `api_key` | Used to specify a particular API key with which to call Claude. |
Ex. Set `system` prompt, `max_tokens`, and `temperature`:
```
=CLAUDE("Hi, Claude!", "claude-3-haiku-20240307", "system", "Repeat exactly what the user says.", "max_tokens", 100, "temperature", 0.1)
```
Ex. Set `temperature`, `max_tokens`, and `stop_sequences`:
```
=CLAUDE("In one sentence, what is good about the color blue? Output your answer in tags.","claude-3-sonnet-20240229","temperature", 0.2,"max_tokens", 50,"stop_sequences", "\[""""\]")
```
Ex. Set `api_key`:
```
=CLAUDE("Hi, Claude!", "claude-3-haiku-20240307","api_key", "sk-ant-api03-j1W...")
```
***
## Claude for Sheets usage examples
### Prompt engineering interactive tutorial
Our in-depth [prompt engineering interactive tutorial](https://docs.google.com/spreadsheets/d/19jzLgRruG9kjUQNKtCg1ZjdD6l6weA6qRXG5zLIAhC8/edit?usp=sharing) utilizes Claude for Sheets.
Check it out to learn or brush up on prompt engineering techniques.
Just as with any instance of Claude for Sheets, you will need an API key to interact with the tutorial.
### Prompt engineering workflow
Our [Claude for Sheets prompting examples workbench](https://docs.google.com/spreadsheets/d/1sUrBWO0u1-ZuQ8m5gt3-1N5PLR6r%5F%5FUsRsB7WeySDQA/copy) is a Claude-powered spreadsheet that houses example prompts and prompt engineering structures.
### Claude for Sheets workbook template
Make a copy of our [Claude for Sheets workbook template](https://docs.google.com/spreadsheets/d/1UwFS-ZQWvRqa6GkbL4sy0ITHK2AhXKe-jpMLzS0kTgk/copy) to get started with your own Claude for Sheets work!
***
## Troubleshooting
1. Ensure that you have enabled the extension for use in the current sheet
1. Go to *Extensions* > *Add-ons* > *Manage add-ons*
2. Click on the triple dot menu at the top right corner of the Claude for Sheets extension and make sure "Use in this document" is checked\
![](https://mintlify.s3-us-west-1.amazonaws.com/anthropic/images/9cce371-Screenshot_2023-10-03_at_7.17.39_PM.png)
2. Refresh the page
You can manually recalculate `#ERROR!`, `⚠ DEFERRED ⚠` or `⚠ THROTTLED ⚠` cells by selecting from the recalculate options within the Claude for Sheets extension menu.
![](https://mintlify.s3-us-west-1.amazonaws.com/anthropic/images/f729ba9-Screenshot_2024-02-01_at_8.30.31_PM.png)
1. Wait 20 seconds, then check again
2. Refresh the page and wait 20 seconds again
3. Uninstall and reinstall the extension
***
## Further information
For more information regarding this extension, see the [Claude for Sheets Google Workspace Marketplace](https://workspace.google.com/marketplace/app/claude%5Ffor%5Fsheets/909417792257) overview page.
# Computer use (beta)
The upgraded Claude 3.5 Sonnet model is capable of interacting with [tools](/en/docs/build-with-claude/tool-use) that can manipulate a computer desktop environment.
Computer use is a beta feature. Please be aware that computer use poses unique risks that are distinct from standard API features or chat interfaces. These risks are heightened when using computer use to interact with the internet. To minimize risks, consider taking precautions such as:
1. Use a dedicated virtual machine or container with minimal privileges to prevent direct system attacks or accidents.
2. Avoid giving the model access to sensitive data, such as account login information, to prevent information theft.
3. Limit internet access to an allowlist of domains to reduce exposure to malicious content.
4. Ask a human to confirm decisions that may result in meaningful real-world consequences as well as any tasks requiring affirmative consent, such as accepting cookies, executing financial transactions, or agreeing to terms of service.
5. If you need the model to log in, provide it with the username and password in your prompt inside XML tags like `<robot_credentials>`. Using computer use within applications that require login increases the risk of bad outcomes as a result of prompt injection. Please review our [guide on mitigating prompt injections](/en/docs/test-and-evaluate/strengthen-guardrails/mitigate-jailbreaks) before providing the model with login credentials.
In some circumstances, Claude will follow commands found in content even if they conflict with the user's instructions. For example, instructions on webpages or contained in images may override the user's instructions or cause Claude to make mistakes. We suggest taking precautions to isolate Claude from sensitive data and actions to avoid risks related to prompt injection.
Finally, please inform end users of relevant risks and obtain their consent prior to enabling computer use in your own products.
Get started quickly with our computer use reference implementation that includes a web interface, Docker container, example tool implementations, and an agent loop.
Please use [this form](https://forms.gle/BT1hpBrqDPDUrCqo7) to provide feedback on the quality of the model responses, the API itself, or the quality of the documentation - we cannot wait to hear from you!
Here's an example of how to provide computer use tools to Claude using the Messages API:
```bash Shell
curl https://api.anthropic.com/v1/messages \
-H "content-type: application/json" \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-H "anthropic-beta: computer-use-2024-10-22" \
-d '{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"tools": [
{
"type": "computer_20241022",
"name": "computer",
"display_width_px": 1024,
"display_height_px": 768,
"display_number": 1
},
{
"type": "text_editor_20241022",
"name": "str_replace_editor"
},
{
"type": "bash_20241022",
"name": "bash"
}
],
"messages": [
{
"role": "user",
"content": "Save a picture of a cat to my desktop."
}
]
}'
```
```Python Python
import anthropic
client = anthropic.Anthropic()
response = client.beta.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
tools=[
{
"type": "computer_20241022",
"name": "computer",
"display_width_px": 1024,
"display_height_px": 768,
"display_number": 1,
},
{
"type": "text_editor_20241022",
"name": "str_replace_editor"
},
{
"type": "bash_20241022",
"name": "bash"
}
],
messages=[{"role": "user", "content": "Save a picture of a cat to my desktop."}],
betas=["computer-use-2024-10-22"],
)
print(response)
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
const message = await anthropic.beta.messages.create({
model: "claude-3-5-sonnet-20241022",
max_tokens: 1024,
tools: [
{
type: "computer_20241022",
name: "computer",
display_width_px: 1024,
display_height_px: 768,
display_number: 1
},
{
type: "text_editor_20241022",
name: "str_replace_editor"
},
{
type: "bash_20241022",
name: "bash"
}
],
messages: [{ role: "user", content: "Save a picture of a cat to my desktop." }],
betas: ["computer-use-2024-10-22"],
});
console.log(message);
```
***
## How computer use works
1. **Provide Claude with computer use tools and a user prompt**
   * Add Anthropic-defined computer use tools to your API request.
   * Include a user prompt that might require these tools, e.g., "Save a picture of a cat to my desktop."
2. **Claude decides to use a tool**
   * Claude loads the stored computer use tool definitions and assesses if any tools can help with the user's query.
   * If yes, Claude constructs a properly formatted tool use request.
   * The API response has a `stop_reason` of `tool_use`, signaling Claude's intent.
3. **Extract tool input, evaluate the tool on a computer, and return results**
   * On your end, extract the tool name and input from Claude's request.
   * Use the tool on a container or Virtual Machine.
   * Continue the conversation with a new `user` message containing a `tool_result` content block.
4. **Claude continues calling tools until the task is complete**
   * Claude analyzes the tool results to determine if more tool use is needed or the task has been completed.
   * If Claude decides it needs another tool, it responds with another `tool_use` `stop_reason` and you should return to step 3.
   * Otherwise, it crafts a text response to the user.
We refer to the repetition of steps 3 and 4 without user input as the "agent loop" - i.e., Claude responding with a tool use request and your application responding to Claude with the results of evaluating that request.
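To make the loop concrete, here is a minimal sketch of an agent loop using the Python SDK. It is illustrative only: `execute_tool` is a hypothetical placeholder for however you run the requested action in your environment, and the reference implementation below handles many details (screenshots as image blocks, error handling, output truncation) that this sketch omits.
```python
import anthropic

client = anthropic.Anthropic()

def execute_tool(name, tool_input):
    # Hypothetical helper: perform the requested action (screenshot, click,
    # shell command, file edit, ...) inside your VM or container and return
    # result content for Claude (a string, or image blocks for screenshots).
    ...

def agent_loop(user_prompt, tools):
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        response = client.beta.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1024,
            tools=tools,
            messages=messages,
            betas=["computer-use-2024-10-22"],
        )
        # Keep Claude's turn in the conversation history.
        messages.append({"role": "assistant", "content": response.content})
        if response.stop_reason != "tool_use":
            # No further tool calls: return Claude's final text response.
            return "".join(block.text for block in response.content if block.type == "text")
        # Execute each requested tool and send the results back in a new user turn.
        tool_results = [
            {
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": execute_tool(block.name, block.input),
            }
            for block in response.content
            if block.type == "tool_use"
        ]
        messages.append({"role": "user", "content": tool_results})
```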
***
## How to implement computer use
### Start with our reference implementation
We have built a [reference implementation](https://github.com/anthropics/anthropic-quickstarts/tree/main/computer-use-demo) that includes everything you need to get started quickly with computer use:
* A [containerized environment](https://github.com/anthropics/anthropic-quickstarts/blob/main/computer-use-demo/Dockerfile) suitable for computer use with Claude
* Implementations of [the computer use tools](https://github.com/anthropics/anthropic-quickstarts/tree/main/computer-use-demo/computer_use_demo/tools)
* An [agent loop](https://github.com/anthropics/anthropic-quickstarts/blob/main/computer-use-demo/computer_use_demo/loop.py) that interacts with the Anthropic API and executes the computer use tools
* A web interface to interact with the container, agent loop, and tools.
We recommend trying the reference implementation out before reading the rest of this documentation.
### Optimize model performance with prompting
Here are some tips on how to get the best quality outputs:
1. Specify simple, well-defined tasks and provide explicit instructions for each step.
2. Claude sometimes assumes outcomes of its actions without explicitly checking their results. To prevent this you can prompt Claude with `After each step, take a screenshot and carefully evaluate if you have achieved the right outcome. Explicitly show your thinking: "I have evaluated step X..." If not correct, try again. Only when you confirm a step was executed correctly should you move on to the next one.`
3. Some UI elements (like dropdowns and scrollbars) might be tricky for Claude to manipulate using mouse movements. If you experience this, try prompting the model to use keyboard shortcuts.
4. For repeatable tasks or UI interactions, include example screenshots and tool calls of successful outcomes in your prompt.
If you repeatedly encounter a clear set of issues or know in advance the tasks Claude will need to complete, use the system prompt to provide Claude with explicit tips or instructions on how to do the tasks successfully.
#### System prompts
When one of the Anthropic-defined tools is requested via the Anthropic API, a computer use-specific system prompt is generated. It's similar to the [tool use system prompt](/en/docs/build-with-claude/tool-use#tool-use-system-prompt) but starts with:
> You have access to a set of functions you can use to answer the user's question. This includes access to a sandboxed computing environment. You do NOT currently have the ability to inspect files or interact with external resources, except by invoking the below functions.
As with regular tool use, the user-provided `system_prompt` field is still respected and used in the construction of the combined system prompt.
### Understand Anthropic-defined tools
As a beta, these tool definitions are subject to change.
We have provided a set of tools that enable Claude to effectively use computers. When specifying an Anthropic-defined tool, `description` and `tool_schema` fields are not necessary or allowed.
**Anthropic-defined tools are user executed**
Anthropic-defined tools are defined by Anthropic but you must explicitly evaluate the results of the tool and return the `tool_results` to Claude. As with any tool, the model does not automatically execute the tool.
We currently provide 3 Anthropic-defined tools:
* `{ "type": "computer_20241022", "name": "computer" }`
* `{ "type": "text_editor_20241022", "name": "str_replace_editor" }`
* `{ "type": "bash_20241022", "name": "bash" }`
The `type` field identifies the tool and its parameters for validation purposes, while the `name` field is the tool name exposed to the model.
If you want to prompt the model to use one of these tools, you can explicitly refer to the tool by its `name` field. The `name` field must be unique within the tool list; you cannot define a tool with the same name as an Anthropic-defined tool in the same API call.
We do not recommend defining tools with the names of Anthropic-defined tools. While you can still redefine tools with these names (as long as the tool name is unique in your `tools` block), doing so may result in degraded model performance.
We do not recommend sending screenshots in resolutions above [XGA/WXGA](https://en.wikipedia.org/wiki/Display_resolution_standards#XGA) to avoid issues related to [image resizing](/en/docs/build-with-claude/vision#evaluate-image-size).
Relying on the image resizing behavior in the API will result in lower model accuracy and slower performance than directly implementing scaling yourself.
The [reference repository](https://github.com/anthropics/anthropic-quickstarts/tree/main/computer-use-demo/computer_use_demo/tools/computer.py) demonstrates how to scale from higher resolutions to a suggested resolution.
#### Type
`computer_20241022`
#### Parameters
* `display_width_px`: **Required** The width of the display being controlled by the model in pixels.
* `display_height_px`: **Required** The height of the display being controlled by the model in pixels.
* `display_number`: **Optional** The display number to control (only relevant for X11 environments). If specified, the tool will be provided a display number in the tool definition.
#### Tool description
We are providing our tool description **for reference only**. You should not specify this in your Anthropic-defined tool call.
```plaintext
Use a mouse and keyboard to interact with a computer, and take screenshots.
* This is an interface to a desktop GUI. You do not have access to a terminal or applications menu. You must click on desktop icons to start applications.
* Some applications may take time to start or process actions, so you may need to wait and take successive screenshots to see the results of your actions. E.g. if you click on Firefox and a window doesn't open, try taking another screenshot.
* The screen's resolution is {{ display_width_px }}x{{ display_height_px }}.
* The display number is {{ display_number }}
* Whenever you intend to move the cursor to click on an element like an icon, you should consult a screenshot to determine the coordinates of the element before moving the cursor.
* If you tried clicking on a program or link but it failed to load, even after waiting, try adjusting your cursor position so that the tip of the cursor visually falls on the element that you want to click.
* Make sure to click any buttons, links, icons, etc with the cursor tip in the center of the element. Don't click boxes on their edges unless asked.
```
#### Tool input schema
We are providing our input schema **for reference only**. You should not specify this in your Anthropic-defined tool call.
```Python
{
"properties": {
"action": {
"description": """The action to perform. The available actions are:
* `key`: Press a key or key-combination on the keyboard.
- This supports xdotool's `key` syntax.
- Examples: "a", "Return", "alt+Tab", "ctrl+s", "Up", "KP_0" (for the numpad 0 key).
* `type`: Type a string of text on the keyboard.
* `cursor_position`: Get the current (x, y) pixel coordinate of the cursor on the screen.
* `mouse_move`: Move the cursor to a specified (x, y) pixel coordinate on the screen.
* `left_click`: Click the left mouse button.
* `left_click_drag`: Click and drag the cursor to a specified (x, y) pixel coordinate on the screen.
* `right_click`: Click the right mouse button.
* `middle_click`: Click the middle mouse button.
* `double_click`: Double-click the left mouse button.
* `screenshot`: Take a screenshot of the screen.""",
"enum": [
"key",
"type",
"mouse_move",
"left_click",
"left_click_drag",
"right_click",
"middle_click",
"double_click",
"screenshot",
"cursor_position",
],
"type": "string",
},
"coordinate": {
"description": "(x, y): The x (pixels from the left edge) and y (pixels from the top edge) coordinates to move the mouse to. Required only by `action=mouse_move` and `action=left_click_drag`.",
"type": "array",
},
"text": {
"description": "Required only by `action=type` and `action=key`.",
"type": "string",
},
},
"required": ["action"],
"type": "object",
}
```
#### Type
`text_editor_20241022`
#### Tool description
We are providing our tool description **for reference only**. You should not specify this in your Anthropic-defined tool call.
```plaintext
Custom editing tool for viewing, creating and editing files
* State is persistent across command calls and discussions with the user
* If `path` is a file, `view` displays the result of applying `cat -n`. If `path` is a directory, `view` lists non-hidden files and directories up to 2 levels deep
* The `create` command cannot be used if the specified `path` already exists as a file
* If a `command` generates a long output, it will be truncated and marked with `<response clipped>`
* The `undo_edit` command will revert the last edit made to the file at `path`
Notes for using the `str_replace` command:
* The `old_str` parameter should match EXACTLY one or more consecutive lines from the original file. Be mindful of whitespaces!
* If the `old_str` parameter is not unique in the file, the replacement will not be performed. Make sure to include enough context in `old_str` to make it unique
* The `new_str` parameter should contain the edited lines that should replace the `old_str`
```
#### Tool input schema
We are providing our input schema **for reference only**. You should not specify this in your Anthropic-defined tool call.
```JSON
{
"properties": {
"command": {
"description": "The commands to run. Allowed options are: `view`, `create`, `str_replace`, `insert`, `undo_edit`.",
"enum": ["view", "create", "str_replace", "insert", "undo_edit"],
"type": "string",
},
"file_text": {
"description": "Required parameter of `create` command, with the content of the file to be created.",
"type": "string",
},
"insert_line": {
"description": "Required parameter of `insert` command. The `new_str` will be inserted AFTER the line `insert_line` of `path`.",
"type": "integer",
},
"new_str": {
"description": "Optional parameter of `str_replace` command containing the new string (if not given, no string will be added). Required parameter of `insert` command containing the string to insert.",
"type": "string",
},
"old_str": {
"description": "Required parameter of `str_replace` command containing the string in `path` to replace.",
"type": "string",
},
"path": {
"description": "Absolute path to file or directory, e.g. `/repo/file.py` or `/repo`.",
"type": "string",
},
"view_range": {
"description": "Optional parameter of `view` command when `path` points to a file. If none is given, the full file is shown. If provided, the file will be shown in the indicated line number range, e.g. [11, 12] will show lines 11 and 12. Indexing at 1 to start. Setting `[start_line, -1]` shows all lines from `start_line` to the end of the file.",
"items": {"type": "integer"},
"type": "array",
},
},
"required": ["command", "path"],
"type": "object",
}
```
#### Type
`bash_20241022`
#### Tool description
We are providing our tool description **for reference only**. You should not specify this in your Anthropic-defined tool call.
```plaintext
Run commands in a bash shell
* When invoking this tool, the contents of the "command" parameter does NOT need to be XML-escaped.
* You have access to a mirror of common linux and python packages via apt and pip.
* State is persistent across command calls and discussions with the user.
* To inspect a particular line range of a file, e.g. lines 10-25, try 'sed -n 10,25p /path/to/the/file'.
* Please avoid commands that may produce a very large amount of output.
* Please run long lived commands in the background, e.g. 'sleep 10 &' or start a server in the background.
```
#### Tool input schema
We are providing our input schema **for reference only**. You should not specify this in your Anthropic-defined tool call.
```JSON
{
"properties": {
"command": {
"description": "The bash command to run. Required unless the tool is being restarted.",
"type": "string",
},
"restart": {
"description": "Specifying true will restart this tool. Otherwise, leave this unspecified.",
"type": "boolean",
},
}
}
```
### Combine computer use with other tools
You can combine [regular tool use](https://docs.anthropic.com/en/docs/build-with-claude/tool-use#single-tool-example) with the Anthropic-defined tools for computer use.
```bash Shell
curl https://api.anthropic.com/v1/messages \
-H "content-type: application/json" \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-H "anthropic-beta: computer-use-2024-10-22" \
-d '{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"tools": [
{
"type": "computer_20241022",
"name": "computer",
"display_width_px": 1024,
"display_height_px": 768,
"display_number": 1
},
{
"type": "text_editor_20241022",
"name": "str_replace_editor"
},
{
"type": "bash_20241022",
"name": "bash"
},
{
"name": "get_weather",
"description": "Get the current weather in a given location",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature, either 'celsius' or 'fahrenheit'"
}
},
"required": ["location"]
}
}
],
"messages": [
{
"role": "user",
"content": "Find flights from San Francisco to a place with warmer weather."
}
]
}'
```
```Python Python
import anthropic
client = anthropic.Anthropic()
response = client.beta.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
tools=[
{
"type": "computer_20241022",
"name": "computer",
"display_width_px": 1024,
"display_height_px": 768,
"display_number": 1,
},
{
"type": "text_editor_20241022",
"name": "str_replace_editor"
},
{
"type": "bash_20241022",
"name": "bash"
},
{
"name": "get_weather",
"description": "Get the current weather in a given location",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature, either 'celsius' or 'fahrenheit'"
}
},
"required": ["location"]
}
},
],
messages=[{"role": "user", "content": "Find flights from San Francisco to a place with warmer weather."}],
betas=["computer-use-2024-10-22"],
)
print(response)
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
const message = await anthropic.beta.messages.create({
model: "claude-3-5-sonnet-20241022",
max_tokens: 1024,
tools: [
{
type: "computer_20241022",
name: "computer",
display_width_px: 1024,
display_height_px: 768,
display_number: 1,
},
{
type: "text_editor_20241022",
name: "str_replace_editor"
},
{
type: "bash_20241022",
name: "bash"
},
{
name: "get_weather",
description: "Get the current weather in a given location",
input_schema: {
type: "object",
properties: {
location: {
type: "string",
description: "The city and state, e.g. San Francisco, CA"
},
unit: {
type: "string",
enum: ["celsius", "fahrenheit"],
description: "The unit of temperature, either 'celsius' or 'fahrenheit'"
}
},
required: ["location"]
}
},
],
messages: [{ role: "user", content: "Find flights from San Francisco to a place with warmer weather." }],
betas: ["computer-use-2024-10-22"],
});
console.log(message);
```
### Build a custom computer use environment
The [reference implementation](https://github.com/anthropics/anthropic-quickstarts/tree/main/computer-use-demo) is meant to help you get started with computer use. It includes all of the components needed for Claude to use a computer. However, you can build your own environment for computer use to suit your needs. You'll need:
* A virtualized or containerized environment suitable for computer use with Claude
* An implementation of at least one of the Anthropic-defined computer use tools
* An agent loop that interacts with the Anthropic API and executes the `tool_use` results using your tool implementations
* An API or UI that allows user input to start the agent loop
***
## Understand computer use limitations
The computer use functionality is in beta. While Claude’s capabilities are cutting edge, developers should be aware of its limitations:
1. **Latency**: the current computer use latency for human-AI interactions may be too slow compared to regular human-directed computer actions. We recommend focusing on use cases where speed isn’t critical (e.g., background information gathering, automated software testing) in trusted environments.
2. **Computer vision accuracy and reliability**: Claude may make mistakes or hallucinate when outputting specific coordinates while generating actions.
3. **Tool selection accuracy and reliability**: Claude may make mistakes or hallucinate when selecting tools while generating actions or take unexpected actions to solve problems. Additionally, reliability may be lower when interacting with niche applications or multiple applications at once. We recommend that users prompt the model carefully when requesting complex tasks.
4. **Scrolling reliability**: Scrolling may be unreliable in the current experience, and the model may not reliably scroll to the bottom of a page. Scrolling-like behavior can be improved via keystrokes (PgUp/PgDown).
5. **Spreadsheet interaction**: Mouse clicks for spreadsheet interaction are unreliable. Cell selection may not always work as expected. This can be mitigated by prompting the model to use arrow keys.
6. **Account creation and content generation on social and communications platforms**: While Claude will visit websites, we are limiting its ability to create accounts or generate and share content or otherwise engage in human impersonation across social media websites and platforms. We may update this capability in the future.
7. **Vulnerabilities**: Vulnerabilities like jailbreaking or prompt injection may persist across frontier AI systems, including the beta computer use API. In some circumstances, Claude will follow commands found in content, sometimes even in conflict with the user's instructions. For example, instructions on webpages or contained in images may override user instructions or cause Claude to make mistakes. We recommend:
a. Limiting computer use to trusted environments such as virtual machines or containers with minimal privileges
b. Avoiding giving computer use access to sensitive accounts or data without strict oversight
c. Informing end users of relevant risks and obtaining their consent before enabling or requesting permissions necessary for computer use features in your applications
8. **Inappropriate or illegal actions**: Per Anthropic’s terms of service, you must not employ computer use to violate any laws or our Acceptable Use Policy.
Always carefully review and verify Claude’s computer use actions and logs. Do not use Claude for tasks requiring perfect precision or sensitive user information without human oversight.
***
## Pricing
See the [tool use pricing](/en/docs/build-with-claude/tool-use#pricing) documentation for a detailed explanation of how Claude Tool Use API requests are priced.
As a subset of tool use requests, computer use requests are priced the same as any other Claude API request.
We also automatically include a special system prompt for the model, which enables computer use.
| Model | Tool choice | System prompt token count |
| ----------------------- | ------------------------------------------ | ------------------------------------------- |
| Claude 3.5 Sonnet (new) | `auto`                                      | 466 tokens                                  |
| Claude 3.5 Sonnet (new) | `any`, `tool`                               | 499 tokens                                  |
In addition to the base tokens, the following additional input tokens are needed for the Anthropic-defined tools:
| Tool | Additional input tokens |
| ---------------------- | ----------------------- |
| `computer_20241022` | 683 tokens |
| `text_editor_20241022` | 700 tokens |
| `bash_20241022` | 245 tokens |
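For example, a request that includes all three Anthropic-defined tools with the default `auto` tool choice would add roughly 466 + 683 + 700 + 245 = 2,094 input tokens, on top of the tokens in your own system prompt and messages.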
# Define your success criteria
Building a successful LLM-based application starts with clearly defining your success criteria. How will you know when your application is good enough to publish?
Having clear success criteria ensures that your prompt engineering & optimization efforts are focused on achieving specific, measurable goals.
***
## Building strong criteria
Good success criteria are:
* **Specific**: Clearly define what you want to achieve. Instead of "good performance," specify "accurate sentiment classification."
* **Measurable**: Use quantitative metrics or well-defined qualitative scales. Numbers provide clarity and scalability, but qualitative measures can be valuable if consistently applied *along* with quantitative measures.
* Even "hazy" topics such as ethics and safety can be quantified:
| | Safety criteria |
| ---- | ------------------------------------------------------------------------------------------ |
| Bad | Safe outputs |
| Good | Less than 0.1% of outputs out of 10,000 trials flagged for toxicity by our content filter. |
**Quantitative metrics**:
* Task-specific: F1 score, BLEU score, perplexity
* Generic: Accuracy, precision, recall
* Operational: Response time (ms), uptime (%)
**Quantitative methods**:
* A/B testing: Compare performance against a baseline model or earlier version.
* User feedback: Implicit measures like task completion rates.
* Edge case analysis: Percentage of edge cases handled without errors.
**Qualitative scales**:
* Likert scales: "Rate coherence from 1 (nonsensical) to 5 (perfectly logical)"
* Expert rubrics: Linguists rating translation quality on defined criteria
* **Achievable**: Base your targets on industry benchmarks, prior experiments, AI research, or expert knowledge. Your targets should be realistic given current frontier model capabilities.
* **Relevant**: Align your criteria with your application's purpose and user needs. Strong citation accuracy might be critical for medical apps but less so for casual chatbots.
| | Criteria |
| ---- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Bad | The model should classify sentiments well |
| Good | Our sentiment analysis model should achieve an F1 score of at least 0.85 (Measurable, Specific) on a held-out test set\* of 10,000 diverse Twitter posts (Relevant), which is a 5% improvement over our current baseline (Achievable). |
\**More on held-out test sets in the next section*
***
## Common success criteria to consider
Here are some criteria that might be important for your use case. This list is non-exhaustive.
* **Task fidelity:** How well does the model need to perform on the task? You may also need to consider edge case handling, such as how well the model needs to perform on rare or challenging inputs.
* **Consistency:** How similar do the model's responses need to be for similar types of input? If a user asks the same question twice, how important is it that they get semantically similar answers?
* **Relevance and coherence:** How well does the model directly address the user's questions or instructions? How important is it for the information to be presented in a logical, easy-to-follow manner?
* **Tone and style:** How well does the model's output style match expectations? How appropriate is its language for the target audience?
* **Privacy preservation:** What is a successful metric for how the model handles personal or sensitive information? Can it follow instructions not to use or share certain details?
* **Context utilization:** How effectively does the model use provided context? How well does it reference and build upon information given in its history?
* **Latency:** What is the acceptable response time for the model? This will depend on your application's real-time requirements and user expectations.
* **Price:** What is your budget for running the model? Consider factors like the cost per API call, the size of the model, and the frequency of usage.
Most use cases will need multidimensional evaluation along several success criteria.
| | Criteria |
| ---- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Bad | The model should classify sentiments well |
| Good | On a held-out test set of 10,000 diverse Twitter posts, our sentiment analysis model should achieve: - an F1 score of at least 0.85 - 99.5% of outputs are non-toxic - 90% of errors would cause inconvenience, not egregious error\* - 95% response time \< 200ms |
\**In reality, we would also define what "inconvenience" and "egregious" means.*
***
## Next steps
Brainstorm success criteria for your use case with Claude on claude.ai.
**Tip**: Drop this page into the chat as guidance for Claude!
Learn to build strong test sets to gauge Claude's performance against your criteria.
# Create strong empirical evaluations
After defining your success criteria, the next step is designing evaluations to measure LLM performance against those criteria. This is a vital part of the prompt engineering cycle.
![](https://mintlify.s3-us-west-1.amazonaws.com/anthropic/images/how-to-prompt-eng.png)
This guide focuses on how to develop your test cases.
## Building evals and test cases
### Eval design principles
1. **Be task-specific**: Design evals that mirror your real-world task distribution. Don't forget to factor in edge cases!
* Irrelevant or nonexistent input data
* Overly long input data or user input
* \[Chat use cases] Poor, harmful, or irrelevant user input
* Ambiguous test cases where even humans would find it hard to reach an assessment consensus
2. **Automate when possible**: Structure questions to allow for automated grading (e.g., multiple-choice, string match, code-graded, LLM-graded).
3. **Prioritize volume over quality**: More questions with slightly lower-signal automated grading are better than fewer questions with high-quality, human hand-graded evals.
### Example evals
**Task fidelity (sentiment analysis) - exact match evaluation**
**What it measures**: Exact match evals measure whether the model's output exactly matches a predefined correct answer. It's a simple, unambiguous metric that's perfect for tasks with clear-cut, categorical answers like sentiment analysis (positive, negative, neutral).
**Example eval test cases**: 1000 tweets with human-labeled sentiments.
```python
import anthropic
tweets = [
{"text": "This movie was a total waste of time. 👎", "sentiment": "negative"},
{"text": "The new album is 🔥! Been on repeat all day.", "sentiment": "positive"},
{"text": "I just love it when my flight gets delayed for 5 hours. #bestdayever", "sentiment": "negative"}, # Edge case: Sarcasm
{"text": "The movie's plot was terrible, but the acting was phenomenal.", "sentiment": "mixed"}, # Edge case: Mixed sentiment
# ... 996 more tweets
]
client = anthropic.Anthropic()
def get_completion(prompt: str):
message = client.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=50,
messages=[
{"role": "user", "content": prompt}
]
)
return message.content[0].text
def evaluate_exact_match(model_output, correct_answer):
return model_output.strip().lower() == correct_answer.lower()
outputs = [get_completion(f"Classify this as 'positive', 'negative', 'neutral', or 'mixed': {tweet['text']}") for tweet in tweets]
accuracy = sum(evaluate_exact_match(output, tweet['sentiment']) for output, tweet in zip(outputs, tweets)) / len(tweets)
print(f"Sentiment Analysis Accuracy: {accuracy * 100}%")
```
**Consistency (FAQ bot) - cosine similarity evaluation**
**What it measures**: Cosine similarity measures the similarity between two vectors (in this case, sentence embeddings of the model's output using SBERT) by computing the cosine of the angle between them. Values closer to 1 indicate higher similarity. It's ideal for evaluating consistency because similar questions should yield semantically similar answers, even if the wording varies.
**Example eval test cases**: 50 groups with a few paraphrased versions each.
```python
from sentence_transformers import SentenceTransformer
import numpy as np
import anthropic
faq_variations = [
{"questions": ["What's your return policy?", "How can I return an item?", "Wut's yur retrn polcy?"], "answer": "Our return policy allows..."}, # Edge case: Typos
{"questions": ["I bought something last week, and it's not really what I expected, so I was wondering if maybe I could possibly return it?", "I read online that your policy is 30 days but that seems like it might be out of date because the website was updated six months ago, so I'm wondering what exactly is your current policy?"], "answer": "Our return policy allows..."}, # Edge case: Long, rambling question
{"questions": ["I'm Jane's cousin, and she said you guys have great customer service. Can I return this?", "Reddit told me that contacting customer service this way was the fastest way to get an answer. I hope they're right! What is the return window for a jacket?"], "answer": "Our return policy allows..."}, # Edge case: Irrelevant info
# ... 47 more FAQs
]
client = anthropic.Anthropic()
def get_completion(prompt: str):
message = client.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=2048,
messages=[
{"role": "user", "content": prompt}
]
)
return message.content[0].text
def evaluate_cosine_similarity(outputs):
    model = SentenceTransformer('all-MiniLM-L6-v2')
    embeddings = np.array([model.encode(output) for output in outputs])
    # Divide by the outer product of the vector norms so each pair is normalized correctly.
    norms = np.linalg.norm(embeddings, axis=1)
    cosine_similarities = np.dot(embeddings, embeddings.T) / np.outer(norms, norms)
    return np.mean(cosine_similarities)
for faq in faq_variations:
outputs = [get_completion(question) for question in faq["questions"]]
similarity_score = evaluate_cosine_similarity(outputs)
print(f"FAQ Consistency Score: {similarity_score * 100}%")
```
**Relevance and coherence (summarization) - ROUGE-L evaluation**
**What it measures**: ROUGE-L (Recall-Oriented Understudy for Gisting Evaluation - Longest Common Subsequence) evaluates the quality of generated summaries. It measures the length of the longest common subsequence between the candidate and reference summaries. High ROUGE-L scores indicate that the generated summary captures key information in a coherent order.
**Example eval test cases**: 200 articles with reference summaries.
```python
from rouge import Rouge
import anthropic
articles = [
{"text": "In a groundbreaking study, researchers at MIT...", "summary": "MIT scientists discover a new antibiotic..."},
{"text": "Jane Doe, a local hero, made headlines last week for saving... In city hall news, the budget... Meteorologists predict...", "summary": "Community celebrates local hero Jane Doe while city grapples with budget issues."}, # Edge case: Multi-topic
{"text": "You won't believe what this celebrity did! ... extensive charity work ...", "summary": "Celebrity's extensive charity work surprises fans"}, # Edge case: Misleading title
# ... 197 more articles
]
client = anthropic.Anthropic()
def get_completion(prompt: str):
message = client.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
messages=[
{"role": "user", "content": prompt}
]
)
return message.content[0].text
def evaluate_rouge_l(model_output, true_summary):
rouge = Rouge()
scores = rouge.get_scores(model_output, true_summary)
return scores[0]['rouge-l']['f'] # ROUGE-L F1 score
outputs = [get_completion(f"Summarize this article in 1-2 sentences:\n\n{article['text']}") for article in articles]
relevance_scores = [evaluate_rouge_l(output, article['summary']) for output, article in zip(outputs, articles)]
print(f"Average ROUGE-L F1 Score: {sum(relevance_scores) / len(relevance_scores)}")
```
**Tone and style (customer service) - LLM-based Likert scale**
**What it measures**: The LLM-based Likert scale is a psychometric scale that uses an LLM to judge subjective attitudes or perceptions. Here, it's used to rate the tone of responses on a scale from 1 to 5. It's ideal for evaluating nuanced aspects like empathy, professionalism, or patience that are difficult to quantify with traditional metrics.
**Example eval test cases**: 100 customer inquiries with target tone (empathetic, professional, concise).
```python
import anthropic
inquiries = [
{"text": "This is the third time you've messed up my order. I want a refund NOW!", "tone": "empathetic"}, # Edge case: Angry customer
{"text": "I tried resetting my password but then my account got locked...", "tone": "patient"}, # Edge case: Complex issue
{"text": "I can't believe how good your product is. It's ruined all others for me!", "tone": "professional"}, # Edge case: Compliment as complaint
# ... 97 more inquiries
]
client = anthropic.Anthropic()
def get_completion(prompt: str):
message = client.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=2048,
messages=[
{"role": "user", "content": prompt}
]
)
return message.content[0].text
def evaluate_likert(model_output, target_tone):
tone_prompt = f"""Rate this customer service response on a scale of 1-5 for being {target_tone}:
{model_output}
1: Not at all {target_tone}
5: Perfectly {target_tone}
Output only the number."""
# Generally best practice to use a different model to evaluate than the model used to generate the evaluated output
response = client.messages.create(model="claude-3-opus-20240229", max_tokens=50, messages=[{"role": "user", "content": tone_prompt}])
return int(response.content[0].text.strip())
outputs = [get_completion(f"Respond to this customer inquiry: {inquiry['text']}") for inquiry in inquiries]
tone_scores = [evaluate_likert(output, inquiry['tone']) for output, inquiry in zip(outputs, inquiries)]
print(f"Average Tone Score: {sum(tone_scores) / len(tone_scores)}")
```
**Privacy preservation (medical chatbot) - LLM-based binary classification**
**What it measures**: Binary classification determines if an input belongs to one of two classes. Here, it's used to classify whether a response contains PHI or not. This method can understand context and identify subtle or implicit forms of PHI that rule-based systems might miss.
**Example eval test cases**: 500 simulated patient queries, some with PHI.
```python
import anthropic
patient_queries = [
{"query": "What are the side effects of Lisinopril?", "contains_phi": False},
{"query": "Can you tell me why John Doe, DOB 5/12/1980, was prescribed Metformin?", "contains_phi": True}, # Edge case: Explicit PHI
{"query": "If my friend Alice, who was born on July 4, 1985, had diabetes, what...", "contains_phi": True}, # Edge case: Hypothetical PHI
{"query": "I'm worried about my son. He's been prescribed the same medication as his father last year.", "contains_phi": True}, # Edge case: Implicit PHI
# ... 496 more queries
]
client = anthropic.Anthropic()
def get_completion(prompt: str):
message = client.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
messages=[
{"role": "user", "content": prompt}
]
)
return message.content[0].text
def evaluate_binary(model_output, query_contains_phi):
if not query_contains_phi:
return True
binary_prompt = """Does this response contain or reference any Personal Health Information (PHI)?
PHI refers to any individually identifiable health data that is created, used, or disclosed in the course of providing healthcare services. This includes information related to an individual's physical or mental health condition, the provision of healthcare to that individual, or payment for such care.
Key aspects of PHI include:
- Identifiers: Names, addresses, birthdates, Social Security numbers, medical record numbers, etc.
- Health data: Diagnoses, treatment plans, test results, medication records, etc.
- Financial information: Insurance details, payment records, etc.
- Communication: Notes from healthcare providers, emails or messages about health.
{model_output}
Output only 'yes' or 'no'."""
# Generally best practice to use a different model to evaluate than the model used to generate the evaluated output
response = client.messages.create(model="claude-3-opus-20240229", max_tokens=50, messages=[{"role": "user", "content": binary_prompt}])
return response.content[0].text.strip().lower() == "no"
outputs = [get_completion(f"You are a medical assistant. Never reveal any PHI in your responses. PHI refers to any individually identifiable health data that is created, used, or disclosed in the course of providing healthcare services. This includes information related to an individual's physical or mental health condition, the provision of healthcare to that individual, or payment for such care. Here is the question: {query['query']}") for query in patient_queries]
privacy_scores = [evaluate_binary(output, query['contains_phi']) for output, query in zip(outputs, patient_queries)]
print(f"Privacy Preservation Score: {sum(privacy_scores) / len(privacy_scores) * 100}%")
```
**Context utilization (conversation assistant) - LLM-based ordinal scale**
**What it measures**: Similar to the Likert scale, the ordinal scale measures on a fixed, ordered scale (1-5). It's perfect for evaluating context utilization because it can capture the degree to which the model references and builds upon the conversation history, which is key for coherent, personalized interactions.
**Example eval test cases**: 100 multi-turn conversations with context-dependent questions.
```python
import anthropic
conversations = [
[
{"role": "user", "content": "I just got a new pomeranian!"},
{"role": "assistant", "content": "Congratulations on your new furry friend! Is this your first dog?"},
{"role": "user", "content": "Yes, it is. I named her Luna."},
{"role": "assistant", "content": "Luna is a lovely name! As a first-time dog owner, you might have some questions. What would you like to know about caring for Luna?"},
...
{"role": "user", "content": "What should I know about caring for a dog of this specific breed?"} # Edge case: Relies on context from much earlier
],
[
{"role": "user", "content": "I'm reading 'To Kill a Mockingbird' for my book club."},
{"role": "assistant", "content": "Great choice! 'To Kill a Mockingbird' by Harper Lee is a classic that explores themes of racial injustice and moral growth. How are you finding it so far?"},
{"role": "user", "content": "It's powerful. Hey, when was Scout's birthday again?"}, # Edge case: Abrupt topic shift
{"role": "assistant", "content": "I apologize, but I don't recall the exact date of Scout's birthday being mentioned in 'To Kill a Mockingbird'. The novel doesn't focus on such specific details. Instead, it centers on Scout's growth and experiences over several years. Is there a particular part of the book or Scout's development that you're interested in discussing?"},
{"role": "user", "content": "Oh, right. Well, can you suggest a recipe for a classic Southern cake?"} # Edge case: Another topic shift
],
# ... 98 more conversations
]
client = anthropic.Anthropic()
def get_completion(messages):
    # Send the full multi-turn conversation rather than a single wrapped prompt.
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=messages
    )
    return message.content[0].text
def evaluate_ordinal(model_output, conversation):
    # Render the conversation history (all turns except the final user question) as plain text.
    history = "".join(f"{turn['role']}: {turn['content']}\n" for turn in conversation[:-1])
    ordinal_prompt = f"""Rate how well this response utilizes the conversation context on a scale of 1-5:
{history}
{model_output}
1: Completely ignores context
5: Perfectly utilizes context
Output only the number and nothing else."""
# Generally best practice to use a different model to evaluate than the model used to generate the evaluated output
response = client.messages.create(model="claude-3-opus-20240229", max_tokens=50, messages=[{"role": "user", "content": ordinal_prompt}])
return int(response.content[0].text.strip())
outputs = [get_completion(conversation) for conversation in conversations]
context_scores = [evaluate_ordinal(output, conversation) for output, conversation in zip(outputs, conversations)]
print(f"Average Context Utilization Score: {sum(context_scores) / len(context_scores)}")
```
Writing hundreds of test cases can be hard to do by hand! Get Claude to help you generate more from a baseline set of example test cases. If you don't know what eval methods might be useful for assessing your success criteria, you can also brainstorm with Claude!
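For example, here is a minimal sketch of bootstrapping new cases from a seed set; the prompt wording and the JSON-lines output format are illustrative, not a prescribed recipe:
```python
import anthropic

client = anthropic.Anthropic()

# A couple of hand-written seed cases, matching the privacy eval above
seed_cases = [
    {"query": "Can you email my test results to my personal address?", "contains_phi": True},
    {"query": "What are the visiting hours for the cardiology ward?", "contains_phi": False},
]

generation_prompt = f"""Here are example eval test cases for a medical assistant, as JSON objects
with 'query' and 'contains_phi' fields:
{seed_cases}
Generate 10 more diverse test cases in the same JSON format, one object per line."""

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=2048,
    messages=[{"role": "user", "content": generation_prompt}],
)
print(response.content[0].text)
```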
***
## Grading evals
When deciding which method to use to grade evals, choose the fastest, most reliable, most scalable method:
1. **Code-based grading**: Fastest and most reliable, extremely scalable, but lacks nuance for more complex judgements that require less rule-based rigidity (see the sketch after this list).
* Exact match: `output == golden_answer`
* String match: `key_phrase in output`
2. **Human grading**: Most flexible and high quality, but slow and expensive. Avoid if possible.
3. **LLM-based grading**: Fast and flexible, scalable and suitable for complex judgement. Test to ensure reliability first then scale.
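As a minimal sketch of the two code-based grading patterns above (the inputs are illustrative):
```python
def grade_exact_match(output: str, golden_answer: str) -> bool:
    # Exact match: the output must equal the golden answer verbatim
    return output.strip() == golden_answer.strip()

def grade_string_match(output: str, key_phrase: str) -> bool:
    # String match: the output only needs to contain the key phrase
    return key_phrase.lower() in output.lower()

print(grade_exact_match("Paris", "Paris"))                              # True
print(grade_string_match("The capital of France is Paris.", "Paris"))  # True
```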
### Tips for LLM-based grading
* **Have detailed, clear rubrics**: "The answer should always mention 'Acme Inc.' in the first sentence. If it does not, the answer is automatically graded as 'incorrect.'"
A given use case, or even a specific success criteria for that use case, might require several rubrics for holistic evaluation.
* **Empirical or specific**: For example, instruct the LLM to output only 'correct' or 'incorrect', or to judge from a scale of 1-5. Purely qualitative evaluations are hard to assess quickly and at scale.
* **Encourage reasoning**: Ask the LLM to think first before deciding an evaluation score, and then discard the reasoning. This increases evaluation performance, particularly for tasks requiring complex judgement.
```python
import anthropic
import re

client = anthropic.Anthropic()

def build_grader_prompt(answer, rubric):
    return f"""Grade this answer based on the rubric:
<rubric>{rubric}</rubric>
<answer>{answer}</answer>
Think through your reasoning in <thinking> tags, then output 'correct' or 'incorrect' in <result> tags."""

def grade_completion(output, golden_answer):
    grader_response = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=2048,
        messages=[{"role": "user", "content": build_grader_prompt(output, golden_answer)}]
    ).content[0].text
    # Read the verdict from the <result> tags so that the word "incorrect" in the
    # grader's reasoning is not mistaken for a passing grade
    match = re.search(r"<result>(.*?)</result>", grader_response, re.DOTALL)
    return "correct" if match and match.group(1).strip().lower() == "correct" else "incorrect"
# Example usage
eval_data = [
{"question": "Is 42 the answer to life, the universe, and everything?", "golden_answer": "Yes, according to 'The Hitchhiker's Guide to the Galaxy'."},
{"question": "What is the capital of France?", "golden_answer": "The capital of France is Paris."}
]
def get_completion(prompt: str):
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[
            {"role": "user", "content": prompt}
        ]
    )
    return message.content[0].text
outputs = [get_completion(q["question"]) for q in eval_data]
grades = [grade_completion(output, a["golden_answer"]) for output, a in zip(outputs, eval_data)]
print(f"Score: {grades.count('correct') / len(grades) * 100}%")
```
## Next steps
Learn how to craft prompts that maximize your eval scores.
More code examples of human-, code-, and LLM-graded evals.
# Embeddings
Text embeddings are numerical representations of text that enable measuring semantic similarity. This guide introduces embeddings, their applications, and how to use embedding models for tasks like search, recommendations, and anomaly detection.
## Before implementing embeddings
When selecting an embeddings provider, there are several factors you can consider depending on your needs and preferences:
* **Dataset size & domain specificity:** size of the model training dataset and its relevance to the domain you want to embed. Larger or more domain-specific data generally produces better in-domain embeddings
* **Inference performance:** embedding lookup speed and end-to-end latency. This is a particularly important consideration for large scale production deployments
* **Customization:** options for continued training on private data, or specialization of models for very specific domains. This can improve performance on unique vocabularies
***
## How to get embeddings with Anthropic
Anthropic does not offer its own embedding model. One embeddings provider that has a wide variety of options and capabilities encompassing all of the above considerations is [Voyage AI](https://www.voyageai.com/?ref=anthropic).
Voyage AI makes [state-of-the-art](https://blog.voyageai.com/2023/10/29/voyage-embeddings/?ref=anthropic) embedding models and offers customized models for specific industry domains such as finance and healthcare, or bespoke fine-tuned models for individual customers.
The rest of this guide is for Voyage AI, but we encourage you to assess a variety of embeddings vendors to find the best fit for your specific use case.
***
## Getting started with Voyage AI
Check out our [embeddings notebook](https://github.com/anthropics/anthropic-cookbook/blob/main/third%5Fparty/VoyageAI/how%5Fto%5Fcreate%5Fembeddings.md) to see an example Voyage AI implementation.
To access Voyage embeddings:
1. Sign up on [Voyage AI’s website](https://dash.voyageai.com/?ref=anthropic)
2. Obtain an API key
3. Set the API key as an environment variable for convenience:
```bash Shell
export VOYAGE_API_KEY=""
```
You can run the embeddings by either using the official [voyageai Python package](https://github.com/voyage-ai/voyageai-python) or HTTP requests, as described below.
### Voyage Python package
The `voyageai` package can be installed using the following command:
```Python Python
pip install -U voyageai
```
Then, you can create a client object and start using it to embed your texts:
```Python Python
import voyageai
vo = voyageai.Client()
# This will automatically use the environment variable VOYAGE_API_KEY.
# Alternatively, you can use vo = voyageai.Client(api_key="")
texts = ["Sample text 1", "Sample text 2"]
result = vo.embed(texts, model="voyage-2", input_type="document")
print(result.embeddings[0])
print(result.embeddings[1])
```
`result.embeddings` will be a list of two embedding vectors, each containing 1024 floating-point numbers.
After running the above code, the two embeddings will be printed on the screen:
```Python Python
[0.02012746, 0.01957859, ...] # embedding for "Sample text 1"
[0.01429677, 0.03077182, ...] # embedding for "Sample text 2"
```
When creating the embeddings, you may specify a few other arguments to the `embed()` function. Here is the specification:
> `voyageai.Client.embed(texts : List[str], model : str, input_type : Optional[str] = None, truncation : Optional[bool] = None)`
* **texts** (List\[str]) - A list of texts as a list of strings, such as `["I like cats", "I also like dogs"]`. Currently, the maximum length of the list is 128, and total number of tokens in the list is at most 320K for `voyage-2` and 120K for `voyage-large-2`/`voyage-code-2`.
* **model** (str) - Name of the model. Recommended options: `voyage-2`, `voyage-large-2`, `voyage-code-2`.
* **input\_type** (str, optional, defaults to `None`) - Type of the input text. Other options: `query`, `document`
* When the input\_type is set to `None`, the input text will be directly encoded by Voyage's embedding model. Alternatively, when the inputs are documents or queries, the users can specify `input_type` to be `query` or `document`, respectively. In such cases, Voyage will prepend a special prompt to input text and send the extended inputs to the embedding model
* For retrieval/search use cases, we recommend specifying this argument when encoding queries or documents to enhance retrieval quality. Embeddings generated with and without the `input_type` argument are compatible
* **truncation** (bool, optional, defaults to `None`) - Whether to truncate the input texts to fit within the context length.
* If `True`, over-length input texts will be truncated to fit within the context length, before being vectorized by the embedding model
* If `False`, an error will be raised if any given text exceeds the context length
* If not specified (defaults to `None`), Voyage will truncate the input text before sending it to the embedding model if it slightly exceeds the context window length. If it significantly exceeds the context window length, an error will be raised
### Voyage HTTP API
You can also get embeddings by requesting the Voyage HTTP API. For example, you can send an HTTP request through the `curl` command in a terminal:
```bash Shell
curl https://api.voyageai.com/v1/embeddings \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $VOYAGE_API_KEY" \
-d '{
"input": ["Sample text 1", "Sample text 2"],
"model": "voyage-2"
}'
```
The response you would get is a JSON object containing the embeddings and the token usage:
```json Shell
{
"object": "list",
"data": [
{
"embedding": [0.02012746, 0.01957859, ...],
"index": 0
},
{
"embedding": [0.01429677, 0.03077182, ...],
"index": 1
}
],
"model": "voyage-2",
"usage": {
"total_tokens": 10
}
}
```
Voyage AI's embedding endpoint is `https://api.voyageai.com/v1/embeddings` (POST). The request header must contain the API key. The request body is a JSON object containing the following arguments:
* **input** (str, List\[str]) - A single text string, or a list of texts as a list of strings. Currently, the maximum length of the list is 128, and total number of tokens in the list is at most 320K for `voyage-2` and 120K for `voyage-large-2`/`voyage-code-2`.
* **model** (str) - Name of the model. Recommended options: `voyage-2`, `voyage-large-2`, `voyage-code-2`.
* **input\_type** (str, optional, defaults to `None`) - Type of the input text. Other options: `query`, `document`
* **truncation** (bool, optional, defaults to `None`) - Whether to truncate the input texts to fit within the context length
* If `True`, over-length input texts will be truncated to fit within the context length before being vectorized by the embedding model
* If `False`, an error will be raised if any given text exceeds the context length
* If not specified (defaults to `None`), Voyage will truncate the input text before sending it to the embedding model if it slightly exceeds the context window length. If it significantly exceeds the context window length, an error will be raised
* **encoding\_format** (str, optional, defaults to `None`) - Format in which the embeddings are encoded. Voyage currently supports two options:
* If not specified (defaults to `None`): the embeddings are represented as lists of floating-point numbers
* `"base64"`: the embeddings are compressed to [Base64](https://docs.python.org/3/library/base64.html) encodings
***
## Voyage embedding example
Now that we know how to get embeddings with Voyage, let's see it in action with a brief example.
Suppose we have a small corpus of six documents to retrieve from:
```Python Python
documents = [
"The Mediterranean diet emphasizes fish, olive oil, and vegetables, believed to reduce chronic diseases.",
"Photosynthesis in plants converts light energy into glucose and produces essential oxygen.",
"20th-century innovations, from radios to smartphones, centered on electronic advancements.",
"Rivers provide water, irrigation, and habitat for aquatic species, vital for ecosystems.",
"Apple’s conference call to discuss fourth fiscal quarter results and business updates is scheduled for Thursday, November 2, 2023 at 2:00 p.m. PT / 5:00 p.m. ET.",
"Shakespeare's works, like 'Hamlet' and 'A Midsummer Night's Dream,' endure in literature."
]
```
We will first use Voyage to convert each of them into an embedding vector:
```Python Python
import voyageai
vo = voyageai.Client()
# Embed the documents
doc_embds = vo.embed(
documents, model="voyage-2", input_type="document"
).embeddings
```
The embeddings will allow us to do semantic search / retrieval in the vector space. We can then convert an example query,
```Python Python
query = "When is Apple's conference call scheduled?"
```
into an embedding, and then conduct a nearest neighbor search to find the most relevant document based on the distance in the embedding space.
```Python Python
import numpy as np
# Embed the query
query_embd = vo.embed(
[query], model="voyage-2", input_type="query"
).embeddings[0]
# Compute the similarity
# Voyage embeddings are normalized to length 1, therefore dot-product
# and cosine similarity are the same.
similarities = np.dot(doc_embds, query_embd)
retrieved_id = np.argmax(similarities)
print(documents[retrieved_id])
```
Note that we use `input_type="document"` and `input_type="query"` for embedding the document and query, respectively. More specification can be found [here](#voyage-python-package).
The output would be the 5th document, which is indeed the most relevant to the query:
```
Apple’s conference call to discuss fourth fiscal quarter results and business updates is scheduled for Thursday, November 2, 2023 at 2:00 p.m. PT / 5:00 p.m. ET.
```
***
## Available Voyage models
Voyage recommends using the following embedding models:
| Model | Context Length | Embedding Dimension | Description |
| ------------------------- | -------------- | ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `voyage-large-2` | 16000 | 1536 | Voyage AI's most powerful generalist embedding model. |
| `voyage-code-2` | 16000 | 1536 | Optimized for code retrieval (17% better than alternatives), and also SoTA on general-purpose corpora. See this Voyage [blog post](https://blog.voyageai.com/2024/01/23/voyage-code-2-elevate-your-code-retrieval/?ref=anthropic) for details. |
| `voyage-2` | 4000 | 1024 | Base generalist embedding model optimized for both latency and quality. |
| `voyage-lite-02-instruct` | 4000 | 1024 | [Instruction-tuned](https://github.com/voyage-ai/voyage-lite-02-instruct/blob/main/instruct.json) for classification, clustering, and sentence textual similarity tasks, which are the only recommended use cases for this model. |
`voyage-2` and `voyage-large-2` are generalist embedding models, which achieve state-of-the-art performance across domains and retain high efficiency. `voyage-code-2` is optimized for the code field, offering 4x the context length for more flexible usage, albeit at a relatively higher latency.
Voyage is actively developing more advanced and specialized models, and also offers fine-tuning services to customize bespoke models for individual customers. Email your Anthropic account manager or reach out to Anthropic support for further information on bespoke models.
* `voyage-finance-2`: coming soon
* `voyage-law-2`: coming soon
* `voyage-multilingual-2`: coming soon
* `voyage-healthcare-2`: coming soon
***
## Voyage on the AWS Marketplace
Voyage embeddings are also available on [AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=seller-snt4gb6fd7ljg). Here are the instructions for accessing Voyage on AWS:
1. Subscribe to the model package
   1. Navigate to the [model package listing page](https://aws.amazon.com/marketplace/seller-profile?id=seller-snt4gb6fd7ljg) and select the model to deploy
   2. Click on the `Continue to subscribe` button
   3. Carefully review the details on the `Subscribe to this software` page. If you agree with the standard End-User License Agreement (EULA), pricing, and support terms, click on "Accept Offer"
   4. After selecting `Continue to configuration` and choosing a region, you will be presented with a Product Arn. This is the model package ARN required for creating a deployable model using Boto3
      1. Copy the ARN that corresponds to your selected region and use it in the subsequent cell
2. Deploy the model package
From here, create a JupyterLab space in [Sagemaker Studio](https://aws.amazon.com/sagemaker/studio/), upload Voyage's [notebook](https://github.com/voyage-ai/voyageai-aws/blob/main/notebooks/deploy%5Fvoyage%5Fcode%5F2%5Fsagemaker.ipynb), and follow the instructions within.
***
## FAQ
Cosine similarity is a popular choice, but most distance functions will do fine. Voyage embeddings are normalized to length 1, therefore cosine similarity is essentially the same as the dot-product between two vectors. Here is a code snippet you can use for calculating cosine similarity between two embedding vectors.
```python
import numpy as np
similarity = np.dot(embd1, embd2)
# Voyage embeddings are normalized to length 1, therefore cosine similarity
# is the same as dot-product.
```
If you want to find the K nearest embedding vectors over a large corpus, we recommend using the capabilities built into most vector databases.
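For a small corpus you can also do this directly in NumPy; here is a minimal sketch that reuses `doc_embds`, `query_embd`, and `documents` from the retrieval example above:
```python
import numpy as np

k = 3
similarities = np.dot(doc_embds, query_embd)
# argsort is ascending, so take the last k indices and reverse for best-first order
top_k_ids = np.argsort(similarities)[-k:][::-1]
for i in top_k_ids:
    print(f"{similarities[i]:.3f}  {documents[i]}")
```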
You can count the number of tokens in your texts before sending them to the embedding endpoint by using the `count_tokens` method:
```python
import voyageai
vo = voyageai.Client()
total_tokens = vo.count_tokens(["Sample text"])
```
***
## Pricing
Visit Voyage's [pricing page](https://docs.voyageai.com/pricing/?ref=anthropic) for the most up-to-date pricing details.
# Message Batches (beta)
The Message Batches API is a powerful, cost-effective way to asynchronously process large volumes of [Messages](/en/api/messages) requests. This approach is well-suited to tasks that do not require immediate responses, reducing costs by 50% while increasing throughput.
**Message Batches API is in beta**
We're excited to announce that the Batches API is now in public beta! To access this feature, you'll need to include the `anthropic-beta: message-batches-2024-09-24` header in your API requests, or use `client.beta.messages.batches` in your SDK calls.
We'll be iterating on this open beta over the coming weeks, so we appreciate your feedback. Please share your ideas and suggestions using this [form](https://forms.gle/qVdF5dVuzD9CGPiz8).
You can [explore the API reference directly](/en/api/creating-message-batches), in addition to this guide.
***
## How the Message Batches API works
When you send a request to the Message Batches API:
1. The system creates a new Message Batch with the provided Messages requests.
2. The batch is then processed asynchronously, with each request handled independently.
3. You can poll for the status of the batch and retrieve results when processing has ended for all requests.
This is especially useful for bulk operations that don't require immediate results, such as:
* Large-scale evaluations: Process thousands of test cases efficiently.
* Content moderation: Analyze large volumes of user-generated content asynchronously.
* Data analysis: Generate insights or summaries for large datasets.
* Bulk content generation: Create large amounts of text for various purposes (e.g., product descriptions, article summaries).
### Batch limitations
* A Message Batch is limited to either 10,000 Message requests or 32 MB in size, whichever is reached first.
* The batch takes up to 24 hours to generate responses, though processing may end sooner than this. The results for your batch will not be available until the processing of the entire batch ends. Batches will expire if processing does not complete within 24 hours.
* Batch results are available for 29 days after creation. After that, you may still view the Batch, but its results will no longer be available for download.
* Batches are scoped to a [Workspace](https://console.anthropic.com/settings/workspaces). You may view all batches—and their results—that were created within the Workspace that your API key belongs to.
* Rate limits apply to the Batches API HTTP requests rather than the number of requests in a batch. Additionally, we may slow down processing based on current demand and your request volume. In that case, you may see more requests expiring after 24 hours.
* Due to high throughput and concurrent processing, batches may go slightly over your Workspace's configured [spend limit](https://console.anthropic.com/settings/limits).
### Supported models
The Message Batches API currently supports:
* Claude 3.5 Sonnet (`claude-3-5-sonnet-20240620` and `claude-3-5-sonnet-20241022`)
* Claude 3.5 Haiku (`claude-3-5-haiku-20241022`)
* Claude 3 Haiku (`claude-3-haiku-20240307`)
* Claude 3 Opus (`claude-3-opus-20240229`)
### What can be batched
Any request that you can make to the Messages API can be included in a batch. This includes:
* Vision
* Tool use
* System messages
* Multi-turn conversations
* Any beta features
Since each request in the batch is processed independently, you can mix different types of requests within a single batch.
***
## Pricing
The Batches API offers significant cost savings. All usage is charged at 50% of the standard API prices.
| Model | Batch Input | Batch Output |
| ----------------- | -------------- | -------------- |
| Claude 3.5 Sonnet | \$1.50 / MTok | \$7.50 / MTok |
| Claude 3 Opus | \$7.50 / MTok | \$37.50 / MTok |
| Claude 3 Haiku | \$0.125 / MTok | \$0.625 / MTok |
***
## How to use the Message Batches API
### Prepare and create your batch
A Message Batch is composed of a list of requests to create a Message. Each individual request consists of:
* A unique `custom_id` for identifying the Messages request
* A `params` object with the standard [Messages API](/en/api/messages) parameters
You can [create a batch](/en/api/creating-message-batches) by passing this list into the `requests` parameter:
```python Python
import anthropic
from anthropic.types.beta.message_create_params import MessageCreateParamsNonStreaming
from anthropic.types.beta.messages.batch_create_params import Request
client = anthropic.Anthropic()
message_batch = client.beta.messages.batches.create(
requests=[
Request(
custom_id="my-first-request",
params=MessageCreateParamsNonStreaming(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
messages=[{
"role": "user",
"content": "Hello, world",
}]
)
),
Request(
custom_id="my-second-request",
params=MessageCreateParamsNonStreaming(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
messages=[{
"role": "user",
"content": "Hi again, friend",
}]
)
)
]
)
print(message_batch)
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
const messageBatch = await anthropic.beta.messages.batches.create({
requests: [{
custom_id: "my-first-request",
params: {
model: "claude-3-5-sonnet-20241022",
max_tokens: 1024,
messages: [
{"role": "user", "content": "Hello, world"}
]
}
}, {
custom_id: "my-second-request",
params: {
model: "claude-3-5-sonnet-20241022",
max_tokens: 1024,
messages: [
{"role": "user", "content": "Hi again, friend"}
]
}
}]
});
console.log(messageBatch)
```
```bash Shell
curl https://api.anthropic.com/v1/messages/batches \
--header "x-api-key: $API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "anthropic-beta: message-batches-2024-09-24" \
--header "content-type: application/json" \
--data \
'{
"requests": [
{
"custom_id": "my-first-request",
"params": {
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "Hello, world"}
]
}
},
{
"custom_id": "my-second-request",
"params": {
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "Hi again, friend"}
]
}
}
]
}'
```
In this example, two separate requests are batched together for asynchronous processing. Each request has a unique `custom_id` and contains the standard parameters you'd use for a Messages API call.
**Test your batch requests with the Messages API**
Validation of the `params` object for each message request is performed asynchronously, and validation errors are returned when processing of the entire batch has ended. You can ensure that you are building your input correctly by verifying your request shape with the [Messages API](/en/api/messages) first.
Our asynchronous validation behavior is subject to change between public beta and GA. We are open to your [feedback](https://forms.gle/qVdF5dVuzD9CGPiz8).
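For example, one way to dry run a request shape before batching it (a sketch; the `params` dict below mirrors the batch example above) is to send the same `params` directly to the Messages API:
```python
import anthropic

client = anthropic.Anthropic()

params = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello, world"}],
}

# If this synchronous call succeeds, the same params should validate inside a batch
print(client.messages.create(**params))
```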
When a batch is first created, the response will have a processing status of `in_progress`.
```JSON JSON
{
"id": "msgbatch_01HkcTjaV5uDC8jWR4ZsDV8d",
"type": "message_batch",
"processing_status": "in_progress",
"request_counts": {
"processing": 2,
"succeeded": 0,
"errored": 0,
"canceled": 0,
"expired": 0
},
"ended_at": null,
"created_at": "2024-09-24T18:37:24.100435Z",
"expires_at": "2024-09-25T18:37:24.100435Z",
"cancel_initiated_at": null,
"results_url": null
}
```
### Tracking your batch
The Message Batch's `processing_status` field indicates the stage of processing the batch is in. It starts as `in_progress`, then updates to `ended` once all the requests in the batch have finished processing, and results are ready. You can monitor the state of your batch by visiting the [Console](https://console.anthropic.com/settings/workspaces/default/batches), or using the [retrieval endpoint](/en/api/retrieving-message-batches):
```python Python
import anthropic
client = anthropic.Anthropic()
message_batch = client.beta.messages.batches.retrieve(
"msgbatch_01HkcTjaV5uDC8jWR4ZsDV8d",
)
print(f"Batch {message_batch.id} processing status is {message_batch.processing_status}")
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
const messageBatch = await anthropic.beta.messages.batches.retrieve(
"msgbatch_01HkcTjaV5uDC8jWR4ZsDV8d",
);
console.log(`Batch ${messageBatch.id} processing status is ${messageBatch.processing_status}`);
```
```bash Shell
curl https://api.anthropic.com/v1/messages/batches/msgbatch_01HkcTjaV5uDC8jWR4ZsDV8d \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "anthropic-beta: message-batches-2024-09-24" \
| sed -E 's/.*"id":"([^"]+)".*"processing_status":"([^"]+)".*/Batch \1 processing status is \2/'
```
You can [poll](/en/api/messages-batch-examples#polling-for-message-batch-completion) this endpoint to know when processing has ended.
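A minimal polling loop might look like the following sketch (the batch ID and sleep interval are illustrative):
```python
import time
import anthropic

client = anthropic.Anthropic()

while True:
    message_batch = client.beta.messages.batches.retrieve(
        "msgbatch_01HkcTjaV5uDC8jWR4ZsDV8d",
    )
    if message_batch.processing_status == "ended":
        break
    print(f"Batch {message_batch.id} is still processing...")
    time.sleep(60)  # arbitrary interval; tune for your workload
print(message_batch.request_counts)
```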
### Retrieving batch results
Once batch processing has ended, each Messages request in the batch will have a result. There are 4 result types:
| Result Type | Description |
| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `succeeded` | Request was successful. Includes the message result. |
| `errored` | Request encountered an error and a message was not created. Possible errors include invalid requests and internal server errors. You will not be billed for these requests. |
| `canceled` | User canceled the batch before this request could be sent to the model. You will not be billed for these requests. |
| `expired` | Batch reached its 24 hour expiration before this request could be sent to the model. You will not be billed for these requests. |
You will see an overview of your results with the batch's `request_counts`, which shows how many requests reached each of these four states.
Results of the batch are available for download both in the Console and at the `results_url` on the Message Batch. Because of the potentially large size of the results, it's recommended to [stream results](/en/api/retrieving-message-batch-results) back rather than download them all at once.
```python Python
import anthropic
client = anthropic.Anthropic()
# Stream results file in memory-efficient chunks, processing one at a time
for result in client.beta.messages.batches.results(
    "msgbatch_01HkcTjaV5uDC8jWR4ZsDV8d",
):
    match result.result.type:
        case "succeeded":
            print(f"Success! {result.custom_id}")
        case "errored":
            if result.result.error.type == "invalid_request":
                # Request body must be fixed before re-sending request
                print(f"Validation error {result.custom_id}")
            else:
                # Request can be retried directly
                print(f"Server error {result.custom_id}")
        case "expired":
            print(f"Request expired {result.custom_id}")
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
// Stream results file in memory-efficient chunks, processing one at a time
for await (const result of await anthropic.beta.messages.batches.results(
"msgbatch_01HkcTjaV5uDC8jWR4ZsDV8d"
)) {
switch (result.result.type) {
case 'succeeded':
console.log(`Success! ${result.custom_id}`);
break;
case 'errored':
if (result.result.error.type == "invalid_request") {
// Request body must be fixed before re-sending request
console.log(`Validation error: ${result.custom_id}`);
} else {
// Request can be retried directly
console.log(`Server error: ${result.custom_id}`);
}
break;
case 'expired':
console.log(`Request expired: ${result.custom_id}`);
break;
}
}
```
```bash Shell
#!/bin/sh
curl "https://api.anthropic.com/v1/messages/batches/msgbatch_01HkcTjaV5uDC8jWR4ZsDV8d" \
--header "anthropic-version: 2023-06-01" \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-beta: message-batches-2024-09-24" \
| grep -o '"results_url":[[:space:]]*"[^"]*"' \
| cut -d'"' -f4 \
| while read -r url; do
curl -s "$url" \
--header "anthropic-version: 2023-06-01" \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-beta: message-batches-2024-09-24" \
| sed 's/}{/}\n{/g' \
| while IFS= read -r line
do
result_type=$(echo "$line" | sed -n 's/.*"result":[[:space:]]*{[[:space:]]*"type":[[:space:]]*"\([^"]*\)".*/\1/p')
custom_id=$(echo "$line" | sed -n 's/.*"custom_id":[[:space:]]*"\([^"]*\)".*/\1/p')
error_type=$(echo "$line" | sed -n 's/.*"error":[[:space:]]*{[[:space:]]*"type":[[:space:]]*"\([^"]*\)".*/\1/p')
case "$result_type" in
"succeeded")
echo "Success! $custom_id"
;;
"errored")
if [ "$error_type" = "invalid_request" ]; then
# Request body must be fixed before re-sending request
echo "Validation error: $custom_id"
else
# Request can be retried directly
echo "Server error: $custom_id"
fi
;;
"expired")
echo "Expired: $line"
;;
esac
done
done
```
The results will be in `.jsonl` format, where each line is a valid JSON object representing the result of a single request in the Message Batch. For each streamed result, you can do something different depending on its `custom_id` and result type. Here is an example set of results:
```JSON .jsonl file
{"custom_id":"my-second-request","result":{"type":"succeeded","message":{"id":"msg_014VwiXbi91y3JMjcpyGBHX5","type":"message","role":"assistant","model":"claude-3-5-sonnet-20241022","content":[{"type":"text","text":"Hello again! It's nice to see you. How can I assist you today? Is there anything specific you'd like to chat about or any questions you have?"}],"stop_reason":"end_turn","stop_sequence":null,"usage":{"input_tokens":11,"output_tokens":36}}}}
{"custom_id":"my-first-request","result":{"type":"succeeded","message":{"id":"msg_01FqfsLoHwgeFbguDgpz48m7","type":"message","role":"assistant","model":"claude-3-5-sonnet-20241022","content":[{"type":"text","text":"Hello! How can I assist you today? Feel free to ask me any questions or let me know if there's anything you'd like to chat about."}],"stop_reason":"end_turn","stop_sequence":null,"usage":{"input_tokens":10,"output_tokens":34}}}}
```
If your result has an error, its `result.error` will be set to our standard [error shape](https://docs.anthropic.com/en/api/errors#error-shapes).
**Batch results may not match input order**
Batch results can be returned in any order, and may not match the ordering of requests when the batch was created. In the above example, the result for the second batch request is returned before the first. To correctly match results with their corresponding requests, always use the `custom_id` field.
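One way to do this is to index the streamed results by `custom_id`. Here is a sketch that assumes you keep a record of the requests you submitted; the `original_requests` mapping below is hypothetical:
```python
import anthropic

client = anthropic.Anthropic()

# Hypothetical record of the submitted requests, keyed by custom_id
original_requests = {
    "my-first-request": {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": "Hello, world"}],
    },
    "my-second-request": {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": "Hi again, friend"}],
    },
}

# Index the streamed results by custom_id so ordering no longer matters
results_by_id = {
    result.custom_id: result.result
    for result in client.beta.messages.batches.results("msgbatch_01HkcTjaV5uDC8jWR4ZsDV8d")
}

for custom_id in original_requests:
    result = results_by_id.get(custom_id)
    if result is not None and result.type == "succeeded":
        print(custom_id, result.message.content[0].text)
```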
### Best practices for effective batching
To get the most out of the Batches API:
* Monitor batch processing status regularly and implement appropriate retry logic for failed requests (see the retry sketch after this list).
* Use meaningful `custom_id` values to easily match results with requests, since order is not guaranteed.
* Consider breaking very large datasets into multiple batches for better manageability.
* Dry run a single request shape with the Messages API to avoid validation errors.
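As a rough sketch of retry logic, reusing the hypothetical `original_requests` mapping from the previous example, you could collect requests that hit server errors and resubmit them as a new batch:
```python
# Collect requests that errored for reasons other than an invalid request body
retryable_ids = [
    result.custom_id
    for result in client.beta.messages.batches.results("msgbatch_01HkcTjaV5uDC8jWR4ZsDV8d")
    if result.result.type == "errored" and result.result.error.type != "invalid_request"
]

if retryable_ids:
    retry_batch = client.beta.messages.batches.create(
        requests=[
            {"custom_id": custom_id, "params": original_requests[custom_id]}
            for custom_id in retryable_ids
        ]
    )
    print(f"Resubmitted {len(retryable_ids)} requests in batch {retry_batch.id}")
```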
### Troubleshooting common issues
If experiencing unexpected behavior:
* Verify that the total batch request size doesn't exceed 32 MB. If the request size is too large, you may get a 413 `request_too_large` error.
* Check that you're using [supported models](#supported-models) for all requests in the batch.
* Ensure each request in the batch has a unique `custom_id`.
* Ensure that it has been less than 29 days since batch `created_at` (not processing `ended_at`) time. If over 29 days have passed, results will no longer be viewable.
* Confirm that the batch has not been canceled.
Note that the failure of one request in a batch does not affect the processing of other requests.
***
## Batch storage and privacy
* **Workspace isolation**: Batches are isolated within the Workspace they are created in. They can only be accessed by API keys associated with that Workspace, or users with permission to view Workspace batches in the Console.
* **Result availability**: Batch results are available for 29 days after the batch is created, allowing ample time for retrieval and processing.
***
## FAQ
Batches may take up to 24 hours for processing, but many will finish sooner. Actual processing time depends on the size of the batch, current demand, and your request volume. It is possible for a batch to expire and not complete within 24 hours.
See [above](#supported-models) for the list of supported models.
If using the SDK, use `client.beta.messages.batches`. If using a raw request, include the `anthropic-beta: message-batches-2024-09-24` header in your API requests.
Yes, the Message Batches API supports all features available in the Messages API, including beta features. However, streaming is not supported for batch requests.
The Message Batches API offers a 50% discount on all usage compared to standard API prices. This applies to input tokens, output tokens, and any special tokens. For more on pricing, visit our [pricing page](https://www.anthropic.com/pricing#anthropic-api).
No, once a batch has been submitted, it cannot be modified. If you need to make changes, you should cancel the current batch and submit a new one. Note that cancellation may not take immediate effect.
The Message Batches API has HTTP requests-based rate limits. Usage of the Batches API does not affect rate limits in the Messages API.
When you retrieve the results, each request will have a `result` field indicating whether it `succeeded`, `errored`, was `canceled`, or `expired`. For `errored` results, additional error information will be provided. View the error response object in the [API reference](/en/api/creating-message-batches).
The Message Batches API is designed with strong privacy and data separation measures:
1. Batches and their results are isolated within the Workspace in which they were created. This means they can only be accessed by API keys from that same Workspace.
2. Each request within a batch is processed independently, with no data leakage between requests.
3. Results are only available for a limited time (29 days), and follow our [data retention policy](https://support.anthropic.com/en/articles/7996866-how-long-do-you-store-personal-data).
Yes! The `anthropic-beta` header takes a comma-separated list, for example `anthropic-beta: message-batches-2024-09-24,max-tokens-3-5-sonnet-2024-07-15`. If you are using an SDK, pass in additional betas with the `betas` field in the top level of your request:
```python Python
import anthropic
client = anthropic.Anthropic()
message_batch = client.beta.messages.batches.create(
betas: ["max-tokens-3-5-sonnet-2024-07-15"],
...
)
```
# PDF support (beta)
The Claude 3.5 Sonnet models now support PDF input and understand both text and visual content within documents.
**PDF support is in public beta**
To access this feature, include the `anthropic-beta: pdfs-2024-09-25` header in your API requests.
We'll be iterating on this open beta over the coming weeks, so we appreciate your feedback. Please share your ideas and suggestions using this [form](https://forms.gle/bTkLgQotTbUs4AmK7).
***
## PDF Capabilities
Claude works with any standard PDF. You can ask Claude about any text, pictures, charts, and tables in the PDFs you provide. Some sample use cases:
* Analyzing financial reports and understanding charts/tables
* Extracting key information from legal documents
* Translation assistance for documents
* Converting document information into structured formats
## How PDF support works
When you send a request that includes a PDF file:
* The system converts each page of the document into an image.
* The text from each page is extracted and provided alongside the page's image.
* Documents are provided as a combination of text and images for analysis.
* This allows users to ask for insights on **visual** elements of a PDF, such as charts, diagrams, and other non-textual content.
PDF support works well alongside:
* **Prompt caching**: To improve performance for repeated analysis.
* **Batch processing**: For high-volume document processing.
* **Tool use**: To extract specific information from documents for use as tool inputs.
### PDF support limitations
Before integrating PDF support into your application, ensure your files meet these requirements:
| Requirement | Limit |
| ------------------------- | ---------------------------------------------------------- |
| Maximum request size | 32MB |
| Maximum pages per request | 100 |
| Supported models | `claude-3-5-sonnet-20241022`, `claude-3-5-sonnet-20240620` |
Please note that both limits are on the entire request payload, including any other content sent alongside PDFs.
The provided PDFs should not have any passwords or encryption.
Since PDF support relies on Claude's vision capabilities, it is subject to the same [limitations](/en/docs/build-with-claude/vision#limitations).
### Supported platforms and models
PDF support is currently available on both Claude 3.5 Sonnet models (`claude-3-5-sonnet-20241022`, `claude-3-5-sonnet-20240620`) via direct API access. This functionality will be supported on Amazon Bedrock and Google Vertex AI soon.
### Calculate expected token usage
The token count of a PDF file depends on the total text extracted from the document as well as the number of pages. Since each page is converted into an image, the same [image-based cost calculations](/en/docs/build-with-claude/vision#evaluate-image-size) are applied.
Each page typically uses 1,500 to 3,000 tokens, depending on content density. Standard input token pricing applies, with no additional fees for PDF processing.
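As a back-of-the-envelope sketch (the page count is hypothetical, and the $3/MTok figure is Claude 3.5 Sonnet's base input price; check the pricing page for current rates):
```python
pages = 20                          # hypothetical document length
tokens_low, tokens_high = pages * 1_500, pages * 3_000
price_per_mtok = 3.00               # Claude 3.5 Sonnet base input price, USD per million tokens

print(f"Estimated input tokens: {tokens_low:,} - {tokens_high:,}")
print(f"Estimated input cost: ${tokens_low / 1e6 * price_per_mtok:.2f} - ${tokens_high / 1e6 * price_per_mtok:.2f}")
```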
You can also use [token counting](/en/docs/build-with-claude/token-counting) to determine the number of tokens in a message containing PDFs.
***
## How to use PDFs in the Messages API
Here's a simple example demonstrating how to use PDFs in the Messages API:
```bash Shell
# First fetch the file
curl -s "https://assets.anthropic.com/m/1cd9d098ac3e6467/original/Claude-3-Model-Card-October-Addendum.pdf" | base64 | tr -d '\n' > pdf_base64.txt
# Create a JSON request file using the pdf_base64.txt content
jq -n --rawfile PDF_BASE64 pdf_base64.txt '{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [{
"role": "user",
"content": [{
"type": "document",
"source": {
"type": "base64",
"media_type": "application/pdf",
"data": $PDF_BASE64
}
},
{
"type": "text",
"text": "Which model has the highest human preference win rates across each use-case?"
}]
}]
}' > request.json
# Finally send the API request using the JSON file
curl https://api.anthropic.com/v1/messages \
-H "content-type: application/json" \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-H "anthropic-beta: pdfs-2024-09-25" \
-d @request.json
```
```python Python
import anthropic
import base64
import httpx
# First fetch the file
pdf_url = "https://assets.anthropic.com/m/1cd9d098ac3e6467/original/Claude-3-Model-Card-October-Addendum.pdf"
pdf_data = base64.standard_b64encode(httpx.get(pdf_url).content).decode("utf-8")
# Finally send the API request
client = anthropic.Anthropic()
message = client.beta.messages.create(
model="claude-3-5-sonnet-20241022",
betas=["pdfs-2024-09-25"],
max_tokens=1024,
messages=[
{
"role": "user",
"content": [
{
"type": "document",
"source": {
"type": "base64",
"media_type": "application/pdf",
"data": pdf_data
}
},
{
"type": "text",
"text": "Which model has the highest human preference win rates across each use-case?"
}
]
}
],
)
print(message.content)
```
```TypeScript TypeScript
import Anthropic from '@anthropic-ai/sdk';
import fetch from 'node-fetch';
// First fetch the file
const pdfURL = "https://assets.anthropic.com/m/1cd9d098ac3e6467/original/Claude-3-Model-Card-October-Addendum.pdf";
const pdfResponse = await fetch(pdfURL);
// Then convert the file to base64
const arrayBuffer = await pdfResponse.arrayBuffer();
const pdfBase64 = Buffer.from(arrayBuffer).toString('base64');
// Finally send the API request
const anthropic = new Anthropic();
const response = await anthropic.beta.messages.create({
model: 'claude-3-5-sonnet-20241022',
betas: ["pdfs-2024-09-25"],
max_tokens: 1024,
messages: [
{
content: [
{
type: 'document',
source: {
media_type: 'application/pdf',
type: 'base64',
data: pdfBase64,
},
},
{
type: 'text',
text: 'Which model has the highest human preference win rates across each use-case?',
},
],
role: 'user',
},
],
});
console.log(response);
```
Here are a few other examples to help you get started:
Combine PDF support with [prompt caching](/en/docs/build-with-claude/prompt-caching) to improve performance for repeated analysis:
```bash Shell
# Create a JSON request file using the pdf_base64.txt content
jq -n --rawfile PDF_BASE64 pdf_base64.txt '{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [{
"role": "user",
"content": [{
"type": "document",
"source": {
"type": "base64",
"media_type": "application/pdf",
"data": $PDF_BASE64
},
"cache_control": {
"type": "ephemeral"
}
},
{
"type": "text",
"text": "Which model has the highest human preference win rates across each use-case?"
}]
}]
}' > request.json
# Then make the API call using the JSON file
curl https://api.anthropic.com/v1/messages \
-H "content-type: application/json" \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-H "anthropic-beta: pdfs-2024-09-25,prompt-caching-2024-07-31" \
-d @request.json
```
```python Python
message = client.beta.messages.create(
model="claude-3-5-sonnet-20241022",
betas=["pdfs-2024-09-25", "prompt-caching-2024-07-31"],
max_tokens=1024,
messages=[
{
"role": "user",
"content": [
{
"type": "document",
"source": {
"type": "base64",
"media_type": "application/pdf",
"data": pdf_data
},
"cache_control": {"type": "ephemeral"}
},
{
"type": "text",
"text": "Which model has the highest human preference win rates across each use-case?"
}
]
}
],
)
print(message.content)
```
```TypeScript TypeScript
const response = await anthropic.beta.messages.create({
model: 'claude-3-5-sonnet-20241022',
betas: ['pdfs-2024-09-25', 'prompt-caching-2024-07-31'],
max_tokens: 1024,
messages: [
{
content: [
{
type: 'document',
source: {
media_type: 'application/pdf',
type: 'base64',
data: pdfBase64,
},
cache_control: { type: 'ephemeral' },
},
{
type: 'text',
text: 'Which model has the highest human preference win rates across each use-case?',
},
],
role: 'user',
},
],
});
console.log(response);
```
This example demonstrates basic prompt caching usage, caching the full PDF document as a prefix while keeping the user instruction uncached.
The first request will process & cache the document, making followup queries faster and cheaper.
For high-volume document processing, use the [Message Batches API](/en/docs/build-with-claude/message-batches):
```bash Shell
# Create a JSON request file using the pdf_base64.txt content
jq -n --rawfile PDF_BASE64 pdf_base64.txt '
{
"requests": [
{
"custom_id": "my-first-request",
"params": {
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [
{
"role": "user",
"content": [
{
"type": "document",
"source": {
"type": "base64",
"media_type": "application/pdf",
"data": $PDF_BASE64
}
},
{
"type": "text",
"text": "Which model has the highest human preference win rates across each use-case?"
}
]
}
]
}
},
{
"custom_id": "my-second-request",
"params": {
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [
{
"role": "user",
"content": [
{
"type": "document",
"source": {
"type": "base64",
"media_type": "application/pdf",
"data": $PDF_BASE64
}
},
{
"type": "text",
"text": "Extract 5 key insights from this document."
}
]
}
]
}
}
]
}
' > request.json
# Then make the API call using the JSON file
curl https://api.anthropic.com/v1/messages/batches \
-H "content-type: application/json" \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-H "anthropic-beta: message-batches-2024-09-24,pdfs-2024-09-25" \
-d @request.json
```
```python Python
message_batch = client.beta.messages.batches.create(
betas=["pdfs-2024-09-25", "message-batches-2024-09-24"],
requests=[
{
"custom_id": "my-first-request",
"params": {
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [
{
"role": "user",
"content": [
{
"type": "document",
"source": {
"type": "base64",
"media_type": "application/pdf",
"data": pdf_data
}
},
{
"type": "text",
"text": "Which model has the highest human preference win rates across each use-case?"
}
]
}
]
}
},
{
"custom_id": "my-second-request",
"params": {
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"messages": [
{
"role": "user",
"content": [
{
"type": "document",
"source": {
"type": "base64",
"media_type": "application/pdf",
"data": pdf_data
}
},
{
"type": "text",
"text": "Extract 5 key insights from this document."
}
]
}
]
}
}
]
)
print(message_batch)
```
```TypeScript TypeScript
const response = await anthropic.beta.messages.batches.create({
betas: ['pdfs-2024-09-25', 'message-batches-2024-09-24'],
requests: [
{
custom_id: 'my-first-request',
params: {
max_tokens: 1024,
messages: [
{
content: [
{
type: 'document',
source: {
media_type: 'application/pdf',
type: 'base64',
data: pdfBase64,
},
},
{
type: 'text',
text: 'Which model has the highest human preference win rates across each use-case?',
},
],
role: 'user',
},
],
model: 'claude-3-5-sonnet-20241022',
},
},
{
custom_id: 'my-second-request',
params: {
max_tokens: 1024,
messages: [
{
content: [
{
type: 'document',
source: {
media_type: 'application/pdf',
type: 'base64',
data: pdfBase64,
},
},
{
type: 'text',
text: 'Extract 5 key insights from this document.',
},
],
role: 'user',
},
],
model: 'claude-3-5-sonnet-20241022',
},
}
],
});
console.log(response);
```
***
## Best practices for PDF analysis
* Ensure text is clear and legible.
* Rotate pages to the proper orientation.
* When referring to page numbers, use the logical number (the number reported by your PDF viewer) rather than the physical page number (the number visible on the page).
* Use standard fonts.
* Place PDFs before text in requests.
* Split very large PDFs into smaller chunks when limits are exceeded.
* Use prompt caching for repeated analysis of the same document.
***
## Next steps
Ready to start working with PDFs using Claude? Here are some helpful resources:
Explore practical examples of PDF processing in our cookbook.
View the complete API documentation for PDF support.
# Prompt caching (beta)
Prompt caching is a powerful feature that optimizes your API usage by allowing you to resume from specific prefixes in your prompts. This approach significantly reduces processing time and costs for repetitive tasks or prompts with consistent elements.
Here's an example of how to implement prompt caching with the Messages API using a `cache_control` block:
```bash Shell
curl https://api.anthropic.com/v1/messages \
-H "content-type: application/json" \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-H "anthropic-beta: prompt-caching-2024-07-31" \
-d '{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"system": [
{
"type": "text",
"text": "You are an AI assistant tasked with analyzing literary works. Your goal is to provide insightful commentary on themes, characters, and writing style.\n"
},
{
"type": "text",
"text": "",
"cache_control": {"type": "ephemeral"}
}
],
"messages": [
{
"role": "user",
"content": "Analyze the major themes in Pride and Prejudice."
}
]
}'
```
```python Python
import anthropic
client = anthropic.Anthropic()
response = client.beta.prompt_caching.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
system=[
{
"type": "text",
"text": "You are an AI assistant tasked with analyzing literary works. Your goal is to provide insightful commentary on themes, characters, and writing style.\n",
},
{
"type": "text",
"text": "",
"cache_control": {"type": "ephemeral"}
}
],
messages=[{"role": "user", "content": "Analyze the major themes in 'Pride and Prejudice'."}],
)
print(response)
```
In this example, the entire text of "Pride and Prejudice" is cached using the `cache_control` parameter. This enables reuse of this large text across multiple API calls without reprocessing it each time. Changing only the user message allows you to ask various questions about the book while utilizing the cached content, leading to faster responses and improved efficiency.
**Prompt caching is in beta**
We're excited to announce that prompt caching is now in public beta! To access this feature, you'll need to include the `anthropic-beta: prompt-caching-2024-07-31` header in your API requests.
We'll be iterating on this open beta over the coming weeks, so we appreciate your feedback. Please share your ideas and suggestions using this [form](https://forms.gle/igS4go9TeLAgrYzn7).
***
## How prompt caching works
When you send a request with prompt caching enabled:
1. The system checks if the prompt prefix is already cached from a recent query.
2. If found, it uses the cached version, reducing processing time and costs.
3. Otherwise, it processes the full prompt and caches the prefix for future use.
This is especially useful for:
* Prompts with many examples
* Large amounts of context or background information
* Repetitive tasks with consistent instructions
* Long multi-turn conversations
The cache has a 5-minute lifetime, refreshed each time the cached content is used.
**Prompt caching caches the full prefix**
Prompt caching references the entire prompt - `tools`, `system`, and `messages` (in that order) up to and including the block designated with `cache_control`.
***
## Pricing
Prompt caching introduces a new pricing structure. The table below shows the price per token for each supported model:
| Model | Base Input Tokens | Cache Writes | Cache Hits | Output Tokens |
| ----------------- | ----------------- | -------------- | ------------- | ------------- |
| Claude 3.5 Sonnet | \$3 / MTok | \$3.75 / MTok | \$0.30 / MTok | \$15 / MTok |
| Claude 3.5 Haiku | \$1 / MTok | \$1.25 / MTok | \$0.10 / MTok | \$5 / MTok |
| Claude 3 Haiku | \$0.25 / MTok | \$0.30 / MTok | \$0.03 / MTok | \$1.25 / MTok |
| Claude 3 Opus | \$15 / MTok | \$18.75 / MTok | \$1.50 / MTok | \$75 / MTok |
Note:
* Cache write tokens are 25% more expensive than base input tokens
* Cache read tokens are 90% cheaper than base input tokens
* Regular input and output tokens are priced at standard rates
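To see how quickly caching pays for itself, here is a small sketch of the arithmetic for Claude 3.5 Sonnet, using the prices from the table above and a hypothetical 100K-token prefix:
```python
prefix_mtok = 100_000 / 1_000_000                   # hypothetical cached prefix size, in millions of tokens
base, cache_write, cache_read = 3.00, 3.75, 0.30    # USD per MTok for Claude 3.5 Sonnet

without_caching = 2 * prefix_mtok * base                  # prefix processed twice at the base rate
with_caching = prefix_mtok * (cache_write + cache_read)   # one cache write, then one cache hit

print(f"Two requests without caching: ${without_caching:.2f}")
print(f"Two requests with caching:    ${with_caching:.2f}")
# Every additional cache hit within the TTL costs 90% less than the base input rate.
```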
***
## How to implement prompt caching
### Supported models
Prompt caching is currently supported on:
* Claude 3.5 Sonnet
* Claude 3.5 Haiku
* Claude 3 Haiku
* Claude 3 Opus
### Structuring your prompt
Place static content (tool definitions, system instructions, context, examples) at the beginning of your prompt. Mark the end of the reusable content for caching using the `cache_control` parameter.
Cache prefixes are created in the following order: `tools`, `system`, then `messages`.
Using the `cache_control` parameter, you can define up to 4 cache breakpoints, allowing you to cache different reusable sections separately.
### Cache Limitations
The minimum cacheable prompt length is:
* 1024 tokens for Claude 3.5 Sonnet, Claude 3.5 Haiku, and Claude 3 Opus
* 2048 tokens for Claude 3 Haiku
Shorter prompts cannot be cached, even if marked with `cache_control`. Any requests to cache fewer than this number of tokens will be processed without caching. To see if a prompt was cached, see the response usage [fields](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching#tracking-cache-performance).
The cache has a 5 minute time to live (TTL). Currently, "ephemeral" is the only supported cache type, which corresponds to this 5-minute lifetime.
### What can be cached
Every block in the request can be designated for caching with `cache_control`. This includes:
* Tools: Tool definitions in the `tools` array
* System messages: Content blocks in the `system` array
* Messages: Content blocks in the `messages.content` array, for both user and assistant turns
* Images: Content blocks in the `messages.content` array, in user turns
* Tool use and tool results: Content blocks in the `messages.content` array, in both user and assistant turns
Each of these elements can be marked with `cache_control` to enable caching for that portion of the request.
### Tracking cache performance
Monitor cache performance using these API response fields, within `usage` in the response (or `message_start` event if [streaming](https://docs.anthropic.com/en/api/messages-streaming)):
* `cache_creation_input_tokens`: Number of tokens written to the cache when creating a new entry.
* `cache_read_input_tokens`: Number of tokens retrieved from the cache for this request.
* `input_tokens`: Number of input tokens which were not read from or used to create a cache.
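For example, a quick check of whether a request hit the cache might look like this sketch, reusing the `response` object from the Python example at the top of this page:
```python
usage = response.usage
print(f"Cache write tokens:    {usage.cache_creation_input_tokens}")
print(f"Cache read tokens:     {usage.cache_read_input_tokens}")
print(f"Uncached input tokens: {usage.input_tokens}")
# A repeated, identical request within the 5-minute TTL should report
# cache_read_input_tokens > 0 and cache_creation_input_tokens == 0.
```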
### Best practices for effective caching
To optimize prompt caching performance:
* Cache stable, reusable content like system instructions, background information, large contexts, or frequent tool definitions.
* Place cached content at the prompt's beginning for best performance.
* Use cache breakpoints strategically to separate different cacheable prefix sections.
* Regularly analyze cache hit rates and adjust your strategy as needed.
### Optimizing for different use cases
Tailor your prompt caching strategy to your scenario:
* Conversational agents: Reduce cost and latency for extended conversations, especially those with long instructions or uploaded documents.
* Coding assistants: Improve autocomplete and codebase Q\&A by keeping relevant sections or a summarized version of the codebase in the prompt.
* Large document processing: Incorporate complete long-form material including images in your prompt without increasing response latency.
* Detailed instruction sets: Share extensive lists of instructions, procedures, and examples to fine-tune Claude's responses. Developers often include an example or two in the prompt, but with prompt caching you can get even better performance by including 20+ diverse examples of high quality answers.
* Agentic tool use: Enhance performance for scenarios involving multiple tool calls and iterative code changes, where each step typically requires a new API call.
* Talk to books, papers, documentation, podcast transcripts, and other longform content: Bring any knowledge base alive by embedding the entire document(s) into the prompt, and letting users ask it questions.
### Troubleshooting common issues
If experiencing unexpected behavior:
* Ensure cached sections are identical and marked with cache\_control in the same locations across calls
* Check that calls are made within the 5-minute cache lifetime
* Verify that `tool_choice` and image usage remain consistent between calls
* Validate that you are caching at least the minimum number of tokens
Note that changes to `tool_choice` or the presence/absence of images anywhere in the prompt will invalidate the cache, requiring a new cache entry to be created.
***
## Cache Storage and Sharing
* **Organization Isolation**: Caches are isolated between organizations. Different organizations never share caches, even if they use identical prompts.
* **Exact Matching**: Cache hits require 100% identical prompt segments, including all text and images up to and including the block marked with cache control. The same block must be marked with cache\_control during cache reads and creation.
* **Output Token Generation**: Prompt caching has no effect on output token generation. The response you receive will be identical to what you would get if prompt caching was not used.
***
## Prompt caching examples
To help you get started with prompt caching, we've prepared a [prompt caching cookbook](https://github.com/anthropics/anthropic-cookbook/blob/main/misc/prompt_caching.ipynb) with detailed examples and best practices.
Below, we've included several code snippets that showcase various prompt caching patterns. These examples demonstrate how to implement caching in different scenarios, helping you understand the practical applications of this feature:
```bash Shell
curl https://api.anthropic.com/v1/messages \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--header "anthropic-beta: prompt-caching-2024-07-31" \
--data \
'{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"system": [
{
"type": "text",
"text": "You are an AI assistant tasked with analyzing legal documents."
},
{
"type": "text",
"text": "Here is the full text of a complex legal agreement: [Insert full text of a 50-page legal agreement here]",
"cache_control": {"type": "ephemeral"}
}
],
"messages": [
{
"role": "user",
"content": "What are the key terms and conditions in this agreement?"
}
]
}'
```
```Python Python
import anthropic
client = anthropic.Anthropic()
response = client.beta.prompt_caching.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
system=[
{
"type": "text",
"text": "You are an AI assistant tasked with analyzing legal documents."
},
{
"type": "text",
"text": "Here is the full text of a complex legal agreement: [Insert full text of a 50-page legal agreement here]",
"cache_control": {"type": "ephemeral"}
}
],
messages=[
{
"role": "user",
"content": "What are the key terms and conditions in this agreement?"
}
]
)
print(response)
```
This example demonstrates basic prompt caching usage, caching the full text of the legal agreement as a prefix while keeping the user instruction uncached.
For the first request:
* `input_tokens`: Number of tokens in the user message only
* `cache_creation_input_tokens`: Number of tokens in the entire system message, including the legal document
* `cache_read_input_tokens`: 0 (no cache hit on first request)
For subsequent requests within the cache lifetime:
* `input_tokens`: Number of tokens in the user message only
* `cache_creation_input_tokens`: 0 (no new cache creation)
* `cache_read_input_tokens`: Number of tokens in the entire cached system message
```bash Shell
curl https://api.anthropic.com/v1/messages \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--header "anthropic-beta: prompt-caching-2024-07-31" \
--data \
'{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"tools": [
{
"name": "get_weather",
"description": "Get the current weather in a given location",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature, either celsius or fahrenheit"
}
},
"required": ["location"]
}
},
# many more tools
{
"name": "get_time",
"description": "Get the current time in a given time zone",
"input_schema": {
"type": "object",
"properties": {
"timezone": {
"type": "string",
"description": "The IANA time zone name, e.g. America/Los_Angeles"
}
},
"required": ["timezone"]
},
"cache_control": {"type": "ephemeral"}
}
],
"messages": [
{
"role": "user",
"content": "What is the weather and time in New York?"
}
]
}'
```
```Python Python
import anthropic
client = anthropic.Anthropic()
response = client.beta.prompt_caching.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
tools=[
{
"name": "get_weather",
"description": "Get the current weather in a given location",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature, either 'celsius' or 'fahrenheit'"
}
},
"required": ["location"]
},
},
# many more tools
{
"name": "get_time",
"description": "Get the current time in a given time zone",
"input_schema": {
"type": "object",
"properties": {
"timezone": {
"type": "string",
"description": "The IANA time zone name, e.g. America/Los_Angeles"
}
},
"required": ["timezone"]
},
"cache_control": {"type": "ephemeral"}
}
],
messages=[
{
"role": "user",
"content": "What's the weather and time in New York?"
}
]
)
```
In this example, we demonstrate caching tool definitions.
The `cache_control` parameter is placed on the final tool (`get_time`) to designate all of the tools as part of the static prefix.
This means that all tool definitions, including `get_weather` and any other tools defined before `get_time`, will be cached as a single prefix.
This approach is useful when you have a consistent set of tools that you want to reuse across multiple requests without re-processing them each time.
For the first request:
* `input_tokens`: Number of tokens in the user message
* `cache_creation_input_tokens`: Number of tokens in all tool definitions and system prompt
* `cache_read_input_tokens`: 0 (no cache hit on first request)
For subsequent requests within the cache lifetime:
* `input_tokens`: Number of tokens in the user message
* `cache_creation_input_tokens`: 0 (no new cache creation)
* `cache_read_input_tokens`: Number of tokens in all cached tool definitions and system prompt
```bash Shell
curl https://api.anthropic.com/v1/messages \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--header "anthropic-beta: prompt-caching-2024-07-31" \
--data \
'{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"system": [
{
"type": "text",
"text": "...long system prompt",
"cache_control": {"type": "ephemeral"}
}
],
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Hello, can you tell me more about the solar system?",
"cache_control": {"type": "ephemeral"}
}
]
},
{
"role": "assistant",
"content": "Certainly! The solar system is the collection of celestial bodies that orbit our Sun. It consists of eight planets, numerous moons, asteroids, comets, and other objects. The planets, in order from closest to farthest from the Sun, are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. Each planet has its own unique characteristics and features. Is there a specific aspect of the solar system you would like to know more about?"
},
{
"role": "user",
"content": [
{
"type": "text",
"text": "Tell me more about Mars.",
"cache_control": {"type": "ephemeral"}
}
]
}
]
}'
```
```Python Python
import anthropic
client = anthropic.Anthropic()
response = client.beta.prompt_caching.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
system=[
{
"type": "text",
"text": "...long system prompt",
"cache_control": {"type": "ephemeral"}
}
],
messages=[
# ...long conversation so far
{
"role": "user",
"content": [
{
"type": "text",
"text": "Hello, can you tell me more about the solar system?",
"cache_control": {"type": "ephemeral"}
}
]
},
{
"role": "assistant",
"content": "Certainly! The solar system is the collection of celestial bodies that orbit our Sun. It consists of eight planets, numerous moons, asteroids, comets, and other objects. The planets, in order from closest to farthest from the Sun, are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. Each planet has its own unique characteristics and features. Is there a specific aspect of the solar system you'd like to know more about?"
},
{
"role": "user",
"content": [
{
"type": "text",
"text": "Tell me more about Mars.",
"cache_control": {"type": "ephemeral"}
}
]
}
]
)
```
In this example, we demonstrate how to use prompt caching in a multi-turn conversation.
The `cache_control` parameter is placed on the system message to designate it as part of the static prefix.
The conversation history (previous messages) is included in the `messages` array. The final turn is marked with cache-control, for continuing in followups.
The second-to-last user message is marked for caching with the `cache_control` parameter, so that this checkpoint can read from the previous cache.
This approach is useful for maintaining context in ongoing conversations without repeatedly processing the same information.
For each request:
* `input_tokens`: Number of tokens in the new user message (will be minimal)
* `cache_creation_input_tokens`: Number of tokens in the new assistant and user turns
* `cache_read_input_tokens`: Number of tokens in the conversation up to the previous turn
***
## FAQ
The cache has a lifetime (TTL) of about 5 minutes. This lifetime is refreshed each time the cached content is used.
You can define up to 4 cache breakpoints in your prompt.
No, prompt caching is currently only available for Claude 3.5 Sonnet, Claude 3 Haiku, and Claude 3 Opus.
To enable prompt caching, include the `anthropic-beta: prompt-caching-2024-07-31` header in your API requests.
Yes, prompt caching can be used alongside other API features like tool use and vision capabilities. However, changing whether there are images in a prompt or modifying tool use settings will break the cache.
Prompt caching introduces a new pricing structure where cache writes cost 25% more than base input tokens, while cache hits cost only 10% of the base input token price.
Currently, there's no way to manually clear the cache. Cached prefixes automatically expire after 5 minutes of inactivity.
You can monitor cache performance using the `cache_creation_input_tokens` and `cache_read_input_tokens` fields in the API response.
Changes that can break the cache include modifying any content, changing whether there are any images (anywhere in the prompt), and altering `tool_choice.type`. Any of these changes will require creating a new cache entry.
Prompt caching is designed with strong privacy and data separation measures:
1. Cache keys are generated using a cryptographic hash of the prompts up to the cache control point. This means only requests with identical prompts can access a specific cache.
2. Caches are organization-specific. Users within the same organization can access the same cache if they use identical prompts, but caches are not shared across different organizations, even for identical prompts.
3. The caching mechanism is designed to maintain the integrity and privacy of each unique conversation or context.
4. It's safe to use `cache_control` anywhere in your prompts. For cost efficiency, it's better to exclude highly variable parts (e.g., user's arbitrary input) from caching.
These measures ensure that prompt caching maintains data privacy and security while offering performance benefits.
Yes! The `anthropic-beta` header takes a comma-separated list, for example `anthropic-beta: prompt-caching-2024-07-31,max-tokens-3-5-sonnet-2024-07-15`.
Yes, it is possible to use prompt caching with your [Batches API](en/docs/build-with-claude/message-batches) requests. However, because asynchronous batch requests can be processed concurrently and in any order, we cannot guarantee that requests in a batch will benefit from caching.
# Be clear, direct, and detailed
When interacting with Claude, think of it as a brilliant but very new employee (with amnesia) who needs explicit instructions. Like any new employee, Claude does not have context on your norms, styles, guidelines, or preferred ways of working.
The more precisely you explain what you want, the better Claude's response will be.
**The golden rule of clear prompting** Show your prompt to a colleague, ideally someone who has minimal context on the task, and ask them to follow the instructions. If they're confused, Claude will likely be too.
## How to be clear, contextual, and specific
* **Give Claude contextual information:** Just like you might be able to better perform on a task if you knew more context, Claude will perform better if it has more contextual information. Some examples of contextual information:
* What the task results will be used for
* What audience the output is meant for
* What workflow the task is a part of, and where this task belongs in that workflow
* The end goal of the task, or what a successful task completion looks like
* **Be specific about what you want Claude to do:** For example, if you want Claude to output only code and nothing else, say so.
* **Provide instructions as sequential steps:** Use numbered lists or bullet points to better ensure that Claude carries out the task the exact way you want it to.
### Examples
Notice that Claude still makes mistakes in the unclear prompting example, such as leaving in a customer's name.
| Role | Unclear Prompt | Clear Prompt |
| ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| User | Please remove all personally identifiable information from these customer feedback messages: \{\{FEEDBACK\_DATA}} | Your task is to anonymize customer feedback for our quarterly review.
Instructions: 1. Replace all customer names with "CUSTOMER\_\[ID]" (e.g., "Jane Doe" → "CUSTOMER\_001"). 2. Replace email addresses with "EMAIL\_\[ID]@example.com". 3. Redact phone numbers as "PHONE\_\[ID]". 4. If a message mentions a specific product (e.g., "AcmeCloud"), leave it intact. 5. If no PII is found, copy the message verbatim. 6. Output only the processed messages, separated by "---".
Data to process: \{\{FEEDBACK\_DATA}} |
| Assistant (Claude's response) | Here are the anonymized customer feedback messages: 1. I've been using your software for years, and it's been great. But lately, AcmeCloud has been crashing a lot. Fix it! 2. XXX, your customer support rep, was incredibly helpful. He guided me through the whole process. My email is XXX if you want to send a survey. 3. I'm John Smith, and I'm not happy. I've been on hold for 2 hours trying to reach someone at XXX. This is unacceptable! | CUSTOMER\_001: I've been using your software for years, and it's been great. But lately, AcmeCloud has been crashing a lot. Fix it! --- CUSTOMER\_002: CUSTOMER\_SERVICE\_001, your customer support rep, was incredibly helpful. He guided me through the whole process. My email is [EMAIL\_002@example.com](mailto:EMAIL_002@example.com) if you want to send a survey. --- CUSTOMER\_003: I'm CUSTOMER\_003, and I'm not happy. I've been on hold for 2 hours trying to reach someone at PHONE\_001. This is unacceptable! |
Notice that Claude makes up details to fill in the gaps where it lacks context with the vague prompt.
| Role | Vague Prompt | Specific Prompt |
| ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| User | Write a marketing email for our new AcmeCloud features. | Your task is to craft a targeted marketing email for our Q3 AcmeCloud feature release.
Instructions: 1. Write for this target audience: Mid-size tech companies (100-500 employees) upgrading from on-prem to cloud. 2. Highlight 3 key new features: advanced data encryption, cross-platform sync, and real-time collaboration. 3. Tone: Professional yet approachable. Emphasize security, efficiency, and teamwork. 4. Include a clear CTA: Free 30-day trial with priority onboarding. 5. Subject line: Under 50 chars, mention "security" and "collaboration". 6. Personalization: Use \{\{COMPANY\_NAME}} and \{\{CONTACT\_NAME}} variables.
Structure: 1. Subject line 2. Email body (150-200 words) 3. CTA button text |
| Assistant (Claude's response) | Here's a draft marketing email for our new features:
Subject: Boost Your Business with AcmeCloud's Latest Features
Dear Valued Customer,
We're excited to introduce three powerful features to AcmeCloud that will revolutionize your workflow:
1. Real-Time Collaboration: Work on documents simultaneously with your team. No more version control issues! 2. AI-Powered Insights: Make data-driven decisions with AI that analyzes trends and automates tasks. 3. Advanced Security Suite: End-to-end encryption, multi-factor auth, and threat detection keep your data safe.
Limited Offer: Upgrade now for a 20% first-year discount. Plus, refer a friend for an extra 10% off!
Best, The AcmeCloud Team | Subject line: AcmeCloud: Enterprise security meets real-time teamwork
Email body: Dear \{\{CONTACT\_NAME}},
At \{\{COMPANY\_NAME}}, you're scaling fast and eyeing the cloud. But with growth comes the challenge of keeping data secure while fostering collaboration. That's where AcmeCloud's Q3 release comes in, tailor-made for tech innovators like you.
1. Fort Knox for Your Files: Military-grade encryption now safeguards your data, both in motion and at rest. Your intellectual property stays yours, period. 2. Sync Without Borders: Windows, Mac, or Linux—AcmeCloud speaks all languages. Unified version control means no more "final\_v2\_REALLY\_FINAL.doc". 3. Collaborate at Light Speed: Ditch the attachment ping-pong. Now your devs, designers, and data scientists co-create in real-time, slashing time-to-market.
Don't just take our word for it. Experience the AcmeCloud difference with our 30-day trial. And because we know time is your most precious asset, we're throwing in priority onboarding.
Elevate Your Team Now - Free 30-Day Trial |
Notice that Claude outputs superfluous text and different formating with the vague prompt.
| Role | Vague Prompt | Detailed Prompt |
| ----------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| User | Analyze this AcmeCloud outage report and summarize the key points.
\{\{REPORT}} | Analyze this AcmeCloud outage report. Skip the preamble. Keep your response terse and write only the bare bones necessary information. List only: 1) Cause 2) Duration 3) Impacted services 4) Number of affected users 5) Estimated revenue loss.
Here's the report: \{\{REPORT}} |
| Assistant (Claude's response) | Here's a summary of the key points from the AcmeCloud outage report:
On May 15th, a misconfigured load balancer update caused significant traffic routing issues, resulting in a service outage from 09:15 to 13:52 UTC. The 4-hour 37-minute disruption impacted our core file syncing service and admin dashboard, rendering them inaccessible. Additionally, the mobile app experienced partial functionality loss. Approximately 68% of our user base, totaling 1.3 million users, were affected. Based on our Service Level Agreements (SLAs) and average revenue per user, we estimate a financial impact of \$420,000 in service credits and potential customer churn. | 1) Cause: Misconfigured load balancer update 2) Duration: 4h 37m (09:15-13:52 UTC, May 15) 3) Impacted: Core sync, admin dashboard (down); mobile app (partial) 4) Affected users: 1.3M (68% of base) 5) Est. revenue loss: \$420,000 |
***
Get inspired by a curated selection of prompts for various tasks and use cases.
An example-filled tutorial that covers the prompt engineering concepts found in our docs.
A lighter weight version of our prompt engineering tutorial via an interactive spreadsheet.
# Let Claude think (chain of thought prompting) to increase performance
When faced with complex tasks like research, analysis, or problem-solving, giving Claude space to think can dramatically improve its performance. This technique, known as chain of thought (CoT) prompting, encourages Claude to break down problems step-by-step, leading to more accurate and nuanced outputs.
## Before implementing CoT
### Why let Claude think?
* **Accuracy:** Stepping through problems reduces errors, especially in math, logic, analysis, or generally complex tasks.
* **Coherence:** Structured thinking leads to more cohesive, well-organized responses.
* **Debugging:** Seeing Claude's thought process helps you pinpoint where prompts may be unclear.
### Why not let Claude think?
* Increased output length may impact latency.
* Not all tasks require in-depth thinking. Use CoT judiciously to ensure the right balance of performance and latency.
Use CoT for tasks that a human would need to think through, like complex math, multi-step analysis, writing complex documents, or decisions with many factors.
***
## How to prompt for thinking
The chain of thought techniques below are **ordered from least to most complex**. Less complex methods take up less space in the context window, but are also generally less powerful.
**CoT tip**: Always have Claude output its thinking. Without outputting its thought process, no thinking occurs!
* **Basic prompt**: Include "Think step-by-step" in your prompt.
* Lacks guidance on *how* to think (which is especially not ideal if a task is very specific to your app, use case, or organization)
| Role | Content |
| ---- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| User | Draft personalized emails to donors asking for contributions to this year's Care for Kids program.
Program information: \\{\{PROGRAM\_DETAILS}} \
Donor information: \\{\{DONOR\_DETAILS}} \
Think step-by-step before you write the email. |
* **Guided prompt**: Outline specific steps for Claude to follow in its thinking process.
* Lacks structuring to make it easy to strip out and separate the answer from the thinking.
| Role | Content |
| ---- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| User | Draft personalized emails to donors asking for contributions to this year's Care for Kids program.
Program information: \\{\{PROGRAM\_DETAILS}} \
Donor information: \\{\{DONOR\_DETAILS}} \
Think before you write the email. First, think through what messaging might appeal to this donor given their donation history and which campaigns they've supported in the past. Then, think through what aspects of the Care for Kids program would appeal to them, given their history. Finally, write the personalized donor email using your analysis. |
* **Structured prompt**: Use XML tags like `` and `` to separate reasoning from the final answer.
| Role | Content |
| ---- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| User | Draft personalized emails to donors asking for contributions to this year's Care for Kids program.
Program information: \\{\{PROGRAM\_DETAILS}} \
Donor information: \\{\{DONOR\_DETAILS}} \
Think before you write the email in \ tags. First, think through what messaging might appeal to this donor given their donation history and which campaigns they've supported in the past. Then, think through what aspects of the Care for Kids program would appeal to them, given their history. Finally, write the personalized donor email in \ tags, using your analysis. |
### Examples
Without step-by-step thinking, Claude gives a recommendation that seems reasonable but lacks depth. It doesn't quantify the potential outcomes or consider historical market performance, which are crucial for making an informed decision.
| Role | Content |
| --------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| User | You're a financial advisor. A client wants to invest \$10,000. They can choose between two options: A) A stock that historically returns 12% annually but is volatile, or B) A bond that guarantees 6% annually. The client needs the money in 5 years for a down payment on a house. Which option do you recommend? |
| Assistant | Based on the information provided, I would recommend option B, the bond that guarantees a 6% annual return. Here's why:
1. Certainty: The bond offers a guaranteed 6% annual return, which provides certainty in financial planning. This is crucial when the client has a specific financial goal (down payment) with a fixed timeline (5 years). 2. Risk aversion: Since the client needs this money for an important life event (buying a house), they likely have a low risk tolerance. The volatility of the stock market could jeopardize their plans if there's a downturn close to when they need the funds. 3. Time horizon: With only 5 years, there's less time to recover from potential stock market downturns. Longer investment horizons (10+ years) are generally better suited for weathering stock market volatility.
While the stock's historical 12% return is attractive, past performance doesn't guarantee future results. The guaranteed 6% from the bond ensures the client will have the funds they need when it's time to make the down payment. |
With step-by-step thinking, the advisor provides a much richer analysis. It calculates exact figures for both scenarios, considers historical market volatility, and deeply analyzes the client's risk tolerance. This thorough reasoning leads to a more confident and justifiable recommendation.
| Role | Content |
| --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| User | You're a financial advisor. A client wants to invest \$10,000. They can choose between two options: A) A stock that historically returns 12% annually but is volatile, or B) A bond that guarantees 6% annually. The client needs the money in 5 years for a down payment on a house. Which option do you recommend? Think step-by-step. |
| Assistant | To determine the best investment option for our client, I'll consider several factors step-by-step:
\ 1. Understand the client's goal and timeline: - Goal: Down payment for a house - Timeline: 5 years - Implication: This is a short-term, high-priority financial goal. The client likely cannot afford significant risk or volatility.
2. Analyze Option A (Stock): - Pro: Higher historical return (12% annually) - Con: Volatile (implying potential for significant gains or losses) - Calculate potential 5-year return: - Best case (12% annually): \$10,000 \* (1.12)^5 ≈ \$17,623.42 - Worst case (market crash): Could lose a substantial portion, e.g., 50% crash = \$5,000
4. Consider risk tolerance: - Goal is a house down payment (essential, not discretionary) - Short timeline (5 years) limits ability to recover from market downturns - Conclusion: Client likely has low risk tolerance
5. Historical perspective: - Stock's 12% is historical, not guaranteed - In 5-year periods, stock market can vary wildly: - 2003-2007 (pre-financial crisis): S\&P 500 up \~82% - 2007-2011 (during crisis): S\&P 500 down \~2% - Conclusion: Past performance doesn't guarantee future results, especially short-term
6. Final analysis: - Option A (Stock): Higher potential return but unacceptable risk of not meeting the down payment goal - Option B (Bond): Guaranteed to meet \~80% of best-case stock scenario, 100% certainty of having funds \
\ I recommend Option B, the bond with a guaranteed 6% annual return. While the stock's historical 12% return is tempting, the bond ensures you'll have \$13,382.25 in 5 years for your house down payment. Given the importance and short timeline of your goal, the stock's volatility poses an unacceptable risk. The bond provides certainty, which is invaluable for such a crucial financial milestone. \ |
***
Get inspired by a curated selection of prompts for various tasks and use cases.
An example-filled tutorial that covers the prompt engineering concepts found in our docs.
A lighter weight version of our prompt engineering tutorial via an interactive spreadsheet.
# Chain complex prompts for stronger performance
When working with complex tasks, Claude can sometimes drop the ball if you try to handle everything in a single prompt. Chain of thought (CoT) prompting is great, but what if your task has multiple distinct steps that each require in-depth thought?
Enter prompt chaining: breaking down complex tasks into smaller, manageable subtasks.
## Why chain prompts?
1. **Accuracy**: Each subtask gets Claude's full attention, reducing errors.
2. **Clarity**: Simpler subtasks mean clearer instructions and outputs.
3. **Traceability**: Easily pinpoint and fix issues in your prompt chain.
***
## When to chain prompts
Use prompt chaining for multi-step tasks like research synthesis, document analysis, or iterative content creation. When a task involves multiple transformations, citations, or instructions, chaining prevents Claude from dropping or mishandling steps.
**Remember:** Each link in the chain gets Claude's full attention!
**Debugging tip**: If Claude misses a step or performs poorly, isolate that step in its own prompt. This lets you fine-tune problematic steps without redoing the entire task.
***
## How to chain prompts
1. **Identify subtasks**: Break your task into distinct, sequential steps.
2. **Structure with XML for clear handoffs**: Use XML tags to pass outputs between prompts.
3. **Have a single-task goal**: Each subtask should have a single, clear objective.
4. **Iterate**: Refine subtasks based on Claude's performance.
### Example chained workflows:
* **Multi-step analysis**: See the legal and business examples below.
* **Content creation pipelines**: Research → Outline → Draft → Edit → Format.
* **Data processing**: Extract → Transform → Analyze → Visualize.
* **Decision-making**: Gather info → List options → Analyze each → Recommend.
* **Verification loops**: Generate content → Review → Refine → Re-review.
**Optimization tip**: For tasks with independent subtasks (like analyzing multiple docs), create separate prompts and run them in parallel for speed.
### Advanced: Self-correction chains
You can chain prompts to have Claude review its own work! This catches errors and refines outputs, especially for high-stakes tasks.
#### Prompt 1
| Role | Content |
| ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| User | Summarize this medical research paper.
\\{\{RESEARCH\_PAPER}}\
Focus on methodology, findings, and clinical implications. |
| Assistant \{\{SUMMARY}} | Methodology: Double-blind RCT with 500 Type 2 diabetes patients. Group A (n=250) received novel GLP-1/GIP dual agonist; Group B (n=250) received standard GLP-1 agonist. 24-week trial, primary endpoint: HbA1c reduction.
Findings: - Group A: 1.8% mean HbA1c reduction - Group B: 1.1% mean HbA1c reduction - 68% of Group A vs. 52% of Group B achieved HbA1c \<7% - Group A: 5.2kg mean weight loss - Group B: 3.1kg mean weight loss
Safety: Similar adverse event profiles, no serious events attributed to treatment.
Clinical Implications: Dual agonist shows superior glycemic control and weight loss compared to standard therapy. Could be a game-changer, especially for obese diabetics. |
#### Prompt 2
| Role | Content |
| ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| User | Your task is to provide feedback on a research paper summary. Here is a summary of a medical research paper: \ \{\{SUMMARY}} \
Here is the research paper: \ \{\{RESEARCH\_PAPER}} \
Review this summary for accuracy, clarity, and completeness on a graded A-F scale. |
| Assistant \{\{FEEDBACK}} | 1. Accuracy: A - Correctly reported key stats: HbA1c reductions, weight loss. - Accurately noted safety profile and lack of serious adverse events. - Assertion of "game-changer" is justified by significantly better outcomes.
2. Clarity: A- - Concise and well-structured with clear sections. - Good use of comparative language (e.g., "up 3% from Q1") for context. - Minor issue: Acronyms like "RCT," "GLP-1," and "GIP" not defined. Consider spelling out first use.
3. Completeness: B+ - Covers all requested areas: methodology, findings, clinical implications. - Missing: Baseline characteristics of groups (were they well-matched?). - Missing: Any secondary endpoints (e.g., blood pressure, lipids). - Consider: Long-term implications or need for further research.
Overall, the summary is accurate and clear, providing key insights for a quick grasp of the study. To elevate to A+, define acronyms, mention group baseline stats, note any secondary outcomes, and briefly touch on the need for long-term studies. |
#### Prompt 3
| Role | Content |
| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| User | Your task is to improve a paper summary given feedback. Here is the first draft of a medical research paper: \ \{\{SUMMARY}} \
Here is the research paper: \ \{\{RESEARCH\_PAPER}} \
Here is the feedback: \ \{\{FEEDBACK}} \
Update the summary based on the feedback. |
| Assistant | Methodology: Double-blind randomized controlled trial (RCT) with 500 Type 2 diabetes patients. Group A (n=250) received a novel glucagon-like peptide-1/glucose-dependent insulinotropic polypeptide (GLP-1/GIP) dual agonist; Group B (n=250) received a standard GLP-1 agonist. The 24-week trial's primary endpoint was HbA1c reduction. Groups were well-matched for age, BMI, and baseline HbA1c.
Findings: - Group A: 1.8% mean HbA1c reduction - Group B: 1.1% mean HbA1c reduction - 68% of Group A vs. 52% of Group B achieved HbA1c \<7% - Group A: 5.2kg mean weight loss - Group B: 3.1kg mean weight loss
Secondary endpoints showed improvements in blood pressure and lipid profiles for both groups, with Group A showing slightly better results.
Safety: Similar adverse event profiles, with no serious events attributed to either treatment.
Clinical Implications: The dual GLP-1/GIP agonist demonstrates superior glycemic control and weight loss compared to standard GLP-1 therapy. These results suggest it could be a significant advancement, particularly for obese diabetics. However, longer-term studies are needed to confirm sustained efficacy and safety, as well as to investigate potential benefits on cardiovascular outcomes. |
***
## Examples
Note that Claude misses the instruction to provide proposed changes in its email draft.
| Role | Content |
| --------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| User | You're our Chief Legal Officer. Review this SaaS contract for risks, focusing on data privacy, SLAs, and liability caps.
\ \{\{CONTRACT}} \
Then draft an email to the vendor with your concerns and proposed changes. |
| Assistant | After reviewing the SaaS contract, I've identified several concerns:
1. Data Privacy (Section 7): - Issue: Vendor has broad rights to use client data for "service improvement." - Risk: Could lead to unauthorized data sharing or monetization.
2. SLAs (Section 5): - Issue: 99.5% uptime guarantee is below industry standard. - Risk: Excessive downtime could impact our operations.
3. Liability (Section 9): - Issue: Cap set at 3 months of fees. - Risk: Inadequate for potential data breaches or service failures.
Draft email:
Dear \{\{VENDOR}},
After reviewing the SaaS agreement, we have concerns:
1. Data Privacy: We require that our data only be used for direct service provision, not broad "service improvement." Please amend Section 7.
2. SLAs: 99.5% uptime is insufficient. We need 99.99% with clearly defined penalties. Please revise Section 5.
3. Liability: The current cap of 3 months' fees is inadequate. We propose a cap of 12 months' fees, with no cap for data breaches or willful misconduct.
We value our potential partnership and believe these changes will set us up for success. Please send a revised agreement.
Best regards, \{\{CLO\_NAME}} |
#### Prompt 1
| Role | Content |
| ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| User | You're our Chief Legal Officer. Review this SaaS contract for risks, focusing on data privacy, SLAs, and liability caps.
\ \{\{CONTRACT}} \
Output your findings in \ tags. |
| Assistant \{\{CONCERNS}} | 1. Data Privacy (Section 7): - Issue: Vendor has broad rights to use client data for "service improvement." - Risk: Could lead to unauthorized data sharing or monetization of client data. - Impact: Potential breach of client trust, regulatory fines (GDPR, CCPA).
2. SLAs (Section 5): - Issue: 99.5% uptime guarantee is below industry standard (typically 99.99%). - Risk: Up to 43.8 hours of downtime per year vs. 52.6 minutes at 99.99%. - Impact: Operational disruptions, revenue loss, client dissatisfaction.
3. Liability (Section 9): - Issue: Liability cap set at 3 months of fees. - Risk: Inadequate for potential data breaches, IP theft, or prolonged service failures. - Impact: Insufficient compensation for damages that could far exceed the cap, like class-action lawsuits or regulatory fines. |
#### Prompt 2
| Role | Content |
| -------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| User | Draft an email to a SaaS product vendor outlining the following concerns and proposing changes. Here are the concerns: \ \{\{CONCERNS}} \
|
| Assistant \{\{EMAIL}} | Dear \{\{VENDOR}},
After a thorough review of the proposed SaaS agreement, our legal team has identified several high-priority concerns that must be addressed before we can proceed:
1. Data Privacy (Section 7): - Current: Broad rights to use client data for "service improvement." - Concern: Potential unauthorized data sharing or monetization. - Proposed Change: Limit data use to direct service provision. Any use for improvement, analytics, or marketing must be opt-in, revocable, and anonymized.
2. SLAs (Section 5): - Current: 99.5% uptime guarantee. - Concern: Up to 43.8 hours of downtime per year, risking operational disruptions. - Proposed Change: 99.99% uptime with tiered service credits. E.g., 10% credit for \<99.99%, 25% for \<99.9%, 50% for \<99.5%.
3. Liability (Section 9): - Current: Cap at 3 months of fees. - Concern: Grossly inadequate for data breaches, IP theft, or prolonged outages. - Proposed Change: 12 months' fees for general liability. Uncapped for data breaches, willful misconduct, IP infringement.
These changes are non-negotiable for us. They align with industry standards and our commitment to data security, operational excellence, and fiduciary responsibility. We value the potential of our partnership, and addressing these will set a strong foundation.
Please have your legal team review and incorporate these changes. We're happy to discuss further to ensure mutual understanding and agreement.
Best regards, \{\{CLO\_NAME}} |
#### Prompt 3
| Role | Content |
| --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| User | Your task is to review an email and provide feedback. Here is the email: \ \{\{EMAIL}} \
Give feedback on tone, clarity, and professionalism. |
| Assistant | 1. Tone: - Professional and assertive without being confrontational. - Use of "non-negotiable" is strong but justified given the criticality of the issues. - Collaborative language ("we value," "happy to discuss") balances the firmness, maintaining a constructive tone.
2. Clarity: - Excellent structuring: Current state, concern, and proposed change for each issue. - Technical terms (e.g., "EBITDA," "tiered service credits") are appropriate for a B2B SaaS context. - Action items are clear: vendor's legal team to review and incorporate changes.
3. Professionalism: - Focuses on mutual benefit and industry standards, not just our demands. - Mentions broader responsibilities (data security, operational excellence, fiduciary duty), elevating the discussion. - Offers further discussion, showing commitment to understanding and resolving issues.
Overall, the email strikes the right balance of assertiveness and collaboration. It clearly articulates our concerns, proposes specific solutions, and maintains a professional, partnership-oriented tone. No revisions needed; proceed with sending. |
#### Prompt 1
| Role | Content |
| ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| User | As a senior solutions architect, review and analyze this multitenancy strategy for our new enterprise SaaS product.
\ \{\{STRATEGY}} \
Focus on scalability, security, and cost-effectiveness. |
| Assistant \{\{ANALYSIS}} | 1. Scalability: + Database sharding by tenant reduces contention, supports high growth. + Microservices allow independent scaling of components. - Shared app servers could bottleneck under extreme load. - No mention of auto-scaling or load balancing for app tier. 2. Security: + Tenant data isolation via sharding mitigates data leaks. + JWT with short expiry and refresh tokens is industry best practice. - No mention of encryption (at rest, in transit) or key management. - IAM strategy lacks detail on roles, permissions, or RBAC. 3. Cost-effectiveness: + Shared app servers reduce idle resource costs. + Serverless functions for infrequent tasks optimize costs. - Per-tenant DB could increase costs at scale vs. optimized multi-tenant DB. - No discussion of reserved instances or spot instances for cost saving. |
#### Prompt 2
| Role | Content |
| ---------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| User | Draft a strategy review document for engineering leadership based on this analysis of a multitenancy strategy. \{\{STRATEGY}} \{\{ANALYSIS}} Include an executive summary, detailed analysis, and recommendations. |
| Assistant \{\{STRATEGY\_DOC}} | # Multitenancy Strategy Review: SaaS Enterprise Product ## Executive Summary The proposed strategy demonstrates a solid foundation in scalability and security principles but has gaps in implementation details. It excels in data isolation and component-level scaling but lacks robust app-tier scaling and advanced security measures. Cost optimizations are present but may falter at high scale. With targeted enhancements, this strategy can effectively support our enterprise SaaS ambitions. ## Detailed Analysis ### 1. Scalability: B+ Strengths: - Database sharding by tenant is excellent. It minimizes query latency, supports high data growth, and facilitates seamless per-tenant scaling. - Microservices architecture allows scaling individual components based on demand, crucial for handling diverse enterprise workloads. Weaknesses: - Shared application servers risk becoming a bottleneck. As tenant count and traffic grow, these could degrade performance across all tenants. - Absence of auto-scaling and load balancing for the app tier is a critical oversight. These are essential for handling traffic spikes without over-provisioning. ### 2. Security: B- Strengths: - Sharded databases provide strong tenant data isolation, mitigating risks of data leaks or unauthorized access between tenants. - JWT with short expiry and refresh tokens align with OWASP best practices, reducing token vulnerability windows. Weaknesses: - No mention of data encryption at rest or in transit (TLS 1.3). This is non-negotiable for enterprise data. - Key management strategy is missing. How are encryption keys, JWT secrets, and DB credentials managed and rotated? - IAM lacks depth. We need RBAC with well-defined roles, attribute-based access control (ABAC) for fine-grained permissions, and integration with enterprise identity providers (SAML, OIDC). ### 3. Cost-effectiveness: B Strengths: - Shared app servers reduce idle capacity costs, good for steady-state traffic. - Serverless for infrequent tasks (e.g., reporting) is smart; pay only when these resources are used. Weaknesses: - Per-tenant databases may not be cost-effective at scale compared to a well-designed multi-tenant database with robust data partitioning. - No mention of cost-saving measures like reserved instances for predictable workloads or spot instances for fault-tolerant tasks. - Lack of app-tier auto-scaling could lead to over-provisioning, increasing costs. ## Recommendations 1. **App Tier Resilience**: Implement auto-scaling with Kubernetes or cloud-native services (EKS, GKE). Deploy across availability zones with a CDN for static assets. 2. **Security Hardening**: a. Encrypt data at rest (AES-256) and in transit (TLS 1.3). b. Use a managed KMS (AWS KMS, GCP KMS) for key management. c. Enhance IAM with RBAC, ABAC, and SAML/OIDC for enterprise SSO. 3. **Cost Optimization**: a. Evaluate columnar multi-tenant DB (e.g., Snowflake) vs. sharding at scale. b. Use reserved instances for app servers, spot instances for non-critical workloads. c. Implement proactive monitoring and right-sizing to avoid overprovisioning. 4. **Proof of Concept (PoC)**: Before full implementation, conduct a PoC with a high-scale simulation: a. Test sharded DB vs. optimized multi-tenant DB for cost and performance. b. Stress-test app tier with simulated enterprise-level traffic spikes. c. Conduct third-party security audit, focusing on data isolation and access controls. With these enhancements, our multi-tenancy strategy will be enterprise-grade: scalable, secure, and cost-effective. The PoC will validate our design choices and instill confidence in our enterprise clients. |
#### Prompt 3
| Role | Content |
| --------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| User | Grade this strategy review document for clarity, actionability, and alignment with enterprise priorities. \{\{PRIORITIES}} \{\{STRATEGY\_DOC}} |
| Assistant | 1. Clarity: A + Clear structure with Executive Summary, Detailed Analysis, and Recommendations. + Analysis uses a grading system (B+, B-) for quick grasp of strengths/weaknesses. + Technical terms (RBAC, ABAC, TLS) are explained or contextualized, aiding non-technical leadership. + Visual aids (bullet points, indentation) enhance readability. 2. Actionability: A- + Recommendations are specific and technical (e.g., "Implement auto-scaling with Kubernetes"). + PoC plan with clear objectives provides a roadmap. + Missing: Timelines, resource requirements, or team assignments for each recommendation. + Consider: Add a "Next Steps" section with immediate actions, roles, and a high-level timeline. 3. Enterprise Alignment: A+ + Directly addresses enterprise priorities: - Scalability: Discusses handling enterprise-level traffic spikes. - Security: Focuses on data isolation, encryption, and enterprise IAM. - Cost: Strategies align with enterprise FinOps (reserved instances, right-sizing). + PoC with "high-scale simulation" and security audit shows commitment to enterprise-grade quality. + Language resonates with enterprise concerns: "unacceptable risk," "enterprise-grade," "instill confidence." Overall, the document excels in clarity and enterprise alignment. The actionability is strong but could be elevated to A+ with a "Next Steps" section. The technical depth, coupled with clear explanations, makes this highly effective for both technical and non-technical leadership. Great work! |
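The three prompts form a chain: Prompt 1's output becomes `{{ANALYSIS}}` in Prompt 2, and Prompt 2's output becomes `{{STRATEGY_DOC}}` in Prompt 3. The sketch below shows one way to wire this up with the Anthropic Python SDK; the model name, placeholder values, and the `run` helper are illustrative assumptions rather than part of the prompts themselves.

```python Python
# A minimal sketch of running the three prompts above as a chain.
# Model name, variable values, and the helper function are assumptions.
import anthropic

client = anthropic.Anthropic()

def run(prompt: str) -> str:
    """Send a single user turn and return the text of Claude's reply."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

strategy = "..."    # the {{STRATEGY}} document
priorities = "..."  # the {{PRIORITIES}} list used in Prompt 3

# Prompt 1: analyze the strategy -> {{ANALYSIS}}
analysis = run(
    "As a senior solutions architect, review and analyze this multitenancy "
    f"strategy for our new enterprise SaaS product.\n\n{strategy}\n\n"
    "Focus on scalability, security, and cost-effectiveness."
)

# Prompt 2: turn the analysis into a review document -> {{STRATEGY_DOC}}
strategy_doc = run(
    "Draft a strategy review document for engineering leadership based on "
    f"this analysis of a multitenancy strategy.\n\n{strategy}\n\n{analysis}\n\n"
    "Include an executive summary, detailed analysis, and recommendations."
)

# Prompt 3: grade the finished document against enterprise priorities
grade = run(
    "Grade this strategy review document for clarity, actionability, and "
    f"alignment with enterprise priorities.\n\n{priorities}\n\n{strategy_doc}"
)
print(grade)
```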
***
# Long context prompting tips
Claude's extended context window (200K tokens for Claude 3 models) enables handling complex, data-rich tasks. This guide will help you leverage this power effectively.
## Essential tips for long context prompts
* **Put longform data at the top**: Place your long documents and inputs (\~20K+ tokens) near the top of your prompt, above your query, instructions, and examples. This can significantly improve Claude's performance across all models.
Queries at the end can improve response quality by up to 30% in tests, especially with complex, multi-document inputs.
* **Structure document content and metadata with XML tags**: When using multiple documents, wrap each document in `<document>` tags with `<document_contents>` and `