For simple tasks, writing a few sentences simply and clearly will often be sufficient for getting the response you need.
However, for complex tasks, or for processes intended to run over a large number or wide variety of inputs, you will need to think more carefully about how you construct your prompt. Doing so will greatly increase the likelihood that Claude consistently performs these tasks the way you want.
Prompt length
If you’re worried a verbose prompt will be expensive, keep in mind that we charge substantially less for prompt characters than for completion characters.
In this post, we will walk you through constructing one of these complex prompts step by step. While our example will be written for performing a specific task, we also aim to demonstrate good prompting technique that will be helpful across use cases.
# Use the correct format
When prompting Claude through the API, it is very important to use the correct `\n\nHuman:` and `\n\nAssistant:` formatting.
Claude was trained as a conversational agent using these special tokens to mark who is speaking. The `\n\nHuman:` (you) asks a question or gives instructions, and the `\n\nAssistant:` (Claude) responds.
Thus, we can start writing our prompt like this:
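Writing `\n` here to stand for a literal newline character, the bare scaffold is simply:

```
\n\nHuman: \n\nAssistant:
```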
We'll fill our actual prompt text around and in between these two tokens.
# Describe the task well
When describing a task, it is good to give Claude as much context and detail as possible, as well as any rules for completing the task correctly.
Think of Claude as similar to an intern on their first day on the job. Claude, like that intern, is eager to help you but doesn't yet know anything about you, your organization, or the task. It is far more likely to meet your expectations if you give it clear, explicit instructions with all the necessary details.
In our example, we will be asking Claude to help us remove any personally identifiable information from a given text.
We could try using this prompt:
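A bare-bones attempt might look something like this sketch, where `[text to de-identify]` stands in for the actual input:

```
Human: Please remove all personally identifiable information from this text: [text to de-identify]

Assistant:
```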
Here are some example responses:
This prompt works okay if all we want is the PII removed by any means (though it missed one name). It may be good enough for a small number of texts that can be checked manually to correct mistakes after processing.
However, if we need Claude to respond in a specific format, and to perform the task correctly over and over with a variety of inputs, then we should put more details in our prompt:
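A sketch of such a revised prompt (the exact wording is illustrative, not the only workable phrasing):

```
Human: We want to anonymize some text so that it can be shared safely outside our company. Please remove all personally identifiable information from this text and replace it with XXX. It is very important that PII such as names, phone numbers, and home and email addresses get replaced with XXX.

Here is the text to process:
[text to de-identify]

Assistant:
```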
In this revised version of the prompt, we:
Provide context (e.g. why we want the task to be done)
Define terms (PII = names, phone numbers, addresses)
Give specific details about how Claude should accomplish the task (replace PII with XXX)
In general, the more details Claude has about your request, the better it can predict the correct response.
# Mark different parts of the prompt
XML tags like `<tag>these</tag>` are helpful for demarcating some important parts of your prompt, such as rules, examples, or input text to process. Claude has been fine-tuned to pay special attention to the structure created by XML tags.
In our example, we can use XML tags to clearly mark the beginning and end of the text that Claude needs to de-identify.
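For instance, the input could be wrapped like this (the tag name `<text>` is our own choice, not a requirement):

```
Here is the text, inside <text></text> XML tags:

<text>
[text to de-identify]
</text>
```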
Text substitution
Usually, your prompt is actually a prompt template that you want to use over and over, where the instructions stay the same but the text you're processing changes over time. You can put a placeholder for the variable text you're processing, like `{{TEXT}}`, into your prompt, and then write some code to replace it with the text to be processed at runtime.
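Here is a minimal sketch of that substitution in Python, assuming a template that uses a `{{TEXT}}` placeholder and `<text></text>` tags (both names are our own choices):

```python
# A sketch: fill a prompt template with the text to process at runtime.
# The blank line after the opening quotes makes the string start with "\n\nHuman:".
PROMPT_TEMPLATE = """

Human: Please remove all personally identifiable information from the text
inside the <text></text> XML tags below and replace it with XXX.

<text>
{{TEXT}}
</text>

Assistant:"""


def build_prompt(text_to_process: str) -> str:
    """Substitute the runtime input into the {{TEXT}} placeholder."""
    return PROMPT_TEMPLATE.replace("{{TEXT}}", text_to_process)


prompt = build_prompt("Joseph Bloggs lives at 123 Main Street.")
```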
We can also ask Claude to use XML tags in _its_ response. Doing so can make it easy to extract key information in a setting where the output is automatically processed. Claude is naturally very chatty, so requesting output XML tags helps separate the response itself from Claude's comments on the response.
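For example, we can ask Claude to put its rewritten text inside `<response></response>` tags (again, our own tag choice) and then pull out just that part, sketched here in Python:

```python
import re


def extract_deidentified_text(completion: str) -> str:
    """Return the contents of the first <response></response> block, if any."""
    match = re.search(r"<response>(.*?)</response>", completion, re.DOTALL)
    # Fall back to the whole completion if Claude didn't use the tags.
    return match.group(1).strip() if match else completion.strip()
```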
At this point, this prompt is already quite well-constructed and ready to be [tested with a variety of inputs](🔗). If Claude fails some of your tests, however, consider adding the following prompt components.
# Examples (optional)
You can give Claude a better idea of how to perform the task correctly by including a few examples with your prompt. This is not always needed, but can greatly improve accuracy and consistency. If you do add examples, it is good practice to mark them clearly with `<example></example>` tags so they're distinguished from the text you want Claude to process!
One way to provide examples is in the form of a previous conversation. Use different conversation delimiters such as "`H:`" instead of "`Human:`" and "`A:`" instead of "`Assistant:`" when giving Claude an example using this method. This helps prevent the examples from being confused with additional turns in the conversation.
Why H: and A:?
`\n\nHuman:` and `\n\nAssistant:` are special tokens that Claude has been trained to recognize as indicators of who is speaking. Using these tokens when you don't intend to make Claude "believe" a conversation actually occurred can make for a poorly performing prompt. For more detail, see [Human: and Assistant: Formatting](🔗).
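A sketch of an example written in this previous-conversation style, wrapped in `<example></example>` tags (the name, address, and number are invented for illustration):

```
<example>
H: <text>Joseph Bloggs lives at 123 Main Street and can be reached at 555-010-9999.</text>
A: <response>XXX lives at XXX and can be reached at XXX.</response>
</example>
```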
Another way to give examples is by providing them directly:
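For instance, an input/output pair could be embedded straight into the instructions, again marked with `<example></example>` tags (a sketch with invented details):

```
Here is an example of the desired output:

<example>
Input: Joseph Bloggs lives at 123 Main Street and can be reached at 555-010-9999.
Output: XXX lives at XXX and can be reached at XXX.
</example>
```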
Deciding which method is more effective is nuanced and can depend on the specific task at hand. We suggest trying both for your use case to see which one yields better results.
# Difficult cases (optional)
If you can anticipate difficult or unusual cases Claude may encounter in your input, describe them in your prompt and tell Claude what to do when it encounters them.
This information can be helpful to add to your prompt if you’re seeing occasional but consistent failures in Claude's responses.
For example:
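One difficult case for our de-identification task is text that contains no PII at all. A sketch of an instruction covering it:

```
Some of the text may not contain any personally identifiable information. If there is no PII to replace, copy the text word-for-word inside the <response></response> tags without changing anything.
```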
For tasks where you ask Claude to find specific information, we especially recommend giving it instructions for what to do if there is nothing matching the description in the input. This can help prevent Claude from [hallucinating](🔗), i.e. making things up in order to be able to give a response.
# System Prompt (optional)
The API allows you to include text before the first `\n\nHuman:`; this is sometimes called a "system prompt". However, Claude models other than Claude 2.1 do not currently attend to information in this location as strongly or as accurately as they do to text within the conversational turns. It's generally best to put all critical information and instructions in the post-`\n\nHuman:` part of the prompt, particularly if you are not using Claude 2.1.
If you are using Claude 2.1, we encourage you to experiment to see how performance differs for your specific use case between using or not using system prompts. For more information about how to format system prompts correctly with Claude, see our guide on [how to use system prompts](🔗).
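As a sketch of the formatting only (the wording is ours, and whether it helps will depend on your use case), a system prompt is simply text placed before the first `\n\nHuman:` turn:

```
You are an expert at anonymizing text.

Human: Please remove all personally identifiable information from the text inside the <text></text> XML tags below and replace it with XXX.

<text>
{{TEXT}}
</text>

Assistant:
```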