Human: and Assistant: formatting


The following section applies only to prompts using the Text Completions API, not the Messages API. If you are using the Messages API, these formatting tips do not apply.

Human: and Assistant: are special terms that Claude has been trained to treat as indicators of who is speaking. This means you should never have a human message that gives examples of dialogue containing the actual words Human: and Assistant:.
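
Concretely, these are the turn markers that delimit a Text Completions prompt string. The Anthropic Python SDK exposes them as constants; the sketch below (which assumes the anthropic package is installed, and uses a placeholder question) shows what they expand to:

from anthropic import HUMAN_PROMPT, AI_PROMPT

# The two special turn markers Claude is trained on.
print(repr(HUMAN_PROMPT))  # '\n\nHuman:'
print(repr(AI_PROMPT))     # '\n\nAssistant:'

# A valid Text Completions prompt starts with a Human turn and ends with
# an empty Assistant turn, signaling that it is Claude's turn to speak.
prompt = f"{HUMAN_PROMPT} Why is the sky blue?{AI_PROMPT}"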


Use H: and A: for examples

Consider the following prompt:



Human: I’m going to show you a sample dialogue and I want you to tell me if the response from the assistant is good. Here is the sample:

<sample_dialogue>
Human: What is your favorite color?
Assistant: I don’t have a favorite color.
</sample_dialogue>

What do you think of this dialogue?

Assistant:

You might expect the assistant to read this as a single message from the human, just as we do, but the assistant will actually read the dialogue above as follows:

  1. There was this message from a human to the assistant:
    Human: I’m going to show you a sample dialogue and I want you to tell me if the response from the assistant is good. Here is the sample: <sample_dialogue>
  2. Then there was this second message from the human to the assistant:
    Human: What is your favorite color?
  3. Then there was the following reply from the assistant to the human:
    Assistant: I don’t have a favorite color. </sample_dialogue> What do you think of this dialogue?
  4. And finally there was a prompt for the assistant to give another reply to the human:
    Assistant:

This is very confusing to the assistant.

This is why, if you give examples of dialogue, you must replace Human: and Assistant: with something else, such as User: and AI: or H: and A:.
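
If you assemble prompts programmatically, you can apply this substitution with a small helper along these lines (a hypothetical sketch for illustration, not part of the Anthropic SDK):

def sanitize_example_dialogue(dialogue: str) -> str:
    # Swap the special turn markers for neutral labels so an embedded
    # example dialogue is not mistaken for real conversation turns.
    return dialogue.replace("Human:", "H:").replace("Assistant:", "A:")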

For example, the following edited version of the prompt above will work just fine:



Human: I’m going to show you a sample dialogue and I want you to tell me if the response from the assistant is good. Here is the sample:

<sample_dialogue>
H: What is your favorite color?
A: I don’t have a favorite color.
</sample_dialogue>

What do you think of this dialogue?

Assistant:

In this case, the assistant sees a single message from the human that includes a sample dialogue, followed by a prompt for it to respond at the end, which is what we wanted.
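
Put together, sending the corrected prompt through the Text Completions API might look like the following (a sketch using the Python SDK; the model name is a placeholder):

from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The embedded example dialogue uses H:/A:, so the only real turn markers
# Claude sees are the outer HUMAN_PROMPT and AI_PROMPT.
sample = (
    "<sample_dialogue>\n"
    "H: What is your favorite color?\n"
    "A: I don't have a favorite color.\n"
    "</sample_dialogue>"
)

prompt = (
    f"{HUMAN_PROMPT} I'm going to show you a sample dialogue and I want you to "
    f"tell me if the response from the assistant is good. Here is the sample:\n\n"
    f"{sample}\n\nWhat do you think of this dialogue?{AI_PROMPT}"
)

completion = client.completions.create(
    model="claude-2.1",  # placeholder model name
    max_tokens_to_sample=300,
    prompt=prompt,
)
print(completion.completion)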


Use Human: and Assistant: to put words in Claude's mouth

You should use the Human: and Assistant: tokens in your prompt when you want to pass Claude a previous conversation. One way to get Claude to do something is to show it an earlier exchange in which it asked for or agreed to do that thing, like this:



Human: I have two pet cats. One of them is missing a leg. The other one has a normal number of legs for a cat to have. In total, how many legs do my cats have?

Assistant: Can I think step-by-step?

Human: Yes, please do.

Assistant:

In this case, you want Claude to believe that it actually asked to think step-by-step and that you gave it permission to do so. Using the Human: and Assistant: tokens correctly accomplishes this.
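
With the Text Completions API, such a prior exchange is simply additional Human:/Assistant: turns concatenated into the prompt string. A sketch using the Python SDK (the model name is a placeholder):

from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic()

question = (
    "I have two pet cats. One of them is missing a leg. The other one has a "
    "normal number of legs for a cat to have. In total, how many legs do my "
    "cats have?"
)

# Each HUMAN_PROMPT / AI_PROMPT pair is a real turn, so Claude reads the
# middle two turns as things it and the human have already said.
prompt = (
    f"{HUMAN_PROMPT} {question}"
    f"{AI_PROMPT} Can I think step-by-step?"
    f"{HUMAN_PROMPT} Yes, please do."
    f"{AI_PROMPT}"
)

completion = client.completions.create(
    model="claude-2.1",  # placeholder model name
    max_tokens_to_sample=300,
    prompt=prompt,
)
print(completion.completion)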