Claude is trained to be a helpful, honest, and harmless assistant. It is accustomed to speaking in _dialogue_, and you can instruct it in regular English.
The quality of the instructions you give Claude can have a large effect on the quality of its outputs, especially for complex tasks. This guide to prompt design will help you learn how to craft prompts that produce accurate and consistent results.
# Claude works by sequence prediction
Claude is a conversational assistant, based on a large language model. The model uses all the text that you have sent it (your prompt) and all the text it has generated so far to predict the next [token](🔗) that would be most helpful.
This means that Claude constructs its responses one set of characters at a time, in order. It cannot go back and edit its response after it has written it unless you give it a chance to do so in a subsequent prompt.
Claude can also only see (and make predictions on) what is in its [context window](🔗). It can't remember previous conversations unless you put them in the prompt, and it can't open links.
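For multi-turn use, this means replaying the earlier turns yourself on every request, using the `\n\nHuman:` / `\n\nAssistant:` formatting described below. A minimal sketch (the conversation content and variable names here are invented for illustration):

```python
# Rebuild the full prompt from stored turns on every request, since Claude
# only sees what is inside the context window of the current prompt.
# The turns below are invented example content.
turns = [
    ("Human", "Give me three names for a pet ferret."),
    ("Assistant", "How about Noodle, Biscuit, or Zorro?"),
    ("Human", "Which of those suits a very sleepy ferret?"),
]

prompt = ""
for role, text in turns:
    prompt += f"\n\n{role}: {text}"
prompt += "\n\nAssistant:"  # leave the final Assistant turn open for Claude to complete
```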
# What is a prompt?
The text that you give Claude is designed to elicit, or "prompt", a relevant output. A prompt is usually in the form of a question or instructions. For example:
The text that Claude responds with is called an "output".
# Human: / Assistant: formatting
Claude is trained to fill in text for the Assistant role as part of an ongoing dialogue between a human user (`Human:`) and an AI assistant (`Assistant:`).
Prompts sent via the API must contain `\n\nHuman:` and `\n\nAssistant:` as the signals of who's speaking. In Slack and our web interface we automatically add these for you.
For example, **in [claude.ai](🔗) or in Claude-in-Slack**, you can just ask Claude:
And it will respond:
If you send the same prompt **to the API**, it may behave in unexpected ways, like making up answers well beyond what was asked for in the prompt. This is because Claude is trained to fill in text for the Assistant role as part of an ongoing dialogue between a human user (`Human:`) and an AI assistant (`Assistant:`). Without this structure, Claude doesn't know what to do or when to stop, so it just keeps on going with the arc that's already present.
The prompt sent to the API must be:
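For instance, with an illustrative question (any user text follows the same shape):

```
\n\nHuman: Why is the sky blue?\n\nAssistant:
```

Note that the prompt ends with `\n\nAssistant:` and nothing after it; that open Assistant turn is what Claude fills in.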
Why?
Claude has been trained and fine-tuned using [RLHF (reinforcement learning with human feedback) methods](🔗) on `\n\nHuman:` and `\n\nAssistant:` data like this, so **you will need to use these prompts in the API** in order to stay “on-distribution” and get the expected results. It's important to remember to have the two newlines before both `Human:` and `Assistant:`, as that's what it was trained on.
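As a concrete sketch, assuming the `anthropic` Python SDK's legacy Text Completions interface (its `HUMAN_PROMPT` and `AI_PROMPT` constants expand to `\n\nHuman:` and `\n\nAssistant:`), a correctly formatted call looks roughly like this:

```python
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

completion = client.completions.create(
    model="claude-2.1",
    max_tokens_to_sample=300,
    # HUMAN_PROMPT and AI_PROMPT supply the "\n\nHuman:" / "\n\nAssistant:" markers.
    prompt=f"{HUMAN_PROMPT} Why is the sky blue?{AI_PROMPT}",
)
print(completion.completion)
```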
If you are using Claude 2.1 and would like to include system prompts as part of your prompts, you can do so by referencing the formatting in [how to use system prompts](🔗).
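In broad strokes, the Claude 2.1 system prompt format places the system text above the first `\n\nHuman:` turn; the linked page has the authoritative details. A hedged sketch (the system prompt and question text are invented):

```python
from anthropic import HUMAN_PROMPT, AI_PROMPT

system_prompt = "You are a careful technical editor."  # invented example
user_question = "Tighten this sentence: 'The results were very unique.'"

# The system text sits before the first Human turn; the Human/Assistant markers are unchanged.
prompt = f"{system_prompt}{HUMAN_PROMPT} {user_question}{AI_PROMPT}"
```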
# Prompt length
The maximum prompt length that Claude can see is its [context window](🔗). For all models except Claude 2.1, Claude's context window is currently ~75,000 words / ~100,000 tokens / ~340,000 Unicode characters. Claude 2.1 has double the context length, at ~150,000 words / ~200,000 tokens / ~680,000 Unicode characters.
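These figures are approximate, and exact token counts depend on the text itself. As a rough sanity check only (not Claude's actual tokenizer), you can estimate from the ratios above, roughly 3.4 Unicode characters per token:

```python
def rough_token_estimate(text: str) -> int:
    """Estimate tokens from the ~3.4 characters-per-token ratio quoted above.

    This is a ballpark sanity check, not Claude's real tokenizer.
    """
    return int(len(text) / 3.4)

# Example: will this (placeholder) document likely fit in a ~200,000-token window?
document = "example text " * 50_000
print(rough_token_estimate(document) <= 200_000)
```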