This guide provides usage tips for our legacy models, as well as tips on migrating your prompts from other models like OpenAI’s GPT onto our legacy models (for migration to our frontier models, see our main migration guide).


Anthropic’s legacy models

Our legacy models include Claude Instant 1.2, Claude 2.0, and Claude 2.1. Of these legacy models, Claude 2.1 is the only model with system prompt support (all Claude 3 models have full system prompt support).

These models do not have the vision capabilities of the Claude 3 family and are generally less performant and intelligent. However, they can still be useful for certain applications that do not require the advanced features of the Claude 3 models. See the models overview page for a full summary.

Please note that legacy models may be deprecated over time and receive less support than newer models, so we recommend planning a migration to the Claude 3 family where possible.


Migrating prompts from OpenAI/ChatGPT to legacy Claude models

If you’re transitioning from OpenAI’s GPT models or ChatGPT to the Claude 2 family of models or older, you will likely need to adjust your prompts to ensure optimal performance. While the Claude 3 family is far more steerable and can generally handle prompt migrations without modification, legacy models may require additional prompt tweaks. (That said, while not strictly necessary, these same techniques and other prompt engineering strategies can still improve Claude 3 model performance beyond its baseline.)

Here are some tips to help you convert your GPT prompts for better results with legacy Claude models:

1. Add XML tags

XML tags (e.g., <tag></tag>) can be used to demarcate different subsections of a prompt, allowing Claude to compartmentalize the prompt into distinct parts. For example, to add text from a document to your prompt, wrap the document in <doc></doc> tags:

XML
<doc>
Some piece of text...
</doc>

Claude can also recognize other structured formats like JSON and Markdown, but XML tends to lead to the best performance in most cases due to exposure during training. You can use any tag names you want, as long as they follow the <> and </> format (although we recommend making tag names somewhat sensible and semantically connected to the content they’re demarcating). For more information, see our guide on using XML tags.
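As a minimal sketch of this in practice (the tag name, variable names, and sample text here are all illustrative), a prompt might be assembled like so:

```python
# Wrap document text in XML tags so Claude can distinguish it from the
# surrounding instructions. The tag name "doc" is arbitrary but sensible.
document = "Some piece of text..."

prompt = (
    "Here is a document:\n"
    f"<doc>\n{document}\n</doc>\n\n"
    "Please summarize the document above in one sentence."
)
print(prompt)
```

Because the document is clearly delimited, Claude will not confuse its contents with the instructions that follow it.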

2. Provide clear and unambiguous instructions

Claude responds well to clear and direct instructions. Instead of leaving room for implicit assumptions, explicitly instruct Claude with as much detail as possible within your prompt to ensure Claude can fully execute the task at hand according to your specifications. For example, instead of:

Role: User
Content: Use the context and the question to create an answer.

Try:

Role: User
Content: Please read the user’s question supplied within the <question> tags. Then, using only the contextual information provided above within the <context> tags, generate an answer to the question and output it within <answer> tags.

When creating prompts for Claude, adopt the mindset that Claude is new to the task and has no prior context other than what is stated in the prompt. Providing detailed and unambiguous explanations will help Claude generate better responses. For more information, see be clear and direct.

3. Prefill Claude’s response

You can extend Claude’s prompt to prefill the Assistant turn. Claude will continue the conversation from the last token in the Assistant message. This can help avoid Claude’s chatty tendencies and ensure it provides the desired output format. For example:

Role: User
Content: I’d like you to rewrite the following paragraph using the following instructions: “{{INSTRUCTIONS}}”.

Here is the paragraph:
<text>“{{PARAGRAPH}}”</text>

Please output your rewrite within <rewrite></rewrite> tags.

Role: Assistant (Prefill)
Content: <rewrite>

If you use this approach, make sure to pass </rewrite> as a stop sequence in your API call. For more information, see our guide on prefilling Claude’s response.
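Concretely, the prompt and stop sequence for the example above might be assembled like this (a minimal sketch with illustrative variable names and sample text; the actual API call is omitted):

```python
# Build a legacy-style completion prompt with a prefilled Assistant turn.
# Claude continues writing from the final "<rewrite>", so its response
# begins inside the tags rather than with conversational preamble.
instructions = "Make it more formal."
paragraph = "hey, the meeting got moved to 3pm."

prompt = (
    "\n\nHuman: I'd like you to rewrite the following paragraph "
    f'using the following instructions: "{instructions}".\n\n'
    "Here is the paragraph:\n"
    f'<text>"{paragraph}"</text>\n\n'
    "Please output your rewrite within <rewrite></rewrite> tags."
    "\n\nAssistant: <rewrite>"  # prefill: generation resumes from here
)

# Pass the closing tag as a stop sequence so generation halts after the rewrite.
stop_sequences = ["</rewrite>"]
```

The completion returned by the API is then exactly the rewritten text, with no surrounding chatter to strip out.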

4. Keep Claude in character

See keep Claude in character for strategies to ensure Claude maintains character in role-play scenarios. Note that for Claude 2.1 (and all Claude 3 models), you can also use a system prompt to help Claude better stay in character.

5. Place documents before instructions

Claude’s long context window (100K-200K depending on the model) makes it great at parsing and analyzing long documents and strings of text. It’s best to provide long documents and text before instructions or user input, as Claude pays extra attention to text near the bottom of the prompt. Make sure to emphasize important instructions near the end of your prompts.
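For instance, a long-document Q&A prompt might be ordered like this (a sketch; the tag names, variable names, and placeholder text are illustrative):

```python
# Legacy Claude models attend most closely to text near the end of the
# prompt, so place long content first and the instructions last.
long_document = "(imagine tens of thousands of tokens of report text here)"
question = "What were the key findings?"

prompt = (
    f"<context>\n{long_document}\n</context>\n\n"
    f"<question>{question}</question>\n\n"
    "Using only the information in the <context> tags above, answer the "
    "question in the <question> tags. Output your answer in <answer> tags."
)
```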

See long context window tips for further information.

6. Add many examples (at least 3)

Claude learns well through examples of how it should respond and in what format. We recommend adding at least three examples to your prompt, but more is better! Examples are especially beneficial for tasks that require consistent and reliable structured outputs. Uniform examples will teach Claude to respond in the same way every time. Learn more by visiting our guide to prompting with examples.
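As an illustration (the classification task, tag names, and sample reviews here are invented for this sketch), a few-shot prompt with three uniform examples might be built like so:

```python
# Three uniformly formatted examples teach Claude the expected
# input/output structure for a sentiment-classification task.
examples = [
    ("The food was amazing!", "positive"),
    ("Terrible service, never again.", "negative"),
    ("It was okay, nothing special.", "neutral"),
]

example_block = "\n".join(
    f"<example>\n<review>{review}</review>\n"
    f"<sentiment>{label}</sentiment>\n</example>"
    for review, label in examples
)

prompt = (
    "Classify the sentiment of the review as positive, negative, or "
    "neutral, following the format of the examples below:\n\n"
    f"{example_block}\n\n"
    "<review>{{REVIEW}}</review>"
)
print(prompt)
```

Keeping every example in the same structure is what makes the output format reliable: Claude mirrors the pattern it sees.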


Legacy model features

Claude outputs asterisk actions

When given a roleplaying prompt or system prompt, legacy Claude models sometimes embellish their responses creatively with stage directions like *smiles* or *waves*. If this is undesired, you can post-process the output to remove text between asterisks.

An example of how to do this in Python:

Python
import re

text = "Hello. *My name is Claude. *I am an AI assistant."
cleaned = re.sub(r'\*.*?\*', '', text)
print(cleaned)
> Hello. I am an AI assistant.