Long context window tips

Claude's extended context window (200K tokens for Claude 3 models) enables it to handle complex tasks that require processing large amounts of information. It also lets you simplify workflows that previously required splitting inputs to fit within shorter context windows. By combining those inputs into a single prompt, you can streamline your process and take full advantage of Claude's capabilities.

For example, if your previous application required splitting a long document into multiple parts and processing each part separately, you can now provide the entire document to Claude in a single prompt. This not only simplifies your code but also allows Claude to have a more comprehensive understanding of the context, potentially leading to better results.

Looking for general prompt engineering techniques? Check out our prompt engineering guide.


Structuring long documents

When working with long documents (particularly 30K+ tokens), it's essential to structure your prompts so that the input data is clearly separated from the instructions. We recommend using XML tags to encapsulate each document. Claude was trained to ingest long documents in this structure, so it is the format Claude is most familiar with:

Here are some documents for you to reference for your task:

<documents>
<document index="1">
<source>
(a unique identifying source for this item - could be a URL, file name, hash, etc)
</source>
<document_content>
(the text content of the document - could be a passage, web page, article, etc)
</document_content>
</document>
<document index="2">
<source>
(a unique identifying source for this item - could be a URL, file name, hash, etc)
</source>
<document_content>
(the text content of the document - could be a passage, web page, article, etc)
</document_content>
</document>
...
</documents>

[Rest of prompt]

This structure makes it clear to Claude which parts of the prompt are input data and which are instructions, improving its ability to process the information accurately. You can also add tags to house other metadata, such as <title> or <author>.
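
If you're assembling this structure programmatically, a small helper keeps the formatting consistent. Here's a minimal Python sketch; the build_documents_block function name and the list-of-dicts input shape are illustrative assumptions, not part of any Anthropic SDK:

def build_documents_block(docs: list[dict]) -> str:
    """Format documents as a <documents>...</documents> block for a Claude prompt.

    Each dict is assumed to have "source" and "content" keys.
    """
    parts = ["<documents>"]
    for i, doc in enumerate(docs, start=1):
        parts.append(f'<document index="{i}">')
        parts.append(f"<source>\n{doc['source']}\n</source>")
        parts.append(f"<document_content>\n{doc['content']}\n</document_content>")
        parts.append("</document>")
    parts.append("</documents>")
    return "\n".join(parts)

prompt = (
    "Here are some documents for you to reference for your task:\n\n"
    + build_documents_block(
        [
            {"source": "annual_report_2023.pdf", "content": "..."},
            {"source": "https://example.com/press-release", "content": "..."},
        ]
    )
    + "\n\n[Rest of prompt]"
)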


Document-query placement

Notice in the example above that the documents come first and the rest of the prompt comes after. For situations with long documents or a lot of additional background content, Claude generally performs noticeably better if the documents and additional material are placed up top, above the detailed instructions or user query.

This is true of all Claude models, from legacy models to the Claude 3 family.


Tips for document Q&A

When using Claude for document question-answering tasks, keep these tips in mind:

  • Place the question at the end of the prompt, after the input data. As mentioned, this has been shown to significantly improve the quality of Claude's responses.
  • Ask Claude to find quotes relevant to the question before answering, and to only answer if it finds relevant quotes. This encourages Claude to ground its responses in the provided context and reduces hallucination risk.
  • Instruct Claude to read the document carefully, as it will be asked questions later. This primes Claude to pay close attention to the input data with an eye for the task it will be asked to execute.

Here's an example prompt that incorporates these tips:

Role: User

Content:
I'm going to give you a document. Read the document carefully, because I'm going to ask you a question about it. Here is the document: <document>{{TEXT}}</document>

First, find the quotes from the document that are most relevant to answering the question, and then print them in numbered order in <quotes></quotes> tags. Quotes should be relatively short. If there are no relevant quotes, write "No relevant quotes" instead.

Then, answer the question in <answer></answer> tags. Do not include or reference quoted content verbatim in the answer. Don't say "According to Quote [1]" when answering. Instead make references to quotes relevant to each section of the answer solely by adding their bracketed numbers at the end of relevant sentences.

Thus, the format of your overall response should look like what's shown between the <examples></examples> tags. Make sure to follow the formatting and spacing exactly.

<examples>
[Examples of question + answer pairs, with answers written exactly like how Claude's output should be structured]
</examples>

If the question cannot be answered by the document, say so.

Here is the first question: {{QUESTION}}
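
When calling Claude via the API, you can fill this template and pull the tagged sections back out of the response. Here's a minimal sketch using the Anthropic Python SDK; the condensed template string, the ask_document helper, and the regex-based tag extraction are illustrative assumptions, not an official interface:

import re
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Condensed version of the prompt template shown above.
QA_TEMPLATE = (
    "I'm going to give you a document. Read the document carefully, "
    "because I'm going to ask you a question about it. "
    "Here is the document: <document>{text}</document>\n\n"
    "First, find the quotes from the document that are most relevant to "
    "answering the question, and then print them in numbered order in "
    "<quotes></quotes> tags. If there are no relevant quotes, write "
    '"No relevant quotes" instead.\n\n'
    "Then, answer the question in <answer></answer> tags.\n\n"
    "If the question cannot be answered by the document, say so.\n\n"
    "Here is the first question: {question}"
)

def ask_document(text: str, question: str) -> dict:
    """Send the filled template to Claude and extract the tagged sections."""
    message = client.messages.create(
        model="claude-3-opus-20240229",  # assumption: any Claude 3 model works here
        max_tokens=1024,
        messages=[{"role": "user", "content": QA_TEMPLATE.format(text=text, question=question)}],
    )
    reply = message.content[0].text
    quotes = re.search(r"<quotes>(.*?)</quotes>", reply, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", reply, re.DOTALL)
    return {
        "quotes": quotes.group(1).strip() if quotes else "",
        "answer": answer.group(1).strip() if answer else reply.strip(),
    }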

Multiple choice question generation

When using Claude to generate multiple choice questions based on a given text, providing example question-answer pairs from other parts of the same text can significantly improve the quality of the generated questions. It's important to note that generic multiple choice examples based on external knowledge or generated from an unrelated document do not seem to be nearly as effective.

Here's an example prompt for multiple choice question generation:

Role: User

Content:
Your task is to generate multiple choice questions based on content from the following document:
<document>
{{DOCUMENT}}
</document>

Here are some example multiple choice questions and answers based on other parts of the text:

<examples>
Q1: [Example question 1, created from information within the document]
A. [Answer option A]
B. [Answer option B]
C. [Answer option C]
D. [Answer option D]
Answer: [Correct answer letter]

Q2: [Example question 2, created from information within the document]
A. [Answer option A]
B. [Answer option B]
C. [Answer option C]
D. [Answer option D]
Answer: [Correct answer letter]
</examples>

Instructions:
1. Generate 5 multiple choice questions based on the provided text.
2. Each question should have 4 answer options (A, B, C, D).
3. Indicate the correct answer for each question.
4. Make sure the questions are relevant to the text and the answer options are all plausible.

By providing example questions and answers from the same text, you give Claude a better understanding of the desired output format and the types of questions that can be generated from the given content.
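
To make the assembly concrete, here's a minimal Python sketch of how the prompt above could be built; the make_mcq_prompt helper and its parameters are illustrative assumptions rather than part of any SDK:

def make_mcq_prompt(document: str, examples: str, n_questions: int = 5) -> str:
    """Assemble the multiple-choice-generation prompt shown above.

    `examples` should contain question-answer pairs drawn from other
    parts of the same document, since unrelated examples are far less
    effective.
    """
    return (
        "Your task is to generate multiple choice questions based on "
        "content from the following document:\n"
        f"<document>\n{document}\n</document>\n\n"
        "Here are some example multiple choice questions and answers "
        "based on other parts of the text:\n\n"
        f"<examples>\n{examples}\n</examples>\n\n"
        "Instructions:\n"
        f"1. Generate {n_questions} multiple choice questions based on the provided text.\n"
        "2. Each question should have 4 answer options (A, B, C, D).\n"
        "3. Indicate the correct answer for each question.\n"
        "4. Make sure the questions are relevant to the text and the "
        "answer options are all plausible."
    )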

For more information on this specific task, see Anthropic's blog post Prompt engineering for a long context window.


Additional resources

  • Prompt engineering techniques: Explore other strategies for optimizing your prompts and enhancing Claude's performance.
  • Anthropic cookbook: Browse a collection of Jupyter notebooks featuring copyable code snippets that demonstrate highly effective and advanced techniques, integrations, and implementations using Claude.
  • Prompt library: Get inspired by a curated selection of prompts for various tasks and use cases.