POST /v1/complete

Headers

anthropic-version
string
required

The version of the Anthropic API you want to use.

Read more about versioning and our version history here.

x-api-key
string
required

Your unique API key for authentication.

This key is required in the header of all API requests to authenticate your account and access Anthropic's services. Get your API key through the Console. Each key is scoped to a Workspace.

Body

application/json
model
string
required

The model that will complete your prompt.

See models for additional details and options.

prompt
string
required

The prompt that you want Claude to complete.

For proper response generation, you will need to format your prompt using alternating \n\nHuman: and \n\nAssistant: conversational turns. For example:

"\n\nHuman: {userQuestion}\n\nAssistant:"

See prompt validation and our guide to prompt design for more details.

Minimum length: 1
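
As an illustration, a prompt with one or more turns could be assembled like this minimal Python sketch (the build_prompt helper is hypothetical, not part of the API):

    def build_prompt(turns):
        # turns: list of (role, text) pairs, where role is "Human" or "Assistant"
        prompt = ""
        for role, text in turns:
            prompt += f"\n\n{role}: {text}"
        # End with an open Assistant turn for the model to complete
        return prompt + "\n\nAssistant:"

    prompt = build_prompt([("Human", "Why is the sky blue?")])
    # -> "\n\nHuman: Why is the sky blue?\n\nAssistant:"
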
max_tokens_to_sample
integer
required

The maximum number of tokens to generate before stopping.

Note that our models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.

Required range: x >= 1
stop_sequences
string[]

Sequences that will cause the model to stop generating.

Our models stop on "\n\nHuman:", and may include additional built-in stop sequences in the future. By providing the stop_sequences parameter, you may include additional strings that will cause the model to stop generating.
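
For example, a request body that adds a custom delimiter on top of the built-in stop sequences might look like this (the model name and delimiter are illustrative values):

    body = {
        "model": "claude-2.1",  # example model name
        "prompt": "\n\nHuman: List three colors.\n\nAssistant:",
        "max_tokens_to_sample": 256,
        "stop_sequences": ["###"],  # also stop at this custom delimiter
    }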

temperature
number

Amount of randomness injected into the response.

Defaults to 1.0. Ranges from 0.0 to 1.0. Use temperature closer to 0.0 for analytical / multiple choice, and closer to 1.0 for creative and generative tasks.

Note that even with temperature of 0.0, the results will not be fully deterministic.

Required range: 0 <= x <= 1
top_p
number

Use nucleus sampling.

In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. You should either alter temperature or top_p, but not both.

Recommended for advanced use cases only. You usually only need to use temperature.

Required range: 0 < x < 1
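
Conceptually, nucleus sampling keeps the smallest set of highest-probability tokens whose cumulative probability reaches top_p, renormalizes, and samples from that set. A minimal Python sketch of the idea (illustrative only, not Anthropic's implementation):

    import numpy as np

    def nucleus_sample(probs, top_p, rng=np.random.default_rng()):
        order = np.argsort(probs)[::-1]                   # decreasing probability
        cumulative = np.cumsum(probs[order])
        cutoff = np.searchsorted(cumulative, top_p) + 1   # smallest prefix reaching top_p
        kept = order[:cutoff]
        kept_probs = probs[kept] / probs[kept].sum()      # renormalize
        return rng.choice(kept, p=kept_probs)
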
top_k
integer

Only sample from the top K options for each subsequent token.

Used to remove "long tail" low probability responses. Learn more technical details here.

Recommended for advanced use cases only. You usually only need to use temperature.

Required range: x > 0
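
Top-K sampling is the simpler cousin of nucleus sampling: keep only the K most probable tokens, renormalize, and sample. Again, a sketch of the idea rather than Anthropic's implementation:

    import numpy as np

    def top_k_sample(probs, k, rng=np.random.default_rng()):
        kept = np.argsort(probs)[::-1][:k]            # K most probable tokens
        kept_probs = probs[kept] / probs[kept].sum()  # renormalize, dropping the long tail
        return rng.choice(kept, p=kept_probs)
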
metadata
object

An object describing metadata about the request.

stream
boolean

Whether to incrementally stream the response using server-sent events.

See streaming for details.
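
Putting the headers and body together, a non-streaming request might look like the following Python sketch (the API key, version date, and model name are example values; the endpoint URL matches the path above):

    import requests

    resp = requests.post(
        "https://api.anthropic.com/v1/complete",
        headers={
            "x-api-key": "YOUR_API_KEY",        # placeholder
            "anthropic-version": "2023-06-01",  # example version date
            "content-type": "application/json",
        },
        json={
            "model": "claude-2.1",              # example model name
            "prompt": "\n\nHuman: Hello, Claude.\n\nAssistant:",
            "max_tokens_to_sample": 256,
        },
    )
    resp.raise_for_status()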

Response

200 - application/json
type
enum<string>
default: completion
required

Object type.

For Text Completions, this is always "completion".

Available options:
completion
id
string
required

Unique object identifier.

The format and length of IDs may change over time.

completion
string
required

The resulting completion up to and excluding the stop sequences.

stop_reason
string | null
required

The reason that we stopped.

This may be one of the following values:

  • "stop_sequence": we reached a stop sequence — either provided by you via the stop_sequences parameter, or a stop sequence built into the model
  • "max_tokens": we exceeded max_tokens_to_sample or the model's maximum
model
string
required

The model that handled the request.
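
Continuing the request sketch above, the response fields might be consumed like this (illustrative):

    data = resp.json()
    assert data["type"] == "completion"
    print(data["completion"])
    if data["stop_reason"] == "max_tokens":
        # Output was truncated: consider raising max_tokens_to_sample
        print("completion was truncated")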