Web fetch tool
The web fetch tool allows Claude to retrieve full content from specified web pages and PDF documents.
The web fetch tool is currently in beta. To enable it, use the beta header web-fetch-2025-09-10 in your API requests.
Enabling the web fetch tool in environments where Claude processes untrusted input alongside sensitive data poses data exfiltration risks. We recommend only using this tool in trusted environments or when handling non-sensitive data.
To minimize exfiltration risks, Claude is not allowed to dynamically construct URLs. Claude can only fetch URLs that have been explicitly provided by the user or that come from previous web search or web fetch results. However, there is still residual risk that should be carefully considered when using this tool.
If data exfiltration is a concern, consider:
- Disabling the web fetch tool entirely
- Using the max_uses parameter to limit the number of requests
- Using the allowed_domains parameter to restrict to known safe domains
Supported models
Web fetch is available on:
- Claude Opus 4.1 (claude-opus-4-1-20250805)
- Claude Opus 4 (claude-opus-4-20250514)
- Claude Sonnet 4 (claude-sonnet-4-20250514)
- Claude Sonnet 3.7 (claude-3-7-sonnet-20250219)
- Claude Sonnet 3.5 v2 (deprecated) (claude-3-5-sonnet-latest)
- Claude Haiku 3.5 (claude-3-5-haiku-latest)
How web fetch works
When you add the web fetch tool to your API request:
- Claude decides when to fetch content based on the prompt and available URLs.
- The API retrieves the full text content from the specified URL.
- For PDFs, automatic text extraction is performed.
- Claude analyzes the fetched content and provides a response with optional citations.
How to use web fetch
Provide the web fetch tool in your API request:
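As a sketch of what such a request might look like, the payload below enables the tool on a Messages API call. The versioned type string web_fetch_20250910 is assumed from the beta header naming pattern; verify it against the current API reference.

```python
# Illustrative Messages API request body that enables web fetch.
# Send it with the beta header: anthropic-beta: web-fetch-2025-09-10
request = {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [
        {
            "role": "user",
            "content": "Summarize this article: https://example.com/article",
        }
    ],
    "tools": [
        {
            "type": "web_fetch_20250910",  # assumed versioned type string
            "name": "web_fetch",
            "max_uses": 5,  # cap the number of fetches in this request
        }
    ],
}
```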
Tool definition
The web fetch tool supports the following parameters:
Max uses
The max_uses parameter limits the number of web fetches performed. If Claude attempts more fetches than allowed, the web_fetch_tool_result will be an error with the max_uses_exceeded error code. There is currently no default limit.
Domain filtering
When using domain filters:
- Domains should not include the HTTP/HTTPS scheme (use example.com instead of https://example.com)
- Subdomains are automatically included (example.com covers docs.example.com)
- Subpaths are supported (example.com/blog)
- You can use either allowed_domains or blocked_domains, but not both in the same request
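These rules can be illustrated with an allow-list-only tool configuration (the versioned type string is assumed, as above):

```python
# Domain filtering sketch: allow-list only.
# allowed_domains and blocked_domains are mutually exclusive in one request.
fetch_tool = {
    "type": "web_fetch_20250910",  # assumed versioned type string
    "name": "web_fetch",
    # Scheme-less entries; subdomains and subpaths are covered automatically.
    "allowed_domains": ["example.com", "docs.example.com/api"],
}

# Guard against accidentally combining both filter types.
assert "blocked_domains" not in fetch_tool
```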
Be aware that Unicode characters in domain names can create security vulnerabilities through homograph attacks, where visually similar characters from different scripts can bypass domain filters. For example, аmazon.com (using a Cyrillic 'а') may appear identical to amazon.com but represents a different domain.
When configuring domain allow/block lists:
- Use ASCII-only domain names when possible
- Consider that URL parsers may handle Unicode normalization differently
- Test your domain filters with potential homograph variations
- Regularly audit your domain configurations for suspicious Unicode characters
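A simple audit along these lines is to flag any configured domain containing non-ASCII characters; this is a minimal sketch, not a complete homograph defense (mixed-script detection and IDNA normalization go further):

```python
def suspicious_domains(domains):
    """Return entries containing non-ASCII characters (possible homographs)."""
    return [d for d in domains if not d.isascii()]

# The second entry uses a Cyrillic 'а', visually identical to Latin 'a'.
domains = ["amazon.com", "аmazon.com"]
flagged = suspicious_domains(domains)
```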
Content limits
The max_content_tokens parameter limits the amount of content that will be included in the context. If the fetched content exceeds this limit, it will be truncated. This helps control token usage when fetching large documents.
The max_content_tokens parameter limit is approximate. The actual number of input tokens used can vary by a small amount.
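For example, a tool configuration capping fetched content at roughly 25,000 tokens (again assuming the versioned type string used earlier):

```python
fetch_tool = {
    "type": "web_fetch_20250910",  # assumed versioned type string
    "name": "web_fetch",
    # Fetched content beyond roughly this many tokens is truncated.
    "max_content_tokens": 25_000,
}
```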
Citations
Unlike web search, where citations are always enabled, citations are optional for web fetch. Set "citations": {"enabled": true} to enable Claude to cite specific passages from fetched documents.
When displaying web results or information contained in web results to end users, inline citations must be made clearly visible and clickable in your user interface.
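A tool configuration with citations turned on might look like this (type string assumed as before):

```python
fetch_tool = {
    "type": "web_fetch_20250910",  # assumed versioned type string
    "name": "web_fetch",
    # Citations are off by default for web fetch; enable them explicitly.
    "citations": {"enabled": True},
}
```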
Response
Here’s an example response structure:
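The sketch below shows one plausible shape for the assistant content blocks, with field names taken from the fetch-results description in this section; IDs and the exact schema are placeholders to check against the API reference.

```python
# Hypothetical assistant response content containing a fetch result.
response_content = [
    {
        "type": "server_tool_use",
        "id": "srvtoolu_01ABC",  # placeholder ID
        "name": "web_fetch",
        "input": {"url": "https://example.com/article"},
    },
    {
        "type": "web_fetch_tool_result",
        "tool_use_id": "srvtoolu_01ABC",
        "content": {
            "type": "web_fetch_result",
            "url": "https://example.com/article",
            "content": {
                "type": "document",
                "source": {
                    "type": "text",
                    "media_type": "text/plain",
                    "data": "Full text of the fetched page...",
                },
            },
            "retrieved_at": "2025-08-25T10:30:00Z",
        },
    },
    {"type": "text", "text": "Based on the article, ..."},
]
```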
Fetch results
Fetch results include:
- url: The URL that was fetched
- content: A document block containing the fetched content
- retrieved_at: Timestamp when the content was retrieved
The web fetch tool caches results to improve performance and reduce redundant requests. This means the content returned may not always be the latest version available at the URL. The cache behavior is managed automatically and may change over time to optimize for different content types and usage patterns.
For PDF documents, the content will be returned as base64-encoded data:
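A sketch of what a PDF fetch result might look like, following the same assumed result shape (the base64 data is a truncated placeholder):

```python
pdf_result = {
    "type": "web_fetch_result",
    "url": "https://example.com/paper.pdf",
    "content": {
        "type": "document",
        "source": {
            "type": "base64",              # PDF bytes are base64-encoded
            "media_type": "application/pdf",
            "data": "JVBERi0xLjQK...",     # truncated placeholder
        },
    },
    "retrieved_at": "2025-08-25T10:30:00Z",
}
```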
Errors
When the web fetch tool encounters an error, the Anthropic API returns a 200 (success) response with the error represented in the response body:
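One plausible shape for such an in-body error, using an assumed error block type and a placeholder ID:

```python
# Hypothetical error result inside a 200 response body.
error_result = {
    "type": "web_fetch_tool_result",
    "tool_use_id": "srvtoolu_01ABC",     # placeholder ID
    "content": {
        "type": "web_fetch_tool_error",  # assumed error block type
        "error_code": "url_not_accessible",
    },
}
```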
These are the possible error codes:
- invalid_input: Invalid URL format
- url_too_long: URL exceeds maximum length (250 characters)
- url_not_allowed: URL blocked by domain filtering rules and model restrictions
- url_not_accessible: Failed to fetch content (HTTP error)
- too_many_requests: Rate limit exceeded
- unsupported_content_type: Content type not supported (only text and PDF)
- max_uses_exceeded: Maximum web fetch tool uses exceeded
- unavailable: An internal error occurred
URL validation
For security reasons, the web fetch tool can only fetch URLs that have previously appeared in the conversation context. This includes:
- URLs in user messages
- URLs in client-side tool results
- URLs from previous web search or web fetch results
The tool cannot fetch arbitrary URLs that Claude generates or URLs from container-based server tools (Code Execution, Bash, etc.).
Combined search and fetch
Web fetch works seamlessly with web search for comprehensive information gathering:
In this workflow, Claude will:
- Use web search to find relevant articles
- Select the most promising results
- Use web fetch to retrieve full content
- Provide detailed analysis with citations
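A request enabling this workflow provides both tools side by side. The web search type string web_search_20250305 and the web fetch type string are assumptions to confirm against the API reference:

```python
# Combined search + fetch request sketch.
request = {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 4096,
    "messages": [
        {
            "role": "user",
            "content": "Find recent articles about quantum computing and summarize the best one",
        }
    ],
    "tools": [
        {"type": "web_search_20250305", "name": "web_search", "max_uses": 3},
        {
            "type": "web_fetch_20250910",  # assumed versioned type string
            "name": "web_fetch",
            "max_uses": 5,
            "citations": {"enabled": True},
        },
    ],
}
```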
Prompt caching
Web fetch works with prompt caching. To enable prompt caching, add cache_control breakpoints in your request. Cached fetch results can be reused across conversation turns.
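For instance, a cache_control breakpoint of type "ephemeral" on the latest user turn caches everything before it, including fetch results from earlier turns:

```python
# Multi-turn conversation with a cache breakpoint on the latest user message.
messages = [
    {"role": "user", "content": "Fetch https://example.com/article and summarize it"},
    # Assistant turn containing the fetch result (content elided here).
    {"role": "assistant", "content": "The article says ..."},
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What methodology did the article describe?",
                # Caches the conversation prefix, fetch results included.
                "cache_control": {"type": "ephemeral"},
            }
        ],
    },
]
```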
Streaming
With streaming enabled, fetch events are part of the stream with a pause during content retrieval:
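The rough event order below is inferred from the standard Messages streaming event names and the non-streaming block types above; treat it as an orientation aid, not an exact trace.

```python
# Approximate stream event sequence around a single fetch.
events = [
    "message_start",
    "content_block_start",   # server_tool_use block (web_fetch)
    "content_block_delta",   # input_json_delta carrying the URL
    "content_block_stop",
    # ... pause here while the page content is retrieved ...
    "content_block_start",   # web_fetch_tool_result block
    "content_block_stop",
    "content_block_start",   # text block: Claude's answer
    "content_block_delta",
    "content_block_stop",
    "message_stop",
]
```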
Batch requests
You can include the web fetch tool in the Messages Batches API. Web fetch tool calls through the Messages Batches API are priced the same as those in regular Messages API requests.
Usage and pricing
Web fetch usage has no additional charges beyond standard token costs: the tool is available on the Anthropic API at no additional cost, and you pay only standard token costs for the fetched content that becomes part of your conversation context.
To protect against inadvertently fetching large content that would consume excessive tokens, use the max_content_tokens parameter to set appropriate limits based on your use case and budget considerations.
Example token usage for typical content:
- Average web page (10KB): ~2,500 tokens
- Large documentation page (100KB): ~25,000 tokens
- Research paper PDF (500KB): ~125,000 tokens
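The figures above work out to roughly 4 bytes of raw content per token, which suggests a quick back-of-the-envelope estimator (illustrative only; actual tokenization varies with the content):

```python
def estimate_tokens(size_bytes: int) -> int:
    """Rough token estimate at ~4 bytes per token, matching the figures above."""
    return size_bytes // 4

# 10 KB page  -> ~2,500 tokens
# 500 KB PDF  -> ~125,000 tokens
```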