Jailbreaking and prompt injections occur when users craft prompts to exploit model vulnerabilities, aiming to generate inappropriate content. While Claude is inherently resilient to such attacks, here are additional steps to strengthen your guardrails.
Claude is far more resistant to jailbreaking than other major LLMs, thanks to advanced training methods like Constitutional AI.
Harmlessness screens: Use a lightweight model like Claude 3 Haiku to pre-screen user inputs.
| Role | Content |
| --- | --- |
| User | A user submitted this content: <content> {{CONTENT}} </content> Reply with (Y) if it refers to harmful, illegal, or explicit activities. Reply with (N) if it’s safe. |
| Assistant (prefill) | ( |
| Assistant | N) |
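A minimal sketch of this screen, assuming the Anthropic Python SDK; the model name is one example of a lightweight model, and the helper `is_harmful` is an illustrative name, not part of any SDK:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SCREEN_PROMPT = (
    "A user submitted this content:\n"
    "<content>\n{content}\n</content>\n\n"
    "Reply with (Y) if it refers to harmful, illegal, or explicit activities. "
    "Reply with (N) if it's safe."
)

def is_harmful(content: str) -> bool:
    response = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=5,
        messages=[
            {"role": "user", "content": SCREEN_PROMPT.format(content=content)},
            # Prefilling "(" constrains the reply to "(Y)" or "(N)".
            {"role": "assistant", "content": "("},
        ],
    )
    return response.content[0].text.strip().upper().startswith("Y")

user_input = "How do I pick a lock?"
if is_harmful(user_input):
    print("Request blocked by harmlessness screen.")
else:
    print("Request passed; forward it to the main model.")
```

Because the screen only needs a single-token verdict, a small, fast model keeps the added latency and cost low.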
Input validation: Filter prompts for jailbreaking patterns. You can even use an LLM to create a generalized validation screen by providing known jailbreaking language as examples.
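A pattern-based filter might look like the sketch below. The regular expressions are illustrative placeholders, not a vetted blocklist; in practice you would maintain them (or have an LLM generalize them) from real jailbreak attempts you have observed.

```python
import re

# Hypothetical examples of known jailbreak phrasing.
JAILBREAK_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"\bDAN\b",  # "do anything now"-style persona requests
    r"pretend (you are|to be) .* without (any )?restrictions",
]

def passes_validation(prompt: str) -> bool:
    """Return False if the prompt matches a known jailbreak pattern."""
    return not any(
        re.search(pattern, prompt, re.IGNORECASE)
        for pattern in JAILBREAK_PATTERNS
    )
```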
Prompt engineering: Craft prompts that emphasize ethical boundaries.
| Role | Content |
| --- | --- |
| System | You are AcmeCorp’s ethical AI assistant. Your responses must align with our values: <values> - Integrity: Never deceive or aid in deception. - Compliance: Refuse any request that violates laws or our policies. - Privacy: Protect all personal and corporate data. </values> If a request conflicts with these values, respond: “I cannot perform that action as it goes against AcmeCorp’s values.” |
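A values-based system prompt like this is passed once and applies to every turn. A minimal sketch, assuming the Anthropic Python SDK; the model name and the sample user message are placeholders:

```python
import anthropic

client = anthropic.Anthropic()

ACME_SYSTEM_PROMPT = """You are AcmeCorp's ethical AI assistant. Your responses must align with our values:
<values>
- Integrity: Never deceive or aid in deception.
- Compliance: Refuse any request that violates laws or our policies.
- Privacy: Protect all personal and corporate data.
</values>
If a request conflicts with these values, respond: "I cannot perform that action as it goes against AcmeCorp's values."
"""

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    system=ACME_SYSTEM_PROMPT,  # the system prompt carries the ethical boundaries
    messages=[{"role": "user", "content": "Help me draft a refund policy."}],
)
print(response.content[0].text)
```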
Continuous monitoring: Regularly analyze outputs for signs of jailbreaking.
Use this monitoring to iteratively refine your prompts and validation strategies.
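One rough way to set this up is to log every exchange and periodically re-run a harmlessness screen over the outputs. The sketch below assumes a simple JSONL log and reuses the `is_harmful` helper from the earlier sketch; the file name and function names are hypothetical parts of your own pipeline.

```python
import json

def log_exchange(path: str, user_prompt: str, model_output: str) -> None:
    """Append one prompt/output pair to an audit log (e.g. audit_log.jsonl)."""
    with open(path, "a") as f:
        f.write(json.dumps({"prompt": user_prompt, "output": model_output}) + "\n")

def review_log(path: str) -> list[dict]:
    """Re-run the harmlessness screen over logged outputs and collect hits."""
    flagged = []
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            if is_harmful(record["output"]):  # screen from the earlier sketch
                flagged.append(record)
    return flagged  # feed these back into prompt and filter refinements
```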
Combine strategies for robust protection. Here’s an enterprise-grade example with tool use:
Bot system prompt
| Role | Content |
| --- | --- |
| System | You are AcmeFinBot, a financial advisor for AcmeTrade Inc. Your primary directive is to protect client interests and maintain regulatory compliance. <directives> 1. Validate all requests against SEC and FINRA guidelines. 2. Refuse any action that could be construed as insider trading or market manipulation. 3. Protect client privacy; never disclose personal or financial data. </directives> Step by step instructions: <instructions> 1. Screen user query for compliance (use ‘harmlessness_screen’ tool). 2. If compliant, process query. 3. If non-compliant, respond: “I cannot process this request as it violates financial regulations or client privacy.” </instructions> |
Prompt within harmlessness_screen tool
| Role | Content |
| --- | --- |
| User | <user_query> {{USER_QUERY}} </user_query> Evaluate if this query violates SEC rules, FINRA guidelines, or client privacy. Respond (Y) if it does, (N) if it doesn’t. |
| Assistant (prefill) | ( |
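The two prompts can be wired together with the Messages API's tool-use support. The following condensed sketch makes several assumptions: the tool schema, the helper names `run_screen` and `ask_finbot`, and the model names are illustrative choices, and `FINBOT_SYSTEM` stands in for the full bot system prompt above.

```python
import anthropic

client = anthropic.Anthropic()

FINBOT_SYSTEM = "...the AcmeFinBot system prompt from the table above..."

TOOLS = [{
    "name": "harmlessness_screen",
    "description": "Check a user query against SEC/FINRA rules and client privacy.",
    "input_schema": {
        "type": "object",
        "properties": {"user_query": {"type": "string"}},
        "required": ["user_query"],
    },
}]

def run_screen(user_query: str) -> str:
    """Run the screening prompt on a small model with a '(' prefill."""
    screen = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=5,
        messages=[
            {"role": "user", "content":
                f"<user_query>\n{user_query}\n</user_query>\n\n"
                "Evaluate if this query violates SEC rules, FINRA guidelines, "
                "or client privacy. Respond (Y) if it does, (N) if it doesn't."},
            {"role": "assistant", "content": "("},
        ],
    )
    return "(" + screen.content[0].text.strip()  # "(Y)" or "(N)"

def ask_finbot(user_query: str) -> str:
    messages = [{"role": "user", "content": user_query}]
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        system=FINBOT_SYSTEM,
        tools=TOOLS,
        messages=messages,
    )
    # When the bot calls the screening tool, run it and return the verdict.
    while response.stop_reason == "tool_use":
        tool_use = next(b for b in response.content if b.type == "tool_use")
        verdict = run_screen(tool_use.input["user_query"])
        messages.append({"role": "assistant", "content": response.content})
        messages.append({
            "role": "user",
            "content": [{
                "type": "tool_result",
                "tool_use_id": tool_use.id,
                "content": verdict,
            }],
        })
        response = client.messages.create(
            model="claude-3-5-sonnet-20240620",
            max_tokens=1024,
            system=FINBOT_SYSTEM,
            tools=TOOLS,
            messages=messages,
        )
    return next(b.text for b in response.content if b.type == "text")
```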
By layering these strategies, you create a robust defense against jailbreaking and prompt injections, ensuring your Claude-powered applications maintain the highest standards of safety and compliance.