Common Issues & Mitigation Strategies with LLMs

  1. Hallucinations: LLMs may generate text that is factually incorrect, inconsistent, or irrelevant to the given context. This is known as hallucination and can occur when the model tries to fill gaps in its knowledge or when the input is ambiguous.
    • Minimizing hallucinations: Learn techniques to reduce the occurrence of factually incorrect or inconsistent output from the model. This page covers strategies such as allowing Claude to say when it doesn’t know the answer, having Claude extract quotes before responding, and other prompt engineering techniques.
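As a concrete illustration of the quote-extraction technique, the hypothetical helper below assembles a prompt that asks the model to cite verbatim quotes before answering and to say "I don't know" when the document lacks the answer. This is a sketch of one way to phrase such a prompt, not the only valid wording:

```python
def build_grounded_prompt(document: str, question: str) -> str:
    """Build a grounding prompt (hypothetical helper): ask the model to
    quote its evidence first and to admit uncertainty rather than guess."""
    return (
        "Here is a document:\n"
        f"<document>\n{document}\n</document>\n\n"
        "First, extract the exact quotes from the document that are most "
        "relevant to the question, inside <quotes> tags. Then answer the "
        "question using only those quotes. If the document does not contain "
        "the answer, say \"I don't know\" instead of guessing.\n\n"
        f"Question: {question}"
    )
```

The resulting string can be sent as the `content` of a user message; wrapping the document in tags keeps the evidence clearly separated from the instructions.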
  2. Jailbreaking and prompt injections: Users may attempt to bypass the model’s safeguards and ethical guidelines by crafting specific prompts that exploit vulnerabilities in the model’s training. This can lead to the model generating inappropriate or harmful content.
    • Mitigating jailbreaks & prompt injections: Discover best practices to prevent users from exploiting the model’s vulnerabilities and generating inappropriate content. This page discusses methods like input validation and other prompting strategies.
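One input-validation approach can be sketched as follows. This is a deliberately naive illustrative filter (the pattern list and helper name are assumptions, and pattern matching alone is not a complete defense): it rejects inputs that match known injection phrasings and escapes angle brackets so user text cannot close the tags that wrap it in the prompt.

```python
import re

# Illustrative patterns only; a real deployment would need a broader,
# regularly updated list plus model-side safeguards.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"</?(system|instructions)>",
]

def screen_user_input(text: str) -> str:
    """Reject inputs matching known injection patterns, then escape tag
    delimiters before the text is interpolated into a prompt."""
    lowered = text.lower()
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("input rejected by injection screen")
    # Escape angle brackets so user text cannot break out of wrapping tags.
    return text.replace("<", "&lt;").replace(">", "&gt;")
```

Validation like this belongs in front of the API call, alongside (not instead of) the prompting strategies the linked page describes.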
  3. Prompt leaks: Users may attempt to get the model to reveal parts of the input prompt in its generated output. This can be a concern when dealing with sensitive information or when the prompt contains details that should not be disclosed.
    • Reducing prompt leaks: Find out how to minimize the risk of the model revealing sensitive information from the input prompt in its generated output. This page explores techniques such as separating context from queries, prompting strategies, and applying post-processing to the output.
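The post-processing idea can be sketched as a simple overlap check (an illustrative helper, not a production detector): scan the model's output for verbatim substrings of the confidential prompt above a minimum length and flag any matches for blocking or review.

```python
def leaked_fragments(system_prompt: str, output: str, min_len: int = 12) -> list[str]:
    """Flag substrings of the confidential prompt, at least `min_len`
    characters long, that appear verbatim in the model's output
    (illustrative sliding-window check)."""
    leaks = []
    i = 0
    while i + min_len <= len(system_prompt):
        chunk = system_prompt[i:i + min_len]
        if chunk in output:
            leaks.append(chunk)
            i += min_len  # skip past the flagged region
        else:
            i += 1
    return leaks
```

An exact-match check will miss paraphrased leaks, so it is best used as one layer alongside the context-separation and prompting strategies the page describes.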
  4. Out-of-character responses: When using LLMs for character role-play scenarios or to emulate a specific personality, the model may sometimes deviate from the intended character traits, leading to inconsistent or unrealistic responses, particularly over long conversations.
    • Keep Claude in character: Get tips on maintaining consistent and in-character responses when using Claude for character role-play scenarios. This page covers strategies like providing clear character descriptions and using context-setting prompts.
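One way to reinforce a character description over a long conversation can be sketched as below (the helper name and reminder phrasing are assumptions): once the message history grows past a threshold, a short reminder of the character card is appended to the latest user turn before the request is sent.

```python
def with_character_reminder(character_card: str, messages: list[dict],
                            every: int = 6) -> list[dict]:
    """Return a copy of `messages` with a brief in-character reminder
    appended to the final user turn once the chat exceeds `every` messages
    (sketch of one context-reinforcement strategy)."""
    if len(messages) < every or messages[-1]["role"] != "user":
        return messages
    reminded = list(messages)  # shallow copy; original history is untouched
    reminded[-1] = {
        "role": "user",
        "content": messages[-1]["content"]
        + f"\n\n(Stay in character: {character_card})",
    }
    return reminded
```

The character card itself is usually best placed in the system prompt; periodic reminders like this mainly help when conversations run long enough for early context to lose influence.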
  5. Non-deterministic outputs: Due to the probabilistic nature of LLMs, the generated output may vary even when given the same input. This can be problematic in scenarios where consistent and reproducible results are desired.
    • While LLMs cannot be fully deterministic, you can set temperature to 0.0 to reduce randomness as much as possible. For more information about API parameters, see our Messages API documentation.
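Setting temperature to 0.0 looks like the following sketch, which builds the request parameters for a Messages API call (the model name is a placeholder; substitute a current model ID from the documentation):

```python
def deterministic_params(prompt: str, model: str = "claude-sonnet-4-20250514") -> dict:
    """Build Messages API parameters that minimize sampling randomness.
    Note: temperature 0.0 reduces variation but does not guarantee
    byte-identical outputs across calls."""
    return {
        "model": model,
        "max_tokens": 1024,
        "temperature": 0.0,  # greedy-leaning sampling for reproducibility
        "messages": [{"role": "user", "content": prompt}],
    }
```

With the official Python SDK, these parameters would be passed as `client.messages.create(**deterministic_params("..."))`.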

In addition to these troubleshooting guides, we recommend reviewing our prompt engineering documentation for a comprehensive overview of how to craft highly effective prompts. This guide offers further insights on optimizing prompts, improving model steerability, and increasing Claude’s overall responsiveness.

If you continue to have trouble, please don’t hesitate to contact our customer support team. We are here to help you make the best use of Claude.