By default, we do not apply any guardrails to hosted LLMs. Our customers can implement guardrails in several ways:

  1. Built-in options:

    • Safety-classifier models such as Llama Guard can screen prompts and responses before they reach or leave your main model (a minimal screening sketch follows this list).
    • Integrate with the security and compliance frameworks you already operate.
  2. Third-party solutions:

    • AI gateways such as Portkey offer guardrails as a built-in feature, applying checks to every request routed through the gateway (a gateway sketch also follows this list).
    • Documentation is available in Portkey's Guardrails guide.
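
As a concrete example of the first option, the sketch below screens a user prompt with Llama Guard via Hugging Face transformers. It follows the pattern from Meta's Llama Guard 3 model card; the model is gated, so access approval is required, and the exact model ID and verdict format depend on the variant you deploy.

```python
# Minimal prompt-screening sketch with Llama Guard (Hugging Face transformers).
# Assumes access to the gated meta-llama/Llama-Guard-3-8B checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-Guard-3-8B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    """Return Llama Guard's verdict: "safe", or "unsafe" plus category codes."""
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
    prompt_len = input_ids.shape[-1]
    return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

verdict = moderate([{"role": "user", "content": "How do I make a fake ID?"}])
if verdict.strip().startswith("unsafe"):
    # Block or rewrite the request before it reaches the main model.
    ...
```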
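
For the gateway option, the sketch below routes a chat completion through Portkey so that any guardrails attached to the referenced config run on the request. The API key, virtual key, and config ID are placeholders; guardrails themselves are configured in the Portkey dashboard, so consult Portkey's Guardrails guide for the exact setup.

```python
# Sketch: routing requests through the Portkey gateway with a guardrails config.
# All credentials and IDs below are placeholders for your own values.
from portkey_ai import Portkey

client = Portkey(
    api_key="PORTKEY_API_KEY",         # placeholder gateway credential
    virtual_key="OPENAI_VIRTUAL_KEY",  # placeholder provider key reference
    config="pc-guardrails-example",    # placeholder config with guardrails attached
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
)
# Depending on the config, a failed guardrail check can deny or flag the
# request instead of returning a normal completion.
print(response.choices[0].message.content)
```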

Best practices:

  • Implement guardrails appropriate to your use case.
  • Conduct regular security audits.
  • Monitor model outputs consistently (a minimal monitoring sketch follows this list).
  • Keep security policies updated.
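
The sketch below illustrates one way to monitor outputs consistently: a small wrapper that logs every completion and flags ones that trip a check. All names here are hypothetical, and the keyword check is a stand-in; in production you would substitute a proper classifier such as Llama Guard or a gateway-side guardrail.

```python
# Illustrative output-monitoring wrapper (names and blocklist are hypothetical).
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("llm_output_monitor")

BLOCKLIST = {"ssn", "credit card number"}  # stand-in for a real classifier

def monitor_output(prompt: str, completion: str) -> bool:
    """Log the completion and return True if it should be flagged for review."""
    flagged = any(term in completion.lower() for term in BLOCKLIST)
    logger.info(
        "prompt_len=%d completion_len=%d flagged=%s",
        len(prompt), len(completion), flagged,
    )
    return flagged
```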