AI AWS Guardrails

AI Gateway Enterprise: This plugin is only available as part of our AI Gateway Enterprise offering.

The AI AWS Guardrails plugin enforces introspection on both inbound requests and outbound responses handled by the AI Proxy plugin. It integrates with the AWS Bedrock Guardrails service to apply compliance and safety policies at the gateway level, ensuring that all data exchanged between clients and upstream LLMs adheres to the configured security standards.
Prerequisites
Before using the AI AWS Guardrails plugin, you must define your guardrail policies in AWS. You can do this through:
- The AWS Console
- The CreateGuardrail API (see the sketch below)
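
For illustration, here is a minimal sketch of defining a guardrail with the CreateGuardrail API through boto3. The guardrail name, description, and policy values are placeholders; choose policies that match your own compliance requirements.

```python
import boto3

# Define a guardrail via the CreateGuardrail API.
# All names and policy strengths below are illustrative.
bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_guardrail(
    name="example-guardrail",
    description="Blocks hateful or violent content at the gateway",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        ]
    },
    blockedInputMessaging="This request was blocked by policy.",
    blockedOutputsMessaging="This response was blocked by policy.",
)

# The returned ID and version identify the guardrail that the
# plugin will evaluate traffic against.
print(response["guardrailId"], response["version"])
```
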
Overview
The plugin includes a configurable response_buffer_size parameter. This setting controls how many tokens from the upstream LLM response are buffered during streaming before being sent to the AWS Guardrails service for inspection. For example, setting response_buffer_size to 50 means the plugin collects 50 tokens from the upstream model before sending them to AWS Guardrails for evaluation. Guardrail evaluation runs in chunks as tokens stream in.
A smaller buffer size allows faster policy evaluation and quicker response rejection but may increase the number of guardrail calls. Larger sizes reduce API calls but may delay policy enforcement.
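
To make the trade-off concrete, the following sketch mimics the buffering behavior described above by calling Bedrock's ApplyGuardrail API directly. It is not the plugin's implementation; the function names and error handling are illustrative.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def evaluate_chunk(text, guardrail_id, guardrail_version):
    # Send one buffered chunk to AWS Guardrails for evaluation.
    resp = bedrock_runtime.apply_guardrail(
        guardrailIdentifier=guardrail_id,
        guardrailVersion=guardrail_version,
        source="OUTPUT",  # this chunk came from the upstream model
        content=[{"text": {"text": text}}],
    )
    return resp["action"]  # "NONE" or "GUARDRAIL_INTERVENED"

def stream_with_guardrails(token_stream, guardrail_id, guardrail_version,
                           response_buffer_size=50):
    buffer = []
    for token in token_stream:
        buffer.append(token)
        # Flush once the buffer reaches response_buffer_size tokens:
        # smaller buffers mean more (but earlier) guardrail calls.
        if len(buffer) >= response_buffer_size:
            if evaluate_chunk("".join(buffer), guardrail_id,
                              guardrail_version) != "NONE":
                raise RuntimeError("response blocked by guardrail policy")
            yield from buffer
            buffer.clear()
    if buffer:  # evaluate whatever remains at end of stream
        if evaluate_chunk("".join(buffer), guardrail_id,
                          guardrail_version) != "NONE":
            raise RuntimeError("response blocked by guardrail policy")
        yield from buffer
```

Under these assumptions, a buffer size of 50 means a 500-token response costs ten guardrail calls, but any violation is caught within roughly 50 tokens of being generated; a buffer size of 500 needs only one call, but the entire response streams before enforcement can intervene.
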
Although the plugin can inspect both requests and responses, it guards input only by default. You can change this behavior with the guarding_mode field, which supports INPUT, OUTPUT, or BOTH. To control which parts of the conversation are sent for content evaluation, use the text_source field. Set it to concatenate_user_content to inspect only user input, or concatenate_all_content to include the full exchange, including system and assistant messages.
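
Putting these fields together, a sketch of enabling the plugin on a route through Kong's Admin API might look like the following. The plugin name and the guardrail-related field names (guardrails_id, guardrails_version, aws_region) are assumptions for illustration; only response_buffer_size, guarding_mode, and text_source come from this section.

```python
import requests

# Assumes a local Admin API at :8001 and an existing route named "llm-route".
resp = requests.post(
    "http://localhost:8001/routes/llm-route/plugins",
    json={
        "name": "ai-aws-guardrails",  # assumed plugin name
        "config": {
            "guardrails_id": "abc123def456",  # assumed field name
            "guardrails_version": "1",        # assumed field name
            "aws_region": "us-east-1",        # assumed field name
            "response_buffer_size": 50,
            "guarding_mode": "BOTH",
            "text_source": "concatenate_all_content",
        },
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["id"])
```
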
Format
This plugin works with all of the AI Proxy plugin’s route_type settings (excluding the preserve mode).