Use the AI GCP Model Armor plugin
Configure the AI Proxy Advanced plugin to route requests to any LLM upstream, then apply the AI GCP Model Armor plugin to inspect prompts and responses for unsafe content using Google Cloud’s Model Armor service.
Prerequisites
Kong Konnect
This is a Konnect tutorial and requires a Konnect personal access token.
-
Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
-
Export your token to an environment variable:
export KONNECT_TOKEN='YOUR_KONNECT_PAT'
Copied! -
Run the quickstart script to automatically provision a Control Plane and Data Plane, and configure your environment:
curl -Ls https://get.konghq.com/quickstart | bash -s -- -k $KONNECT_TOKEN --deck-output
Copied!This sets up a Konnect Control Plane named
quickstart
, provisions a local Data Plane, and prints out the following environment variable exports:export DECK_KONNECT_TOKEN=$KONNECT_TOKEN export DECK_KONNECT_CONTROL_PLANE_NAME=quickstart export KONNECT_CONTROL_PLANE_URL=https://us.api.konghq.com export KONNECT_PROXY_URL='http://localhost:8000'
Copied!Copy and paste these into your terminal to configure your session.
Kong Gateway running
This tutorial requires Kong Gateway Enterprise. If you don’t have Kong Gateway set up yet, you can use the quickstart script with an enterprise license to get an instance of Kong Gateway running almost instantly.
-
Export your license to an environment variable:
export KONG_LICENSE_DATA='LICENSE-CONTENTS-GO-HERE'
Copied! -
Run the quickstart script:
curl -Ls https://get.konghq.com/quickstart | bash -s -- -e KONG_LICENSE_DATA
Copied!Once Kong Gateway is ready, you will see the following message:
Kong Gateway Ready
decK v1.43+
decK is a CLI tool for managing Kong Gateway declaratively with state files. To complete this tutorial, install decK version 1.43 or later.
This guide uses deck gateway apply
, which directly applies entity configuration to your Gateway instance.
We recommend upgrading your decK installation to take advantage of this tool.
You can check your current decK version with deck version
.
Required entities
For this tutorial, you’ll need Kong Gateway entities, like Gateway Services and Routes, pre-configured. These entities are essential for Kong Gateway to function but installing them isn’t the focus of this guide. Follow these steps to pre-configure them:
-
Run the following command:
echo ' _format_version: "3.0" services: - name: example-service url: http://httpbin.konghq.com/anything routes: - name: example-route paths: - "/anything" service: name: example-service ' | deck gateway apply -
Copied!
To learn more about entities, you can read our entities documentation.
OpenAI
This tutorial uses OpenAI:
- Create an OpenAI account.
- Get an API key.
- Create a decK variable with the API key:
export DECK_OPENAI_API_KEY="YOUR OPENAI API KEY"
export DECK_OPENAI_API_KEY="YOUR OPENAI API KEY"
GCP Account and gcloud CLI
To use the AI GCP Model Armor plugin, you need a service account with Model Armor Admin permissions and a configured Model Armor template:
1. **Check your IAM permissions:**
Your service account must have the [`roles/modelarmor.admin`](https://cloud.google.com/iam/docs/roles-permissions/modelarmor) IAM role.
2. Create the `modelarmor-admin` service account in your GCP by executing the following command in your terminal:
```bash
gcloud iam service-accounts create modelarmor-admin \
--description="Service account for Model Armor administration" \
--display-name="Model Armor Admin" \
--project=$DECK_GCP_PROJECT_ID
```
3. Create and activate a service account key file by executing the following commands:
```bash
gcloud iam service-accounts keys create modelarmor-admin-key.json \
--iam-account=modelarmor-admin@$DECK_GCP_PROJECT_ID.iam.gserviceaccount.com
gcloud auth activate-service-account \
--key-file=modelarmor-admin-key.json
```
After creating the key, convert the contents of `modelarmor-admin-key.json` into a **single-line JSON string**.
Escape all necessary characters — quotes (`"`) and newlines (`\n`) — so that it becomes a valid one-line JSON string.
Then export it as an environment variable:
```bash
export DECK_GCP_SERVICE_ACCOUNT_JSON="<single-line-escaped-json>"
```
4. Enable the Model Armor API:
```bash
gcloud config set api_endpoint_overrides/modelarmor "https://modelarmor.$DECK_GCP_LOCATION_ID.rep.googleapis.com/"
gcloud services enable modelarmor.googleapis.com --project=$DECK_GCP_PROJECT_ID
```
5. Create a Model Armor template with strict guardrails. This template blocks **hate speech, harassment, and sexually explicit content** at medium confidence or higher, enforces PI/jailbreak and malicious URI filters, and logs all inspection events. Execute the following command to create the template:
```bash
gcloud model-armor templates create strict-guardrails \
--project=$DECK_GCP_PROJECT_ID \
--location=$DECK_GCP_LOCATION_ID \
--rai-settings-filters='[
{ "filterType": "HATE_SPEECH", "confidenceLevel": "MEDIUM_AND_ABOVE" },
{ "filterType": "HARASSMENT", "confidenceLevel": "MEDIUM_AND_ABOVE" },
{ "filterType": "SEXUALLY_EXPLICIT", "confidenceLevel": "MEDIUM_AND_ABOVE" }
]' \
--basic-config-filter-enforcement=enabled \
--pi-and-jailbreak-filter-settings-enforcement=enabled \
--pi-and-jailbreak-filter-settings-confidence-level=LOW_AND_ABOVE \
--malicious-uri-filter-settings-enforcement=enabled \
--template-metadata-log-operations \
--template-metadata-log-sanitize-operations
```
6. Export the template ID:
```bash
export DECK_GCP_TEMPLATE_ID="strict-guardrails"
```
Configure the plugin
First, set up the AI Proxy plugin. This plugin will forward requests to the LLM upstream, while GCP Model Armor will enforce content safety on prompts and responses.
In this example, we’ll use the gpt-4o
model:
echo '
_format_version: "3.0"
plugins:
- name: ai-proxy
config:
route_type: llm/v1/chat
auth:
header_name: Authorization
header_value: Bearer ${{ env "DECK_OPENAI_API_KEY" }}
model:
provider: openai
name: gpt-4o
options:
max_tokens: 512
temperature: 1.0
' | deck gateway apply -
Configure the GCP Model Armor plugin
After configuring AI Proxy to route requests to your LLM, you can apply the AI GCP Model Armor plugin to enforce content safety on prompts and responses. In this example, the plugin is configured to guard input prompts only, reveal blocked categories, and return user-friendly messages when content is blocked.
echo '
_format_version: "3.0"
plugins:
- name: ai-gcp-model-armor
config:
project_id: "${{ env "DECK_GCP_PROJECT_ID" }}"
location_id: "${{ env "DECK_GCP_LOCATION_ID" }}"
template_id: "${{ env "DECK_GCP_TEMPLATE_ID" }}"
guarding_mode: INPUT
gcp_use_service_account: true
gcp_service_account_json: "${{ env "DECK_GCP_SERVICE_ACCOUNT_JSON" }}"
reveal_failure_categories: true
request_failure_message: Your request was blocked by content policies.
response_failure_message: The model response was filtered for safety.
timeout: 15000
response_buffer_size: 4096
text_source: last_message
' | deck gateway apply -
Validate configuration
Once the AI GCP Model Armor is configured, you can test different kinds of prompts to make sure the guardrails are working. Disallowed prompt categories should be blocked based on content return a 404: Your request was blocked by content policies.
error.
Cleanup
Clean up Konnect environment
If you created a new control plane and want to conserve your free trial credits or avoid unnecessary charges, delete the new control plane used in this tutorial.
Destroy the Kong Gateway container
curl -Ls https://get.konghq.com/quickstart | bash -s -- -d