Observe MCP traffic for autogenerated MCP tools
Reconfigure the AI MCP Proxy plugin to enable logging of payloads and statistics for your MCP tools, then enable the Prometheus plugin and run a Prometheus server to scrape and monitor these metrics.
Prerequisites
Series Prerequisites
This page is part of the Autogenerate and observe MCP tools from any RESTful API series.
Complete the previous page, Autogenerate MCP tools from a RESTful API, before starting this one.
decK v1.43+
decK is a CLI tool for managing Kong Gateway declaratively with state files. To complete this tutorial, install decK version 1.43 or later.
This guide uses deck gateway apply, which directly applies entity configuration to your Gateway instance. We recommend upgrading your decK installation to take advantage of this tool. You can check your current decK version with deck version.
Reconfigure the AI MCP Proxy plugin
To observe traffic for MCP tools, you must first enable logging and statistics on the AI MCP Proxy plugin. Apply the following configuration, which turns on the plugin's logging capabilities:
echo '
_format_version: "3.0"
plugins:
  - name: ai-mcp-proxy
    route: mcp-route
    config:
      logging:
        log_payloads: true
        log_statistics: true
      mode: conversion-listener
      tools:
        - description: Get users
          method: GET
          path: "/marketplace/users"
          parameters:
            - name: id
              in: query
              required: false
              schema:
                type: string
              description: Optional user ID
        - description: Get orders for a user
          method: GET
          path: "/marketplace/orders"
          parameters:
            - description: User ID to filter orders
              in: query
              name: userid
              required: true
              schema:
                type: string
      server:
        timeout: 60000
' | deck gateway apply -
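Optionally, confirm that the updated configuration was applied by listing plugins through the Kong Admin API. This quick check assumes the Admin API is published on localhost:8001, as in the quickstart environment, and that you have jq installed to filter the response:
curl -s localhost:8001/plugins | jq '.data[] | select(.name == "ai-mcp-proxy") | .config.logging'
The output should show log_payloads and log_statistics set to true.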
Enable the Prometheus plugin
Before you configure Prometheus, enable the Prometheus plugin on Kong Gateway:
echo '
_format_version: "3.0"
plugins:
  - name: prometheus
    config:
      status_code_metrics: true
      ai_metrics: true
' | deck gateway apply -
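With the Prometheus plugin enabled, Kong Gateway exposes metrics in Prometheus exposition format on the Admin API's /metrics endpoint, which is what the scrape configuration in the next step targets. As a quick sanity check (assuming the Admin API is reachable on localhost:8001), you can fetch it directly:
curl -s localhost:8001/metrics | head -n 20
You should see kong_-prefixed metrics; the MCP-specific metrics only appear once MCP traffic has been proxied.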
Configure Prometheus
Create a prometheus.yml file:
touch prometheus.yml
Now, add the following to the prometheus.yml file to configure Prometheus to scrape MCP traffic metrics:
scrape_configs:
  - job_name: 'kong'
    scrape_interval: 5s
    static_configs:
      - targets: ['kong-quickstart-gateway:8001']
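Optionally, you can lint this file before starting the server. The sketch below assumes you don't have promtool installed locally, so it reuses the prom/prometheus image, which ships with the tool:
docker run --rm \
  -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml \
  --entrypoint promtool \
  prom/prometheus:latest check config /etc/prometheus/prometheus.yml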
Run a Prometheus server, and pass it the configuration file created in the previous step:
docker run -d --name kong-quickstart-prometheus \
--network=kong-quickstart-net -p 9090:9090 \
-v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml \
prom/prometheus:latest
Prometheus will begin to scrape metrics data from Kong Gateway.
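The first scrape can take a few seconds. To confirm that Prometheus has registered the Kong target and is scraping it successfully, query the Prometheus targets API and check that the kong job reports a health of up:
curl -s localhost:9090/api/v1/targets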
Validate the configuration
You can validate that the Prometheus plugin is collecting metrics by generating MCP traffic to the mcp-service. Enter the following question in the Cursor chat:
What users do you see in the API?
Once the Cursor agent has finished reasoning, run the following command to query the collected kong_ai_mcp_latency_ms metric data:
curl -s 'localhost:9090/api/v1/query?query=kong_ai_mcp_latency_ms_bucket'
This should return something like the following:
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "kong_ai_mcp_latency_ms_bucket",
          "instance": "kong-quickstart-gateway:8001",
          "job": "kong",
          "le": "25.0",
          "method": "tools/call",
          "route": "mcp-route",
          "service": "mcp-service",
          "tool_name": "mcp-route-1",
          "workspace": "default"
        },
        "value": [1755759385.507, "3"]
      },
      {
        "metric": {
          "__name__": "kong_ai_mcp_latency_ms_bucket",
          "instance": "kong-quickstart-gateway:8001",
          "job": "kong",
          "le": "25.0",
          "method": "tools/call",
          "route": "mcp-route",
          "service": "mcp-service",
          "tool_name": "mcp-route-2",
          "workspace": "default"
        },
        "value": [1755759385.507, "2"]
      },
      ...
    ]
  }
}
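Because kong_ai_mcp_latency_ms is exposed as a Prometheus histogram, you can also derive latency percentiles from the bucket data. The following is a generic PromQL sketch, not a query specific to the Kong plugin, that estimates the 95th-percentile MCP tool latency per tool over the last five minutes:
curl -s 'localhost:9090/api/v1/query' \
  --data-urlencode 'query=histogram_quantile(0.95, sum by (le, tool_name) (rate(kong_ai_mcp_latency_ms_bucket[5m])))'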
Cleanup
Prometheus
Once you are done experimenting with Prometheus, you can use the following command to stop the Prometheus server you created in this guide:
docker stop kong-quickstart-prometheus
Destroy the Kong Gateway container
curl -Ls https://get.konghq.com/quickstart | bash -s -- -d