Dedicated Cloud Gateways reference

Uses: Kong Gateway

Dedicated Cloud Gateways domain breaking changes: Review domain breaking changes for Dedicated Cloud Gateways and migrate to the new domain before September 30, 2025.

How do Dedicated Cloud Gateways work?

When you create a Dedicated Cloud Gateway, Konnect creates a Control Plane. This Control Plane, like other Konnect Control Planes, is hosted by Konnect. You can then deploy Data Planes in different regions.

Dedicated Cloud Gateways support two different configuration modes:

  • Autopilot Mode: Configure expected requests per second, and Konnect pre-warms and autoscales the Data Plane nodes automatically.
  • Custom Mode: Manually specify the instance size, type, and number of nodes per cluster.
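
For example, these two modes correspond to different autoscale objects in the Cloud Gateways configurations API (shown in full in the provisioning steps below). The Custom Mode field names here ("static" kind, instance_type, requested_instances) are assumptions to verify against the Cloud Gateways API reference:

     Autopilot Mode:
       "autoscale": { "kind": "autopilot", "base_rps": 100 }

     Custom Mode:
       "autoscale": { "kind": "static", "instance_type": "small", "requested_instances": 2 }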
 
flowchart TD
  A(Dedicated Cloud Gateway Control Plane)
  B(Managed Data Plane Node Region 1)
  C(Managed Data Plane Node Region 2)
  subgraph id1 [Konnect]
    A
  end
  A --auto-scale configuration---> B
  A --auto-scale configuration---> C

How do I provision a Control Plane?

  1. Create a Dedicated Cloud Gateway Control Plane by issuing a POST request to the Control Plane API:

     curl -X POST "$KONNECT_CONTROL_PLANE_URL/v2/control-planes/" \
         -H "Accept: application/json" \
         -H "Content-Type: application/json" \
         -H "Authorization: Bearer $KONNECT_TOKEN" \
         --json '{
           "name": "cloud-gateway-control-plane",
           "description": "A test Control Plane for Dedicated Cloud Gateways.",
           "cluster_type": "CLUSTER_TYPE_CONTROL_PLANE",
           "cloud_gateway": true,
           "proxy_urls": [
             {
               "host": "example.com",
               "port": 443,
               "protocol": "https"
             }
           ]
         }'
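
     The response body includes an id for the new Control Plane. The next step assumes you export it, along with the ID of an existing Cloud Gateway network (you can list your networks with the Cloud Gateways API), for example:

        export CONTROL_PLANE_ID=<id from the response above>
        export CLOUD_GATEWAY_NETWORK_ID=<id of your Cloud Gateway network>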
    
  2. Create a Dedicated Cloud Gateway Data Plane by issuing a PUT request to the Cloud Gateways API:

     curl -X PUT "$KONNECT_CONTROL_PLANE_URL/v2/cloud-gateways/configurations" \
         -H "Accept: application/json" \
         -H "Content-Type: application/json" \
         -H "Authorization: Bearer $KONNECT_TOKEN" \
         --json '{
           "control_plane_id": "'$CONTROL_PLANE_ID'",
           "version": "3.9",
           "control_plane_geo": "ap-northeast-1",
           "dataplane_groups": [
             {
               "provider": "aws",
               "region": "na",
               "cloud_gateway_network_id": "'$CLOUD_GATEWAY_NETWORK_ID'",
               "autoscale": {
                 "kind": "autopilot",
                 "base_rps": 100
               }
             }
           ]
         }'
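
     To confirm the Data Plane group is being provisioned, you can read the configuration back. This is a sketch that assumes the list endpoint of the Cloud Gateways configurations API:

        curl -X GET "$KONNECT_CONTROL_PLANE_URL/v2/cloud-gateways/configurations" \
            -H "Accept: application/json" \
            -H "Authorization: Bearer $KONNECT_TOKEN"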
    

AWS workload identities

Dedicated Cloud Gateways support AWS workload identities for data plane instances, enabling secure integration with your own AWS-managed services using IAM AssumeRole. This allows native and custom Kong plugins running in the data plane to access AWS services (like S3, Secrets Manager, Lambda, and DynamoDB) without static credentials, improving both security and operational simplicity.

Using AWS workload identities with Dedicated Cloud Gateways provides the following benefits:

  • Credential-less integration: No need to manage or rotate static AWS credentials.
  • Security-first: Workload identity is scoped to assume specific roles defined by you.
  • Compatibility: Native and custom Kong plugins can seamlessly use AssumeRole credentials.

This is currently only available for AWS.

How AWS workload identities work

  1. When an AWS Dedicated Cloud Gateway is provisioned, Konnect automatically creates the following:
    • An IAM Role in your dedicated tenant AWS account named after the network UUID. You can derive this IAM Role ARN.
    • A trust policy that enables AssumeRoleWithWebIdentity for the EKS service account used by the Kong Gateway data planes. For example:
      {
       "Version": "2012-10-17",
       "Statement": [{
         "Effect": "Allow",
         "Principal": {
           "AWS": "arn:aws:iam::*:root"
         },
         "Action": "sts:AssumeRole",
         "Condition": {
           "StringLike": {
             "aws:PrincipalArn": "arn:aws:iam::*:role/*"
           }
         }
       }]
       }
      
  2. You define a trust relationship in your AWS account, allowing the Dedicated Cloud Gateway IAM role to assume a target role in your account (a sample trust policy is shown after this list).
  3. The workload identity annotation on Konnect’s service account is used to connect to this IAM role.
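
For step 2, the target role in your AWS account needs a trust policy that lets the Konnect-created role assume it. This is a minimal sketch; replace the placeholders with the account ID and network UUID from the ARN you derive below:

   {
     "Version": "2012-10-17",
     "Statement": [{
       "Effect": "Allow",
       "Principal": {
         "AWS": "arn:aws:iam::KONNECT_AWS_ACCOUNT_ID:role/NETWORK_ID"
       },
       "Action": "sts:AssumeRole"
     }]
   }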

Keep the following security considerations in mind:

  • The IAM role created by Konnect is assume-only and has no permissions to manage infrastructure or cloud resources.
  • You control which of your IAM roles Konnect is allowed to assume by configuring trust relationships.

Derive the Konnect IAM Role ARN

You can compute the ARN for Konnect’s IAM role using this pattern:

arn:aws:iam::$KONNECT_AWS_ACCOUNT_ID:role/$NETWORK_ID
  1. Get the AWS account ID of your dedicated tenant AWS account ($KONNECT_AWS_ACCOUNT_ID).

  2. Get the UUID of the network ($NETWORK_ID).
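
For example, you can list your Cloud Gateway networks through the Cloud Gateways API; the id field of the relevant network is the UUID used in the role name. Whether the same response also exposes the AWS account ID of your dedicated tenant is an assumption to confirm against the API reference:

   curl -X GET "https://global.api.konghq.com/v2/cloud-gateways/networks" \
       -H "Accept: application/json" \
       -H "Authorization: Bearer $KONNECT_TOKEN"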

Custom DNS

Konnect integrates domain name management and configuration with Dedicated Cloud Gateways.

Konnect configuration

  1. In Konnect, navigate to API Gateway in the sidebar.
  2. Click your control plane.
  3. Click Connect.
  4. From the Connect menu, save the Public Edge DNS URL.
  5. Navigate to Custom Domains in the sidebar.
  6. Click New Custom Domain.
  7. Enter your domain name.

    Save the value that appears under CNAME.

Dedicated Cloud Gateways domain registrar configuration

The following settings must be configured in your domain registrar using the values in Konnect. For example, in AWS Route 53, it would look like this:

| Host Name | Record Type | Routing Policy | Alias | Evaluate Target Health | Value | TTL |
|---|---|---|---|---|---|---|
| _acme-challenge.example.com | CNAME | Simple | | | _acme-challenge.9e454bcfec.acme.gateways.konggateway.com | 300 |
| example.com | CNAME | Simple | | | 9e454bcfec.gateways.konggateway.com | 300 |
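
Once the records have propagated, you can verify them from any machine with dig (using the example values above):

   dig CNAME _acme-challenge.example.com +short
   dig CNAME example.com +short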

Kong Gateway configuration

The Kong Gateway configuration for your data plane nodes can be customized using environment variables.

The following environment variables can be set while creating a Dedicated Cloud Gateway.

KONG_ALLOW_DEBUG_HEADER Default: off

Enable the Kong-Debug header function. If it is on, Kong will add Kong-Route-Id, Kong-Route-Name, Kong-Service-Id, and Kong-Service-Name debug headers to the response when the client request header Kong-Debug: 1 is present.
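
For example, with this variable set to on, a client can request the debug headers like this (example.com stands in for your proxy domain):

   curl -i https://example.com/anything -H "Kong-Debug: 1"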

KONG_HEADERS Default: server_tokens, latency_tokens, X-Kong-Request-Id

Comma-separated list of headers Kong should inject in client responses.

Accepted values are:

  • Server: Injects Server: kong/x.y.z on Kong-produced responses (e.g., Admin API, rejected requests from auth plugin).
  • Via: Injects Via: kong/x.y.z for successfully proxied requests.
  • X-Kong-Proxy-Latency: Time taken (in milliseconds) by Kong to process a request and run all plugins before proxying the request upstream.
  • X-Kong-Response-Latency: Time taken (in milliseconds) by Kong to produce a response in case of, e.g., a plugin short-circuiting the request, or in case of an error.
  • X-Kong-Upstream-Latency: Time taken (in milliseconds) by the upstream service to send response headers.
  • X-Kong-Admin-Latency: Time taken (in milliseconds) by Kong to process an Admin API request.
  • X-Kong-Upstream-Status: The HTTP status code returned by the upstream service. This is particularly useful for clients to distinguish upstream statuses if the response is rewritten by a plugin.
  • X-Kong-Request-Id: Unique identifier of the request.
  • X-Kong-Total-Latency: Time elapsed (in milliseconds) between the first bytes being read from the client and the log write after the last bytes were sent to the client. Calculated as the difference between the current timestamp and the timestamp when the request was created.
  • X-Kong-Third-Party-Latency: Cumulative sum of all third-party latencies, including DNS resolution, HTTP client calls, Socket operations, and Redis operations.
  • X-Kong-Client-Latency: Time that Kong waits to receive headers and body from the client, and also how long Kong waits for the client to read/receive the response from Kong.
  • server_tokens: Same as specifying both Server and Via.
  • latency_tokens: Same as specifying X-Kong-Proxy-Latency, X-Kong-Response-Latency, X-Kong-Admin-Latency, and X-Kong-Upstream-Latency.
  • advanced_latency_tokens: Same as specifying X-Kong-Proxy-Latency, X-Kong-Response-Latency, X-Kong-Admin-Latency, X-Kong-Upstream-Latency, X-Kong-Total-Latency, X-Kong-Third-Party-Latency, and X-Kong-Client-Latency.

In addition to these, this value can be set to off, which prevents Kong from injecting any of the above headers. Note that this does not prevent plugins from injecting headers of their own.

Example: headers = via, latency_tokens

KONG_HEADER_UPSTREAM

Comma-separated list of headers Kong should inject in requests to upstream.

At this time, the only accepted value is:

  • X-Kong-Request-Id: Unique identifier of the request.

In addition, this value can be set to off, which prevents Kong from injecting the above header. Note that this does not prevent plugins from injecting headers of their own.

KONG_LATENCY_TOKENS

Removes the latency information from the HTTP response headers.

KONG_LOG_LEVEL Default: notice

Log level of the data plane node.

The logs are available in Konnect, in the Logs tab of the data plane node.

KONG_REAL_IP_HEADER Default: X-Real-IP

Defines the request header field whose value will be used to replace the client address. This value sets the ngx_http_realip_module directive of the same name in the Nginx configuration.

If this value receives proxy_protocol:

  • at least one of the proxy_listen entries must have the proxy_protocol flag enabled.
  • the proxy_protocol parameter will be appended to the listen directive of the Nginx template.

See http://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_header for a description of this directive.

KONG_REAL_IP_RECURSIVE Default: off

This value sets the ngx_http_realip_module directive of the same name in the Nginx configuration.

See http://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_recursive for a description of this directive.

KONG_REQUEST_DEBUG_TOKEN Default:

The Request Debug Token is used in the X-Kong-Request-Debug-Token header to prevent abuse. If this value is not set (the default), a random token will be generated when Kong starts, restarts, or reloads. If a token is specified manually, then the provided token will be used.

You can locate the generated debug token in two locations:

  • Kong error log: The debug token is logged in the error log (notice level) when Kong starts, restarts, or reloads. The log line has the [request-debug] prefix to aid searching.
  • Filesystem: The debug token is also stored in a file located at {prefix}/.request_debug_token and updated when Kong starts, restarts, or reloads.
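
As a sketch of how the token is used: Kong Gateway's request debugging is triggered per request with the X-Kong-Request-Debug header, and requests from outside loopback must also carry the token. The header names below follow Kong Gateway's request debugging feature; confirm them against your Kong Gateway version:

   curl -i https://example.com/anything \
       -H "X-Kong-Request-Debug: *" \
       -H "X-Kong-Request-Debug-Token: $KONG_REQUEST_DEBUG_TOKEN"
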
KONG_SERVER_TOKENS

Removes the Kong version information from the HTTP response headers.

KONG_TRACING_INSTRUMENTATIONS Default: off

Comma-separated list of tracing instrumentations this node should load. By default, no instrumentations are enabled.

Valid values for this setting are:

  • off: do not enable instrumentations.
  • request: only enable request-level instrumentations.
  • all: enable all the following instrumentations.
  • db_query: trace database queries.
  • dns_query: trace DNS queries.
  • router: trace router execution, including router rebuilding.
  • http_client: trace OpenResty HTTP client requests.
  • balancer: trace balancer retries.
  • plugin_rewrite: trace plugin iterator execution with rewrite phase.
  • plugin_access: trace plugin iterator execution with access phase.
  • plugin_header_filter: trace plugin iterator execution with header_filter phase.

Note: In the current implementation, tracing instrumentations are not enabled in stream mode.

KONG_TRACING_SAMPLING_RATE Default: 0.01

Tracing instrumentation sampling rate. Tracer samples a fixed percentage of all spans following the sampling rate.

Example: a rate of 0.25 samples 25% of all traces.

KONG_TRUSTED_IPS

Defines trusted IP address blocks that are known to send correct X-Forwarded-* headers. Requests from trusted IPs make Kong forward their X-Forwarded-* headers upstream. Non-trusted requests make Kong insert its own X-Forwarded-* headers.

This property also sets the set_real_ip_from directive(s) in the Nginx configuration. It accepts the same type of values (CIDR blocks) but as a comma-separated list.

To trust all IPs, set this value to 0.0.0.0/0,::/0.

If the special value unix: is specified, all UNIX-domain sockets will be trusted.

See http://nginx.org/en/docs/http/ngx_http_realip_module.html#set_real_ip_from for examples of accepted values.

KONG_UNTRUSTED_LUA_SANDBOX_REQUIRES

Comma-separated list of modules allowed to be loaded with require inside the sandboxed environment. Ignored if untrusted_lua is not sandbox.

For example, say you have configured the Serverless pre-function plugin and it contains the following requires:

local template = require "resty.template"
local split = require "kong.tools.string".split

To run the plugin, add the modules to the allowed list:

untrusted_lua_sandbox_requires = resty.template, kong.tools.string

Warning: Allowing certain modules may create opportunities to escape the sandbox. For example, allowing os or luaposix may be unsafe.

How do I set environment variables?

In the Konnect UI, you can add environment variables to a Dedicated Cloud Gateway when you create the data plane node. Navigate to your Dedicated Cloud Gateway control plane and from the Actions dropdown menu, select “Edit or Resize Cluster”. Click Advanced options and enter the environment variable key and value pairs you want to use.

You can also add environment variables using the Cloud Gateways API. When you create a Dedicated Cloud Gateway Data Plane with a PUT request to the /cloud-gateways/configurations endpoint, add the environment array containing the name and value of each variable:

 curl -X PUT "https://global.api.konghq.com/v2/cloud-gateways/configurations" \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "control_plane_id": "'$CONTROL_PLANE_ID'",
       "version": "3.11",
       "control_plane_geo": "us",
       "dataplane_groups": [
         {
           "provider": "aws",
           "region": "us-east-2",
           "cloud_gateway_network_id": "'$CLOUD_GATEWAY_NETWORK_ID'",
           "autoscale": {
             "kind": "autopilot",
             "base_rps": 100
           },
           "environment": [
             {
               "name": "KONG_TRACING_SAMPLING_RATE",
               "value": "0.01"
             }
           ]
         }
       ]
     }'

Securing backend communication

Dedicated Cloud Gateways only support public networking. If your use case requires private connectivity, consider using Dedicated Cloud Gateways with AWS Transit Gateways.

To securely connect a Dedicated Cloud Gateway to your backend, you can inject a shared secret into each request using the Request Transformer plugin.

  1. Ensure the backend accepts a known token like an Authorization header.
  2. Attach the Request Transformer plugin to the Control Plane and Gateway Service that you want to secure:

     curl -X POST "$KONNECT_CONTROL_PLANE_URL/v2/control-planes/$CONTROL_PLANE_ID/core-entities/services/$SERVICE_ID/plugins" \
         -H "accept: application/json"\
         -H "Content-Type: application/json"\
         -H "Authorization: Bearer $KONNECT_TOKEN" \
         --json '{
           "name": "request-transformer",
           "config": {
             "add": {
               "headers": [
                 "Authorization:Bearer '$SECRET_TOKEN_VALUE'"
               ]
             }
           }
         }'
    

AWS Transit Gateway

If you are using Dedicated Cloud Gateways and your upstream services are hosted in AWS, AWS Transit Gateway is the preferred method for most users. For more information and a guide on how to attach your Dedicated Cloud Gateway, see the Transit Gateways documentation.

Azure VNet Peering

If you are using Dedicated Cloud Gateways and your upstream services are hosted in Azure, VNet Peering is the preferred method for most users. For more information and a guide on how to attach your Dedicated Cloud Gateway, see the Azure Peering documentation.

GCP VPC Peering

If you are using Dedicated Cloud Gateways and your upstream services are hosted in GCP, VPC Network Peering is the preferred method for most users. For more information and a guide on how to attach your Dedicated Cloud Gateway, see the GCP VPC Peering documentation.

Custom plugins

With Dedicated Cloud Gateways, Konnect can stream custom plugins from the Control Plane to its Data Planes, so plugin code is uploaded and managed in one place.

How does custom plugin streaming work?

With Dedicated Cloud Gateways, Konnect can stream custom plugins from the Control Plane to the Data Plane. The Control Plane becomes the single source of truth for plugin versions. You only need to upload the plugin once, and Konnect handles distribution to all Data Planes in the same Control Plane.

A custom plugin must meet the following requirements:

  • Unique name per plugin
  • One handler.lua and one schema.lua file
  • Cannot run in the init_worker phase or create timers
  • Must be written in Lua
  • Uploading the plugin requires a personal or system access token for the Konnect API

Custom plugin limitations

Keep the following custom plugin limitations in mind when adding them to Dedicated Cloud Gateways:

  • Only schema.lua and handler.lua files are supported. Plugin logic must be self-contained in these two files. You can’t use DAOs, custom APIs, migrations, or multiple Lua modules.
  • Custom modules cannot be required when plugin sandboxing is enabled. External Lua files or shared libraries can't be loaded.
  • Custom validation must be implemented in handler.lua, not schema.lua. In handler.lua, it can be logged and handled as part of plugin business logic.
  • Plugin files are limited to 100 KB per upload.
  • Plugins cannot read/write to the Kong Gateway filesystem.
  • The LuaJIT version is fixed per Kong Gateway version. Any future major Lua/LuaJIT upgrade will be communicated in advance due to potential breaking changes.

How do I add a custom plugin?

Plugins can be uploaded to Konnect using the Konnect UI. You can also use jq with the following request template to add the plugin using the API:

curl -X POST $KONNECT_CONTROL_PLANE_URL/v2/control-planes/$CONTROL_PLANE_ID/core-entities/custom-plugins \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $KONNECT_TOKEN" \
  -d "$(jq -n \
      --arg handler "$(cat handler.lua)" \
      --arg schema "$(cat schema.lua)" \
      '{"handler":$handler,"name":"streaming-headers","schema":$schema}')" \
    | jq
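
For reference, the handler.lua and schema.lua files read by the command above could be as small as the following sketch. The streaming-headers name matches the upload example; the config field and response header are purely illustrative:

   -- schema.lua
   local typedefs = require "kong.db.schema.typedefs"

   return {
     name = "streaming-headers",
     fields = {
       { protocols = typedefs.protocols_http },
       { config = {
           type = "record",
           fields = {
             { header_name = { type = "string", default = "X-Streaming-Headers" } },
           },
       } },
     },
   }

   -- handler.lua
   local StreamingHeaders = {
     PRIORITY = 1000,
     VERSION = "0.1.0",
   }

   -- Set a response header so the plugin's effect is observable
   function StreamingHeaders:header_filter(conf)
     kong.response.set_header(conf.header_name, "enabled")
   end

   return StreamingHeaders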

Once uploaded, you can manage custom plugins through the Konnect UI or the custom plugins API.

FAQs

If your custom domain fails to attach, a common reason is a missing or misconfigured Certificate Authority Authorization (CAA) record. Konnect uses Google Cloud Public CA (pki.goog) to provision SSL/TLS certificates; if your domain’s CAA record doesn’t authorize pki.goog, certificate issuance and the attachment will fail.

To resolve the issue:

  1. Run dig CAA yourdomain.com +short to check for existing CAA records.
  2. If a record exists but doesn’t allow pki.goog, update it.
    yourdomain.com.    CAA    0 issue "pki.goog"
    
  3. Wait for DNS propagation and try attaching your domain again.

If no CAA record exists, no changes are needed. For more details, see the Let’s Encrypt CAA Guide.

DNS validation statuses for Dedicated Cloud Gateways are refreshed every 5 minutes.

In Konnect, go to API Gateway, choose a Control Plane, click Custom Domains, and use the action menu to delete the domain.

Each Cloud Gateway node is part of a dedicated network for its region (e.g., us-east-1). You can securely peer this network with your AWS network using AWS Transit Gateway.

If the Kong-hosted Control Plane goes down, you won’t be able to access it or update configuration. However, connected Data Plane nodes continue to route traffic normally using the last cached configuration.

AWS PrivateLink offers secure and private connectivity by routing traffic through an endpoint, but it only supports unidirectional communication. This means that Dedicated Cloud Gateway can send requests to your upstream services, but your upstream services cannot initiate communication back to the gateway. For many use cases requiring bidirectional communication—such as callbacks or dynamic interactions between the gateway and your upstream services—this limitation is a blocker. For this reason, PrivateLink is not generally recommended for secure connectivity to your upstream services.

