Configure SNI routing with Kong Event Gateway

Incompatible with: on-prem

TL;DR

To send traffic to multiple virtual clusters with a single port and certificate:

  1. Generate a certificate and use a wildcard for the virtual cluster prefix in the subject.
  2. Create an OpenSSL extension file to set the subject alternative names for the certificate.
  3. Create a listener that listens on a single port.
  4. Create a TLS server listener policy using your certificate and key.
  5. Create a Forward to virtual cluster policy with the port and SNI suffix.

Prerequisites

Install kafkactl. You’ll need it to interact with Kafka clusters.

Start a Docker Compose cluster with multiple Kafka services.

First, we need to create a docker-compose.yaml file. This file will define the services we want to run in our local environment:

cat <<EOF > docker-compose.yaml
name: kafka_cluster

networks:
  kafka:
    name: kafka_event_gateway

services:
  kafka1:
    image: apache/kafka:4.1.1
    networks:
      - kafka
    container_name: kafka1
    ports:
      - "9094:9094"
    environment:
      KAFKA_NODE_ID: 0
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENERS: INTERNAL://kafka1:9092,CONTROLLER://kafka1:9093,EXTERNAL://0.0.0.0:9094
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:9092,EXTERNAL://localhost:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CONTROLLER_QUORUM_VOTERS: 0@kafka1:9093,1@kafka2:9093,2@kafka3:9093
      KAFKA_CLUSTER_ID: 'abcdefghijklmnopqrstuv'
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs

  kafka2:
    image: apache/kafka:4.1.1
    networks:
      - kafka
    container_name: kafka2
    ports:
      - "9095:9095"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENERS: INTERNAL://kafka2:9092,CONTROLLER://kafka2:9093,EXTERNAL://0.0.0.0:9095
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka2:9092,EXTERNAL://localhost:9095
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CONTROLLER_QUORUM_VOTERS: 0@kafka1:9093,1@kafka2:9093,2@kafka3:9093
      KAFKA_CLUSTER_ID: 'abcdefghijklmnopqrstuv'
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs

  kafka3:
    image: apache/kafka:4.1.1
    networks:
      - kafka
    container_name: kafka3
    ports:
      - "9096:9096"
    environment:
      KAFKA_NODE_ID: 2
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENERS: INTERNAL://kafka3:9092,CONTROLLER://kafka3:9093,EXTERNAL://0.0.0.0:9096
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka3:9092,EXTERNAL://localhost:9096
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CONTROLLER_QUORUM_VOTERS: 0@kafka1:9093,1@kafka2:9093,2@kafka3:9093
      KAFKA_CLUSTER_ID: 'abcdefghijklmnopqrstuv'
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
EOF

Now, let’s start the local setup:

docker compose up -d

If you don’t have a Konnect account, you can get started quickly with our onboarding wizard.

  1. The following Konnect items are required to complete this tutorial:
    • Personal access token (PAT): Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
  2. Set the personal access token as an environment variable:

    export KONNECT_TOKEN='YOUR KONNECT TOKEN'
    

Run the quickstart script to automatically provision a demo Kong Gateway control plane and data plane, and configure your environment:

curl -Ls https://get.konghq.com/event-gateway | bash -s -- -k $KONNECT_TOKEN -N kafka_event_gateway

This sets up a Kong Event Gateway control plane named event-gateway-quickstart, provisions a local data plane, and prints the following environment variable export:

export EVENT_GATEWAY_ID=your-gateway-id

Copy and paste the command with your Event Gateway ID into your terminal to configure your session.

This quickstart script is meant for demo purposes only: it runs locally with default parameters and a small number of exposed ports. If you want to run Kong Event Gateway as part of a production-ready platform, set up your control plane and data planes through the Konnect UI, or using Terraform.

In this guide we’ll set up SNI routing to send traffic to two virtual clusters in the same Event Gateway without opening more ports on the data plane. For more details, see Hostname mapping.

For testing purposes, this guide generates self-signed certificates and points to hostnames that resolve to 127.0.0.1. In production, you should use real hostnames, manage the DNS entries, and sign your certificates with a real, trusted CA.
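The sslip.io names used in this guide need no DNS setup: the dashed quad embedded in the hostname is the address it resolves to. A quick sketch of that mapping, using a hostname from this guide (the string handling below is only an illustration, not anything the gateway does):

```shell
# The dashed segment before .sslip.io encodes the IP the name resolves to.
host="bootstrap.analytics.127-0-0-1.sslip.io"
quad="${host%.sslip.io}"   # drop the sslip.io zone
quad="${quad##*.}"         # keep the dashed IP segment: 127-0-0-1
echo "${quad//-/.}"        # prints 127.0.0.1
```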

Create a backend cluster

Use the following command to create a backend cluster that connects to the Kafka servers you set up:

BACKEND_CLUSTER_ID=$(curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/backend-clusters" \
     --no-progress-meter --fail-with-body  \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "name": "backend_cluster",
       "bootstrap_servers": [
         "kafka1:9092",
         "kafka2:9092",
         "kafka3:9092"
       ],
       "authentication": {
         "type": "anonymous"
       },
       "tls": {
         "enabled": false
       },
       "insecure_allow_anonymous_virtual_cluster_auth": true
     }' | jq -r ".id")

Create an analytics virtual cluster

Use the following command to create the analytics virtual cluster:

ANALYTICS_VC_ID=$(curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/virtual-clusters" \
     --no-progress-meter --fail-with-body  \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "name": "analytics_vc",
       "destination": {
         "id": "'$BACKEND_CLUSTER_ID'"
       },
       "dns_label": "analytics",
       "authentication": [
         {
           "type": "anonymous"
         }
       ],
       "acl_mode": "passthrough",
       "namespace": {
         "prefix": "analytics_",
         "mode": "hide_prefix",
         "additional": {
           "topics": [
             {
               "type": "exact_list",
               "conflict": "warn",
               "exact_list": [
                 {
                   "backend": "user_actions"
                 }
               ]
             }
           ]
         }
       }
     }' | jq -r ".id")

This virtual cluster provides access to topics with the analytics_ prefix, and the user_actions topic.
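A rough sketch of what hide_prefix means for clients of this virtual cluster, using the topic names from this guide. The loop only mimics the prefix stripping; it is not the gateway’s implementation:

```shell
# Clients of analytics_vc see backend topics with the "analytics_" prefix
# stripped; the additional exact_list entry exposes user_actions unchanged.
prefix="analytics_"
for backend in analytics_pageviews analytics_clicks user_actions; do
  echo "$backend -> ${backend#"$prefix"}"
done
# analytics_pageviews -> pageviews
# analytics_clicks -> clicks
# user_actions -> user_actions
```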

Create a payments virtual cluster

Use the following command to create the payments virtual cluster:

PAYMENTS_VC_ID=$(curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/virtual-clusters" \
     --no-progress-meter --fail-with-body  \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "name": "payments_vc",
       "destination": {
         "id": "'$BACKEND_CLUSTER_ID'"
       },
       "dns_label": "payments",
       "authentication": [
         {
           "type": "anonymous"
         }
       ],
       "acl_mode": "passthrough",
       "namespace": {
         "prefix": "payments_",
         "mode": "hide_prefix",
         "additional": {
           "topics": [
             {
               "type": "exact_list",
               "conflict": "warn",
               "exact_list": [
                 {
                   "backend": "user_actions"
                 }
               ]
             }
           ]
         }
       }
     }' | jq -r ".id")

This virtual cluster provides access to topics with the payments_ prefix, and the user_actions topic.

Define the kafkactl context

Configure kafkactl to use TLS but ignore certificate verification:

cat <<EOF > kafkactl.yaml
contexts:
  backend:
    brokers:
      - localhost:9094
  analytics:
    brokers:
      - bootstrap.analytics.127-0-0-1.sslip.io:19092
    tls:
      enabled: true
      ca: ./rootCA.crt
      insecure: true
  payments:
    brokers:
      - bootstrap.payments.127-0-0-1.sslip.io:19092
    tls:
      enabled: true
      ca: ./rootCA.crt
      insecure: true
EOF

Create Kafka topics

Create sample topics in the Kafka cluster that we created in the prerequisites:

kafkactl -C kafkactl.yaml --context backend create topic \
analytics_pageviews analytics_clicks analytics_orders \
payments_transactions payments_refunds payments_orders \
user_actions

Generate certificates

Generate the certificates we’ll need to enable TLS:

  1. Generate the root key and certificate:

    openssl genrsa -out ./rootCA.key 4096
    openssl req -x509 -new -nodes -key ./rootCA.key \
      -sha256 -days 3650 \
      -subj "/C=US/ST=Local/L=Local/O=Dev CA/CN=Dev Root CA" \
      -out ./rootCA.crt
    
  2. Generate the gateway key and certificate signing request:

    openssl genrsa -out ./tls.key 2048
    openssl req -new -key ./tls.key \
      -subj "/C=US/ST=Local/L=Local/O=Dev/CN=*.127-0-0-1.sslip.io" \
      -out ./tls.csr

We’re setting the subject in the certificate signing request to *.127-0-0-1.sslip.io:

  • * is used for the virtual cluster prefixes, which are the analytics and payments DNS labels we configured when creating the virtual clusters.
  • .127-0-0-1.sslip.io is the SNI suffix, which we’ll use in the TLS listener policy configuration. In this example, we’re using sslip.io to resolve 127-0-0-1.sslip.io to 127.0.0.1.
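Put together, the bootstrap hostname a client uses is just the dns_label joined with the SNI suffix. A sketch with the values from this guide:

```shell
# Compose the client-facing bootstrap addresses from each virtual
# cluster's dns_label and the listener's SNI suffix.
SNI_SUFFIX=".127-0-0-1.sslip.io"
for label in analytics payments; do
  echo "bootstrap.${label}${SNI_SUFFIX}:19092"
done
# bootstrap.analytics.127-0-0-1.sslip.io:19092
# bootstrap.payments.127-0-0-1.sslip.io:19092
```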
  3. To explicitly set the subject alternative names for the certificate, create an OpenSSL extension file:

    cat << EOF > ./tls.ext
    basicConstraints = CA:FALSE
    keyUsage = digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth, clientAuth
    subjectAltName = @alt_names
    authorityKeyIdentifier = keyid,issuer

    [alt_names]
    DNS.1 = *.analytics.127-0-0-1.sslip.io
    DNS.2 = *.payments.127-0-0-1.sslip.io
    EOF

  4. To generate the certificate we’ll need for the TLS listener policy, sign the gateway certificate signing request:

    openssl x509 -req -in ./tls.csr \
      -CA ./rootCA.crt -CAkey ./rootCA.key -CAcreateserial \
      -out ./tls.crt -days 825 -sha256 \
      -extfile ./tls.ext

  5. Export the key and certificate to your environment:

    export CERTIFICATE=$(awk '{printf "%s\\n", $0}' tls.crt)
    export KEY=$(cat tls.key | base64)

Create a listener

Create a listener that listens on port 19092:

LISTENER_ID=$(curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/listeners" \
     --no-progress-meter --fail-with-body  \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "name": "gateway_listener",
       "addresses": [
         "0.0.0.0"
       ],
       "ports": [
         19092
       ]
     }' | jq -r ".id")
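At this point four IDs should be set. A quick sanity check over the variable names used in this guide (an empty value or jq’s literal null usually means the corresponding curl call failed):

```shell
# Print each captured ID, flagging any that is empty or the jq null string.
for v in BACKEND_CLUSTER_ID ANALYTICS_VC_ID PAYMENTS_VC_ID LISTENER_ID; do
  eval "val=\$$v"
  if [ -n "$val" ] && [ "$val" != "null" ]; then
    echo "$v=$val"
  else
    echo "$v is not set"
  fi
done
```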

Create a TLS server listener policy

Create a TLS server policy:

curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/listeners/$LISTENER_ID/policies" \
     --no-progress-meter --fail-with-body  \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "type": "tls_server",
       "name": "tls_server",
       "config": {
         "certificates": [
           {
             "certificate": "'$CERTIFICATE'",
             "key": "'$KEY'"
           }
         ]
       }
     }'

Create a Forward to virtual cluster policy

Create a Forward to virtual cluster policy that configures SNI and defines a suffix to expose on the listener:

curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/listeners/$LISTENER_ID/policies" \
     --no-progress-meter --fail-with-body  \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "type": "forward_to_virtual_cluster",
       "name": "forward_to_virtual_cluster",
       "config": {
         "type": "sni",
         "advertised_port": 19092,
         "sni_suffix": ".127-0-0-1.sslip.io"
       }
     }'

This policy enables routing to each virtual cluster and maps the brokers as follows:

  • Bootstrap server to bootstrap.analytics.127-0-0-1.sslip.io:19092 or bootstrap.payments.127-0-0-1.sslip.io:19092
  • Broker 1 to broker-0.analytics.127-0-0-1.sslip.io:19092 or broker-0.payments.127-0-0-1.sslip.io:19092
  • Broker 2 to broker-1.analytics.127-0-0-1.sslip.io:19092 or broker-1.payments.127-0-0-1.sslip.io:19092
  • Broker 3 to broker-2.analytics.127-0-0-1.sslip.io:19092 or broker-2.payments.127-0-0-1.sslip.io:19092
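You can check the TLS side of this routing before involving any Kafka client. This probe assumes the data plane is listening locally on 19092 and uses the rootCA.crt generated earlier; on success it prints the subject of the certificate the listener presented:

```shell
# Handshake against the gateway with SNI set, then print the presented
# certificate's subject.
openssl s_client -connect bootstrap.analytics.127-0-0-1.sslip.io:19092 \
  -servername bootstrap.analytics.127-0-0-1.sslip.io \
  -CAfile ./rootCA.crt </dev/null 2>/dev/null | openssl x509 -noout -subject
```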

Validate

Get a list of topics from the analytics virtual cluster:

kafkactl -C kafkactl.yaml --context analytics list topics

You should see the following result:

TOPIC            PARTITIONS     REPLICATION FACTOR
clicks           1              1
orders           1              1
pageviews        1              1
user_actions     1              1

Get a list of topics from the payments virtual cluster:

kafkactl -C kafkactl.yaml --context payments list topics

You should see the following result:

TOPIC            PARTITIONS     REPLICATION FACTOR
orders           1              1
refunds          1              1
transactions     1              1
user_actions     1              1

You can reach both virtual clusters with a single certificate and through a single port.
