Get started with Kong Event Gateway

Incompatible with: on-prem

TL;DR

Get started with Kong Event Gateway by setting up a Konnect control plane and data plane, then configuring a backend cluster, virtual cluster, listener, and policies with the Kong Event Gateway API.

Note: This quickstart runs a pre-configured demo Docker container to explore Kong Event Gateway’s capabilities. If you want to run Kong Event Gateway as part of a production-ready platform, set up your control plane and data planes through the Konnect UI, or using Terraform.

Prerequisites

Install kafkactl. You’ll need it to interact with Kafka clusters.

Start a Docker Compose cluster with multiple Kafka services.

First, we need to create a docker-compose.yaml file. This file will define the services we want to run in our local environment:

cat <<EOF > docker-compose.yaml
name: kafka_cluster

networks:
  kafka:
    name: kafka_event_gateway

services:
  kafka1:
    image: apache/kafka:4.1.1
    networks:
      - kafka
    container_name: kafka1
    ports:
      - "9094:9094"
    environment:
      KAFKA_NODE_ID: 0
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENERS: INTERNAL://kafka1:9092,CONTROLLER://kafka1:9093,EXTERNAL://0.0.0.0:9094
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:9092,EXTERNAL://localhost:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CONTROLLER_QUORUM_VOTERS: 0@kafka1:9093,1@kafka2:9093,2@kafka3:9093
      KAFKA_CLUSTER_ID: 'abcdefghijklmnopqrstuv'
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs

  kafka2:
    image: apache/kafka:4.1.1
    networks:
      - kafka
    container_name: kafka2
    ports:
      - "9095:9095"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENERS: INTERNAL://kafka2:9092,CONTROLLER://kafka2:9093,EXTERNAL://0.0.0.0:9095
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka2:9092,EXTERNAL://localhost:9095
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CONTROLLER_QUORUM_VOTERS: 0@kafka1:9093,1@kafka2:9093,2@kafka3:9093
      KAFKA_CLUSTER_ID: 'abcdefghijklmnopqrstuv'
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs

  kafka3:
    image: apache/kafka:4.1.1
    networks:
      - kafka
    container_name: kafka3
    ports:
      - "9096:9096"
    environment:
      KAFKA_NODE_ID: 2
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENERS: INTERNAL://kafka3:9092,CONTROLLER://kafka3:9093,EXTERNAL://0.0.0.0:9096
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka3:9092,EXTERNAL://localhost:9096
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CONTROLLER_QUORUM_VOTERS: 0@kafka1:9093,1@kafka2:9093,2@kafka3:9093
      KAFKA_CLUSTER_ID: 'abcdefghijklmnopqrstuv'
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
EOF

Now, let’s start the local setup:

docker compose up -d
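The brokers take a few seconds to elect a controller and start accepting connections. If a later step can't connect, a small poll helper like the following (our own sketch, not part of the quickstart, using `nc`) can confirm that the EXTERNAL ports are up:

```shell
# wait_for_port is a hypothetical helper for this guide: it polls a TCP
# port once per second until it accepts connections or attempts run out
wait_for_port() {
  # $1: host, $2: port, $3: max attempts (1 second apart)
  i=0
  while [ "$i" -lt "$3" ]; do
    if nc -z "$1" "$2" 2>/dev/null; then
      echo "up: $1:$2"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timeout: $1:$2"
  return 1
}
```

For example, `wait_for_port localhost 9094 30` waits up to 30 seconds for the first broker's EXTERNAL listener.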

If you don’t have a Konnect account, you can get started quickly with our onboarding wizard.

  1. The following Konnect items are required to complete this tutorial:
    • Personal access token (PAT): Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
  2. Set the personal access token as an environment variable:

    export KONNECT_TOKEN='YOUR KONNECT TOKEN'
    

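Before continuing, it can help to confirm the token is actually exported in your current shell. A small sketch (the `check_env` helper is our own, not part of Konnect):

```shell
# check_env is a hypothetical helper for this guide: it reports whether a
# named environment variable is set and non-empty
check_env() {
  if [ -n "$(eval echo "\$$1")" ]; then
    echo "OK: $1"
  else
    echo "MISSING: $1"
  fi
}

check_env KONNECT_TOKEN
```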
Create an Event Gateway in Konnect

Run the quickstart script to automatically provision a demo Kong Event Gateway control plane and data plane, and configure your environment:

curl -Ls https://get.konghq.com/event-gateway | bash -s -- -k $KONNECT_TOKEN -N kafka_event_gateway

This sets up a Kong Event Gateway control plane named event-gateway-quickstart, provisions a local data plane, and prints the following environment variable export:

export EVENT_GATEWAY_ID=your-gateway-id

Copy and paste this into your terminal to configure your session.

This quickstart script is meant for demo purposes only, so it runs locally with default parameters and a small number of exposed ports. If you want to run Kong Event Gateway as part of a production-ready platform, set up your control plane and data planes through the Konnect UI, or using Terraform.

Add a backend cluster

Backend clusters are abstractions of your real Kafka clusters, and they store connection and configuration details required for Kong Event Gateway to proxy traffic to Kafka. You need at least one backend cluster.

Run the following command to create a new backend cluster linked to the local Kafka server we created in the prerequisites:

BACKEND_CLUSTER_ID=$(curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/backend-clusters" \
     --no-progress-meter --fail-with-body  \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "name": "default_backend_cluster",
       "bootstrap_servers": [
         "kafka1:9092",
         "kafka2:9092",
         "kafka3:9092"
       ],
       "authentication": {
         "type": "anonymous"
       },
       "insecure_allow_anonymous_virtual_cluster_auth": true,
       "tls": {
         "enabled": false
       }
     }' | jq -r ".id")

In this example configuration:

  • bootstrap_servers: Points the backend cluster to the three bootstrap servers that we launched in the prerequisites.
  • authentication and insecure_allow_anonymous_virtual_cluster_auth: For demo purposes, we’re allowing insecure anonymous connections, which means no authentication is required.
  • tls: TLS is disabled so that we can easily test the connection.
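If you prefer to keep the request body in a file (handy for review and reuse), you can validate it with jq before sending it; curl's --json flag also accepts @file syntax. A sketch of the same configuration as above:

```shell
# Same backend cluster config as the inline request body, kept in a file
cat <<'EOF' > backend-cluster.json
{
  "name": "default_backend_cluster",
  "bootstrap_servers": ["kafka1:9092", "kafka2:9092", "kafka3:9092"],
  "authentication": { "type": "anonymous" },
  "insecure_allow_anonymous_virtual_cluster_auth": true,
  "tls": { "enabled": false }
}
EOF

# Sanity-check the JSON and confirm all three bootstrap servers are present
jq -r '.bootstrap_servers | length' backend-cluster.json
```

You could then send it with `--json @backend-cluster.json` in place of the inline body.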

Add a virtual cluster

Virtual clusters are the connection point for Kafka clients. Instead of connecting clients directly to your Kafka cluster, you can set up virtual clusters to customize how clients connect, and what requirements they need to have. From the client’s point of view, they’re just connecting to a regular Kafka cluster.

Virtual clusters provide environment isolation and let you enforce policies, manage authentication, and more. Each virtual cluster can connect to one backend cluster, though a backend cluster can have many virtual clusters connected to it.

Run the following command to create a new virtual cluster associated with our backend cluster:

VIRTUAL_CLUSTER_ID=$(curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/virtual-clusters" \
     --no-progress-meter --fail-with-body  \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "name": "example_virtual_cluster",
       "destination": {
         "id": "'$BACKEND_CLUSTER_ID'"
       },
       "dns_label": "vcluster-1",
       "authentication": [
         {
           "type": "anonymous"
         }
       ],
       "acl_mode": "passthrough"
     }' | jq -r ".id")

In this example:

  • authentication: Allows anonymous authentication.
  • acl_mode: The setting passthrough means that all clients are allowed and don’t have to match a defined ACL. In a production environment, you would set this to enforce_on_gateway and define an ACL policy.
  • name and dns_label: name is an internal identifier for the configuration object, while dns_label is required for SNI routing.

Add a listener

A listener is a hostname-port or IP-port combination on which Kong Event Gateway accepts TCP connections. In this example, we’re going to use port mapping, so we need to expose a range of ports.

Run the following command to create a new listener:

LISTENER_ID=$(curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/listeners" \
     --no-progress-meter --fail-with-body  \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "name": "example_listener",
       "addresses": [
         "0.0.0.0"
       ],
       "ports": [
         "19092-19095"
       ]
     }' | jq -r ".id")
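The range 19092-19095 gives the gateway four ports to work with, which under port mapping is enough for a bootstrap address plus one dedicated port per broker in our three-broker cluster (one port per broker is our working assumption here). To see the individual ports covered by the range:

```shell
# Expand the listener port range into its individual ports
range="19092-19095"
seq "${range%-*}" "${range#*-}"
```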

Add a listener policy

The listener needs a policy to tell it how to process requests and where to send them. In this example, we’re going to use the Forward to Virtual Cluster policy, which forwards requests to our virtual cluster based on a defined mapping.

Run the following command to add the listener policy:

curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/listeners/$LISTENER_ID/policies" \
     --no-progress-meter --fail-with-body  \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "type": "forward_to_virtual_cluster",
       "name": "forward",
       "config": {
         "type": "port_mapping",
         "advertised_host": "localhost",
         "destination": {
           "id": "'$VIRTUAL_CLUSTER_ID'"
         }
       }
     }'

For demo purposes, we’re using port mapping, which assigns each Kafka broker to a dedicated port on the Event Gateway. In production, we recommend using SNI routing instead.

Add a virtual cluster policy

Now, let’s add a policy to the virtual cluster so we can test our proxy. For this example, let’s add a Modify Headers policy, which lets you set or remove record headers:

curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/virtual-clusters/$VIRTUAL_CLUSTER_ID/consume-policies" \
     --no-progress-meter --fail-with-body  \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "type": "modify_headers",
       "name": "new-header",
       "config": {
         "actions": [
           {
             "op": "set",
             "key": "My-New-Header",
             "value": "header_value"
           }
         ]
       }
     }'

This policy configuration sets the custom header My-New-Header: header_value on all records consumed through this virtual cluster.

Configure the Kafka cluster

Now that we’ve configured the proxy, let’s make sure the Kafka cluster is ready.

In your local environment, set up the kafkactl.yaml config file for your Kafka cluster:

cat <<EOF > kafkactl.yaml
contexts:
  direct:
    brokers:
      - localhost:9095
      - localhost:9096
      - localhost:9094
  vc:
    brokers:
      - localhost:19092
EOF

This file defines two configuration profiles:

  • direct: Connection addresses to all of the bootstrap servers you launched in the prerequisites, and configured in the backend cluster. Accessing the direct context will bypass the Kong Event Gateway proxy and connect directly to your Kafka cluster.
  • vc: Connection to the virtual cluster. Accessing the vc context will pass requests through the virtual cluster.

We’re going to switch between these profiles as we test different features.
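As a quick way to see which addresses each context dials, you can print the brokers per context. This sketch recreates the config file from the previous step so it stands alone:

```shell
# Recreate the kafkactl config from the previous step so this stands alone
cat <<'EOF' > kafkactl.yaml
contexts:
  direct:
    brokers:
      - localhost:9095
      - localhost:9096
      - localhost:9094
  vc:
    brokers:
      - localhost:19092
EOF

# Print each context name alongside its broker addresses
awk '/^  [a-z]+:$/ { ctx = $1 } /localhost:/ { print ctx, $2 }' kafkactl.yaml
```

The direct context dials the three EXTERNAL broker ports; the vc context dials only the gateway's bootstrap port.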

Validate the cluster

Let’s check that the cluster works using kafkactl. First, create a topic using the direct context, which is a direct connection to our Kafka cluster:

kafkactl -C kafkactl.yaml --context direct create topic my-test-topic

Produce a message to make sure it worked:

kafkactl -C kafkactl.yaml --context direct produce my-test-topic --value="Hello World"

You should see the following response:

topic created: my-test-topic
message produced (partition=0	offset=0)

Now let’s test that our Modify Headers policy is applying the My-New-Header header. With the vc context, kafkactl connects to Kafka through the proxy on port 19092.

First, produce a message:

kafkactl -C kafkactl.yaml --context vc produce my-test-topic --value="test message"

Consume the my-test-topic from the beginning while passing the --print-headers flag:

kafkactl -C kafkactl.yaml --context vc consume my-test-topic --print-headers --from-beginning --exit

The output should contain your new header:

My-New-Header:header_value

You now have a Kafka cluster running with an Event Gateway proxy in front, and the proxy is applying your custom policies.

FAQs

If you can’t connect to the gateway, check the following:

  • Verify all services are running with docker ps.
  • Check that the required ports are available (in this how-to guide, we use 19092 for the proxy and 9094-9096 for Kafka). For example, on a Unix-based system, you could use lsof -i -P | grep 909.
  • Ensure that all environment variables are set correctly.

If topics or messages aren’t visible through the proxy, troubleshoot your setup by doing the following:

  • Verify that your Kafka broker is healthy.
  • Check if you’re using the correct kafkactl context.
  • Ensure that the proxy is properly connected to the backend cluster.
  • Ensure that acl_mode is set to passthrough in the virtual cluster. If set to enforce_on_gateway, you won’t see any topics listed without an ACL policy.