Get started with Kong Native Event Proxy

This feature is in beta and uses the Event Gateway Konnect API.
TL;DR

Get started with Kong Native Event Proxy (KNEP) by setting up a Konnect control plane and a Kafka cluster, then configuring the control plane using the /declarative-config endpoint of the Control Plane Config API.

Prerequisites

If you don’t have a Konnect account, you can get started quickly with our onboarding wizard.

  1. Create a personal access token (PAT) by opening the Konnect PAT page and selecting Generate Token.
  2. Set the personal access token as an environment variable:

    export KONNECT_TOKEN='YOUR KONNECT TOKEN'
    
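Optionally, sanity-check the token before continuing. This assumes the us region API hostname used throughout this guide; an authorized request should return HTTP 200, while 401 indicates a bad or expired token:

```shell
# Print only the HTTP status code of an authenticated Konnect API call.
curl -s -o /dev/null -w "%{http_code}\n" \
  "https://us.api.konghq.com/v2/control-planes" \
  -H "Authorization: Bearer $KONNECT_TOKEN"
```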

If you’re an existing Kong customer or prospect, please fill out the beta participation form and we will reach out to you.

Install kafkactl. You’ll need it to interact with Kafka clusters.

Kong Native Event Proxy lets you configure virtual clusters, which act as a proxy interface between the client and the Kafka cluster. With virtual clusters, you can:

  • Apply transformations, filtering, and custom policies
  • Route messages based on specific rules to different Kafka clusters
  • Apply auth mediation, message encryption, and more

Now, let’s configure a proxy and test your first virtual cluster setup.

Create a Control Plane in Konnect

Use the Konnect API to create a new CLUSTER_TYPE_KAFKA_NATIVE_EVENT_PROXY Control Plane:

KONNECT_CONTROL_PLANE_ID=$( curl -X POST "https://us.api.konghq.com/v2/control-planes" \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "name": "KNEP getting started",
       "cluster_type": "CLUSTER_TYPE_KAFKA_NATIVE_EVENT_PROXY"
     }' | jq -r '.id')
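Before moving on, it's worth confirming that the control plane ID was captured; an empty or null value usually means the request failed (for example, a bad token or the wrong region):

```shell
# The ID should be a UUID. If it is empty or "null", re-check your
# KONNECT_TOKEN and the API hostname before continuing.
if [ -z "$KONNECT_CONTROL_PLANE_ID" ] || [ "$KONNECT_CONTROL_PLANE_ID" = "null" ]; then
  echo "Control plane creation failed" >&2
else
  echo "Control plane ID: $KONNECT_CONTROL_PLANE_ID"
fi
```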

Start a local Kafka cluster

We'll start a Docker Compose stack with Kafka, KNEP, the Confluent Schema Registry, and a Kafka UI.

First, we need to create a docker-compose.yaml file. This file will define the services we want to run in our local environment:

cat <<EOF > docker-compose.yaml
services:
  kafka:
    image: apache/kafka:3.9.0
    container_name: kafka
    ports:
      - "9092:19092"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENERS: INTERNAL://kafka:9092,CONTROLLER://kafka:9093,EXTERNAL://0.0.0.0:19092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,EXTERNAL://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
      KAFKA_CLUSTER_ID: 'abcdefghijklmnopqrstuv'
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs

  schema-registry:
      image: confluentinc/cp-schema-registry:latest
      container_name: schema-registry
      depends_on:
        - kafka
      ports:
        - "8081:8081"
      environment:
        SCHEMA_REGISTRY_HOST_NAME: schema-registry
        SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:9092
        SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
      healthcheck:
        test: curl -f http://localhost:8081/subjects
        interval: 10s
        timeout: 5s
        retries: 5
  
  knep:
    image: kong/kong-native-event-proxy:latest
    container_name: knep
    ports:
      - "8080:8080"
      - "19092:19092"
    env_file: "knep.env"
    environment:
      KONNECT_API_TOKEN: ${KONNECT_TOKEN}
      KONNECT_API_HOSTNAME: us.api.konghq.com
      KONNECT_CONTROL_PLANE_ID: ${KONNECT_CONTROL_PLANE_ID}
      KNEP__RUNTIME__DRAIN_DURATION: 1s # makes shutdown quicker, not recommended to be set like this in production 
      # KNEP__OBSERVABILITY__LOG_FLAGS: "info,knep=debug" # Uncomment for debug logging
    healthcheck:
      test: curl -f http://localhost:8080/health/probes/liveness
      interval: 10s
      timeout: 5s
      retries: 5
  
  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    container_name: kafka-ui
    environment:
      # First cluster configuration (direct Kafka connection)
      KAFKA_CLUSTERS_0_NAME: "direct-kafka-cluster"
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: "kafka:9092"
      KAFKA_CLUSTERS_0_SCHEMAREGISTRY: "http://schema-registry:8081"

      # Second cluster configuration (KNEP proxy connection)
      KAFKA_CLUSTERS_1_NAME: "knep-proxy-cluster"
      KAFKA_CLUSTERS_1_BOOTSTRAPSERVERS: "knep:9092"
      KAFKA_CLUSTERS_1_SCHEMAREGISTRY: "http://schema-registry:8081"
      
      SERVER_PORT: 8082
    ports:
      - "8082:8082"
EOF

Note that the above config publishes the following ports to the host:

  • kafka:9092 for plaintext access to the Kafka broker
  • schema-registry:8081 for access to the schema registry
  • kafka-ui:8082 for access to the Kafka UI
  • knep:19092 for access to the KNEP proxy (virtual clusters that route by port each claim a port starting at 19092, so publish a wider range if you configure more of them)
  • knep:8080 for probes and metrics access to KNEP

The KNEP container will also read environment variables from the knep.env file. Let’s create it:

cat <<EOF > knep.env
KONNECT_API_TOKEN=${KONNECT_TOKEN}
KONNECT_API_HOSTNAME=us.api.konghq.com
KONNECT_CONTROL_PLANE_ID=${KONNECT_CONTROL_PLANE_ID}
EOF

Now let’s start the local setup:

docker compose up -d

Let’s look at the logs of the KNEP container to see if it started correctly:

docker compose logs knep

You should see something like this:

knep  | 2025-04-30T08:59:58.004076Z  WARN tokio-runtime-worker ThreadId(09) add_task{task_id="konnect_watch_config"}:task_run:check_dataplane_config{cp_config_url="/v2/control-planes/c6d325ec-0bd6-4fbc-b2c1-6a56c0a3edb0/declarative-config/native-event-proxy"}: knep::konnect: src/konnect/mod.rs:218: Konnect API returned 404, is the control plane ID correct?

This is expected, as we haven’t configured the Control Plane yet. We’ll do this in the next step.
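You can also query the proxy’s liveness probe directly (the same endpoint the compose healthcheck uses); HTTP 200 means the KNEP process is up, even before a control plane configuration is in place:

```shell
# Returns 200 once the proxy process is healthy; this does not imply
# that a control plane configuration has been loaded yet.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/health/probes/liveness
```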

Configure Kong Native Event Proxy control plane with a passthrough cluster

Create the configuration file for the Control Plane. This file will define the backend cluster and the virtual cluster:

cat <<EOF > knep-config.yaml
backend_clusters:
  - name: kafka-localhost
    bootstrap_servers:
      - kafka:9092

listeners:
  port:
    - listen_address: 0.0.0.0
      listen_port_start: 19092
      advertised_host: localhost

virtual_clusters:
  - name: team-a
    backend_cluster_name: kafka-localhost
    route_by:
      type: port
      port:
        min_broker_id: 1
    authentication:
      - type: anonymous
        mediation:
          type: anonymous
EOF

Update the control plane and data plane

Update the Control Plane using the /declarative-config endpoint:

 curl -X PUT "https://us.api.konghq.com/v2/control-planes/$KONNECT_CONTROL_PLANE_ID/declarative-config" \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json "$(jq -Rs '{config: .}' < knep-config.yaml)"

Restart your data plane to apply the configuration:

docker restart knep

This might take a few seconds.
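To confirm that the data plane picked up the new configuration, check the recent KNEP logs; the 404 warning from earlier should no longer appear:

```shell
# grep exits non-zero when no line matches, so a clean recent log
# prints the confirmation message instead.
docker compose logs --since 30s knep | grep "404" || echo "no 404 errors in recent logs"
```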

Configure kafkactl

Set up the kafkactl config file:

cat <<EOF > kafkactl.yaml
contexts:
  direct:
    brokers:
      - localhost:9092
  backend:
    brokers:
      - localhost:9092
  knep:
    brokers:
      - localhost:19092
  secured:
    brokers:
      - localhost:29092
  team-a:
    brokers:
      - localhost:19092
  team-b:
    brokers:
      - localhost:29092
current-context: direct
EOF

This file defines several configuration profiles. We’re going to switch between these profiles as we test different features.
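Rather than passing --context on every command, you can also switch the active profile with kafkactl’s config use-context subcommand:

```shell
# Make the knep profile the default for subsequent kafkactl commands.
kafkactl -C kafkactl.yaml config use-context knep
```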

Validate the cluster

Let’s check that the cluster works. We can use the Kafka UI to do this by going to http://localhost:8082 and checking the cluster list. You should see the direct-kafka-cluster and knep-proxy-cluster clusters listed there.

You can also use the kafkactl command to check the cluster. Let’s check the Kafka cluster directly:

kafkactl -C kafkactl.yaml --context direct create topic my-test-topic
kafkactl -C kafkactl.yaml --context direct produce my-test-topic --value="Hello World"

These commands use the direct context, which in this case is a direct connection to our Kafka cluster.

You should see the following response:

topic created: my-test-topic
message produced (partition=0	offset=0)

Now let’s check the Kafka cluster through the KNEP proxy. By passing the knep context, kafkactl will connect to Kafka through the proxy port 19092:

kafkactl -C kafkactl.yaml --context knep list topics

You should see a list of the topics you just created:

TOPIC              PARTITIONS     REPLICATION FACTOR
_schemas           1              1
my-test-topic      1              1

You now have a Kafka cluster running with a KNEP proxy in front.
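As a final round-trip check, produce a message through the proxy and consume it back. Because this virtual cluster is a passthrough, a message produced via the knep context is also visible from the direct context:

```shell
# Produce through the KNEP proxy, then read the topic from the beginning;
# --exit stops the consumer once it reaches the end of the topic.
kafkactl -C kafkactl.yaml --context knep produce my-test-topic --value="Hello via KNEP"
kafkactl -C kafkactl.yaml --context knep consume my-test-topic --from-beginning --exit
```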

FAQs

If the containers fail to start, check the following:

  • Verify all services are running with docker ps
  • Check that the required ports are available (in this how-to guide, we use 19092 for the proxy and 9092 for Kafka)
  • Ensure that all KONNECT environment variables are set correctly

If clients can’t connect through the proxy, troubleshoot your setup by doing the following:

  • Verify that your Kafka broker is healthy
  • Check if you’re using the correct kafkactl context
  • Ensure that the proxy is properly connected to the backend cluster