cat <<EOF > kafkactl.yaml
contexts:
  direct:
    brokers:
      - localhost:9095
      - localhost:9096
      - localhost:9094
  producer:
    brokers:
      - localhost:19092
    tls:
      enabled: true
      ca: ./server.crt
      cert: ./producer.crt
      certKey: ./producer.key
      insecure: false
  consumer:
    brokers:
      - localhost:19092
    tls:
      enabled: true
      ca: ./server.crt
      cert: ./consumer.crt
      certKey: ./consumer.key
      insecure: false
  no_cert:
    brokers:
      - localhost:19092
    tls:
      enabled: true
      ca: ./server.crt
      insecure: false
EOF
Configure mTLS client authentication with Kong Event Gateway
- Generate a CA certificate, a server certificate, and client certificates for each principal.
- Create a TLS trust bundle and a TLS server listener policy with client_authentication set to required.
- Create a virtual cluster with client_certificate authentication and ACL policies that restrict access based on the certificate principal name.
Prerequisites
Install kafkactl
Install kafkactl. You’ll need it to interact with Kafka clusters.
Start a local Kafka cluster
Start a Docker Compose cluster with multiple Kafka services.
First, we need to create a docker-compose.yaml file. This file will define the services we want to run in our local environment:
cat <<EOF > docker-compose.yaml
name: kafka_cluster
networks:
  kafka:
    name: kafka_event_gateway
services:
  kafka1:
    image: apache/kafka:4.2.0
    networks:
      - kafka
    container_name: kafka1
    ports:
      - "9094:9094"
    environment:
      KAFKA_NODE_ID: 0
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENERS: INTERNAL://kafka1:9092,CONTROLLER://kafka1:9093,EXTERNAL://0.0.0.0:9094
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:9092,EXTERNAL://localhost:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CONTROLLER_QUORUM_VOTERS: 0@kafka1:9093,1@kafka2:9093,2@kafka3:9093
      KAFKA_CLUSTER_ID: 'abcdefghijklmnopqrstuv'
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
  kafka2:
    image: apache/kafka:4.2.0
    networks:
      - kafka
    container_name: kafka2
    ports:
      - "9095:9095"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENERS: INTERNAL://kafka2:9092,CONTROLLER://kafka2:9093,EXTERNAL://0.0.0.0:9095
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka2:9092,EXTERNAL://localhost:9095
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CONTROLLER_QUORUM_VOTERS: 0@kafka1:9093,1@kafka2:9093,2@kafka3:9093
      KAFKA_CLUSTER_ID: 'abcdefghijklmnopqrstuv'
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
  kafka3:
    image: apache/kafka:4.2.0
    networks:
      - kafka
    container_name: kafka3
    ports:
      - "9096:9096"
    environment:
      KAFKA_NODE_ID: 2
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENERS: INTERNAL://kafka3:9092,CONTROLLER://kafka3:9093,EXTERNAL://0.0.0.0:9096
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka3:9092,EXTERNAL://localhost:9096
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CONTROLLER_QUORUM_VOTERS: 0@kafka1:9093,1@kafka2:9093,2@kafka3:9093
      KAFKA_CLUSTER_ID: 'abcdefghijklmnopqrstuv'
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
EOF
Now, let’s start the local setup:
docker compose up -d
Kong Konnect
If you don’t have a Konnect account, you can get started quickly with our onboarding wizard.
The following Konnect items are required to complete this tutorial:
- Personal access token (PAT): Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
- Set the personal access token as an environment variable:
export KONNECT_TOKEN='YOUR KONNECT TOKEN'
Kong Event Gateway running
Run the quickstart script to automatically provision a demo Kong Gateway control plane and data plane, and configure your environment:
curl -Ls https://get.konghq.com/event-gateway | bash -s -- -k $KONNECT_TOKEN -N kafka_event_gateway
This sets up a Kong Gateway control plane named event-gateway-quickstart, provisions a local data plane, and prints the following environment variable export:
export EVENT_GATEWAY_ID=your-gateway-id
Copy and paste the command with your Event Gateway ID into your terminal to configure your session.
This quickstart script is meant for demo purposes only; it runs locally with mostly default parameters and a small number of exposed ports. If you want to run Kong Gateway as part of a production-ready platform, set up your control plane and data planes through the Konnect UI, or using Terraform.
Overview
Mutual TLS (mTLS) secures client-to-gateway communication by requiring clients to present a certificate during the TLS handshake. Event Gateway verifies the client certificate against a TLS trust bundle that contains one or more trusted CA certificates.
When combined with client_certificate authentication on a virtual cluster, the certificate’s Common Name (CN) becomes the client’s principal name. This allows you to enforce fine-grained access control using ACL policies.
flowchart LR
    C[Kafka Client] -->|1. TLS handshake + client certificate| L
    subgraph gw [Event Gateway]
        L[Listener] -->|2. Verify against trust bundle| TB[TLS Trust Bundle]
        L -->|3. Route| VC[Virtual Cluster]
        VC -->|4. Extract principal from certificate CN| ACL[ACL Policy]
    end
    ACL -->|5. Proxy| K[Kafka Broker]
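Steps 2 and 4 above can be previewed locally with openssl before any gateway configuration exists. The following is a standalone sketch with throwaway certificates and hypothetical file names (demo-ca.crt, client.crt); it is separate from the certificates generated later in this guide.

```shell
# Standalone sketch (hypothetical file names): a throwaway CA stands in for
# the trust bundle, and `openssl verify` performs the same chain check the
# gateway runs on a client certificate during the mTLS handshake.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# A CA certificate, playing the role of the TLS trust bundle
openssl genrsa -out demo-ca.key 2048 2>/dev/null
openssl req -new -x509 -key demo-ca.key -out demo-ca.crt -days 1 -subj "/CN=Demo CA"

# A client certificate signed by that CA
openssl genrsa -out client.key 2048 2>/dev/null
openssl req -new -key client.key -out client.csr -subj "/CN=producer-client"
openssl x509 -req -in client.csr -CA demo-ca.crt -CAkey demo-ca.key \
  -CAcreateserial -out client.crt -days 1 2>/dev/null

# The chain check succeeds for the signed certificate ...
openssl verify -CAfile demo-ca.crt client.crt    # prints "client.crt: OK"

# ... and fails for a certificate the CA never signed
openssl req -x509 -newkey rsa:2048 -nodes -keyout other.key -out other.crt \
  -days 1 -subj "/CN=untrusted" 2>/dev/null
openssl verify -CAfile demo-ca.crt other.crt 2>/dev/null || echo "rejected, as the gateway would"
```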
The client_authentication configuration on the TLS server policy supports two modes:
| Mode | Behavior |
|---|---|
| required | The client must present a valid certificate. Connections without a certificate are rejected. |
| requested | The gateway requests a certificate but allows connections without one. If a certificate is presented but cannot be verified, the connection is closed. |
This guide uses required mode to enforce mTLS for all connections, and sets up ACL policies to give different permissions to two clients based on their certificate identity.
Generate certificates
For this guide, we generate self-signed certificates for testing. In production, use certificates issued by your organization’s CA.
- Generate a CA key and certificate:

openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -out ca.crt -days 365 \
  -subj "/CN=Demo mTLS CA/O=Kong Demo/C=EU" \
  -addext "basicConstraints=critical,CA:TRUE" \
  -addext "keyUsage=critical,keyCertSign,cRLSign"

- Generate a client certificate for the producer client:

openssl genrsa -out producer.key 2048
openssl req -new -key producer.key -out producer.csr \
  -subj "/CN=producer-client/O=Kong Demo/C=EU"
openssl x509 -req -in producer.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out producer.crt -days 90 \
  -extfile <(printf "basicConstraints=CA:FALSE\nkeyUsage=digitalSignature,keyEncipherment\nextendedKeyUsage=clientAuth")

- Generate a client certificate for the consumer client:

openssl genrsa -out consumer.key 2048
openssl req -new -key consumer.key -out consumer.csr \
  -subj "/CN=consumer-client/O=Kong Demo/C=EU"
openssl x509 -req -in consumer.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out consumer.crt -days 90 \
  -extfile <(printf "basicConstraints=CA:FALSE\nkeyUsage=digitalSignature,keyEncipherment\nextendedKeyUsage=clientAuth")

- Generate a server certificate for the gateway listener:

openssl genrsa -out server.key 2048
openssl req -new -x509 -key server.key -out server.crt -days 365 \
  -subj "/CN=localhost/O=Kong Demo/C=EU" \
  -addext "subjectAltName=DNS:localhost"

- Export the certificates and key to environment variables:
export CA_CERT="$(awk '{printf "%s\\n", $0}' ca.crt)"
export SERVER_CERT="$(awk '{printf "%s\\n", $0}' server.crt)"
export SERVER_KEY="$(base64 < server.key | tr -d '\n')"
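The awk command above is just newline escaping: it rewrites each line ending as the two literal characters \ and n, so the multi-line PEM file can be spliced into the single-line JSON bodies used in the API calls below. A minimal illustration:

```shell
# awk rewrites each newline as the literal two characters "\" + "n",
# flattening a multi-line file into one JSON-safe line.
printf 'BEGIN\nEND\n' | awk '{printf "%s\\n", $0}'
# emits the single line: BEGIN\nEND\n
```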
Create a backend cluster
Use the following command to create a backend cluster that connects to the Kafka servers you set up:
BACKEND_CLUSTER_ID=$(curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/backend-clusters" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"name": "backend_cluster",
"bootstrap_servers": [
"kafka1:9092",
"kafka2:9092",
"kafka3:9092"
],
"authentication": {
"type": "anonymous"
},
"tls": {
"enabled": false
}
}' | jq -r ".id"
)
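The id captured above is spliced into later requests, so a failed API call silently propagates an empty or null id. A small hypothetical guard function (not part of the Konnect API) can fail fast after each creation call:

```shell
# Hypothetical helper: abort when a captured resource id is empty or "null"
# (jq -r ".id" prints "null" when the API response has no id field).
require_id() {
  name=$1
  id=$2
  if [ -z "$id" ] || [ "$id" = "null" ]; then
    echo "error: $name was not created (id='$id')" >&2
    return 1
  fi
}

# Example usage after the call above:
# require_id "backend cluster" "$BACKEND_CLUSTER_ID"
```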
Create a listener
Run the following command to create a listener:
LISTENER_ID=$(curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/listeners" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"name": "mtls_listener",
"addresses": [
"0.0.0.0"
],
"ports": [
"19092-19095"
]
}' | jq -r ".id"
)
Create a TLS trust bundle
A TLS trust bundle stores CA certificates used to verify client certificates during the mTLS handshake. Create the bundle:
BUNDLE_ID=$(curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/tls-trust-bundles" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"name": "demo-ca-bundle",
"description": "CA certificate for client verification",
"config": {
"trusted_ca": "'$CA_CERT'"
}
}' | jq -r ".id"
)
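Because $CA_CERT is spliced into the request body with quote juggling, a malformed certificate export produces invalid JSON and a confusing API error. A quick way to sanity-check a spliced payload before sending it (a sketch; the CA_CERT value here is a hypothetical stand-in for the escaped PEM exported earlier):

```shell
# Sketch: validate a spliced JSON body with jq before POSTing it.
# CA_CERT is a stand-in for the awk-escaped PEM exported earlier.
CA_CERT='-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n'
payload='{"config": {"trusted_ca": "'"$CA_CERT"'"}}'
printf '%s' "$payload" | jq -e '.config.trusted_ca' >/dev/null && echo "payload is valid JSON"
```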
Create a TLS server listener policy
Create a TLS server policy with client_authentication enabled.
The gateway presents the server certificate to clients and verifies client certificates against the trust bundle.
The principal_mapping field is an expression that extracts a principal name from the client certificate after a successful TLS handshake. The expression has access to a context.certificate variable with the following fields:
| Field | Type | Description |
|---|---|---|
| context.certificate.subject | map | Subject distinguished name as a map. Access individual attributes like context.certificate.subject['CN'] (Common Name) or context.certificate.subject['O'] (Organization). |
| context.certificate.issuer | map | Issuer distinguished name as a map, same format as subject. |
| context.certificate.serialNumber | string | Serial number of the certificate. |
| context.certificate.sans.dns | array | DNS Subject Alternative Names. |
| context.certificate.sans.uri | array | URI Subject Alternative Names. |
If principal_mapping is omitted, the principal defaults to the full subject distinguished name (for example, CN=producer-client, O=Kong Demo, C=EU).
This guide uses context.certificate.subject['CN'] to extract only the Common Name, so the principal becomes producer-client.
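The difference between the two principals can be inspected on any certificate with openssl. This standalone sketch uses a throwaway self-signed certificate (hypothetical file names) with the same subject as the producer certificate:

```shell
# Standalone sketch: compare the default principal (full subject DN) with the
# CN-only principal produced by context.certificate.subject["CN"].
set -e
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$dir/demo.key" \
  -out "$dir/demo.crt" -days 1 -subj "/CN=producer-client/O=Kong Demo/C=EU" 2>/dev/null

# Default principal: the full subject distinguished name
openssl x509 -in "$dir/demo.crt" -noout -subject

# CN-only principal, equivalent to context.certificate.subject["CN"]
openssl x509 -in "$dir/demo.crt" -noout -subject -nameopt multiline \
  | awk '/commonName/ {print $3}'
# prints: producer-client
```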
curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/listeners/$LISTENER_ID/policies" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"type": "tls_server",
"name": "mtls_policy",
"config": {
"certificates": [
{
"certificate": "'$SERVER_CERT'",
"key": "'$SERVER_KEY'"
}
],
"client_authentication": {
"mode": "required",
"principal_mapping": "context.certificate.subject[\"CN\"]",
"tls_trust_bundles": [
{
"id": "'$BUNDLE_ID'"
}
]
}
}
}'
Create a virtual cluster
Create a virtual cluster with client_certificate authentication.
The certificate’s Common Name (CN) is extracted via principal_mapping and used as the principal name for ACL evaluation.
VIRTUAL_CLUSTER_ID=$(curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/virtual-clusters" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"name": "mtls_virtual_cluster",
"destination": {
"id": "'$BACKEND_CLUSTER_ID'"
},
"dns_label": "mtls-vc",
"authentication": [
{
"type": "client_certificate"
}
],
"acl_mode": "enforce_on_gateway"
}' | jq -r ".id"
)
Create a forward-to-virtual-cluster listener policy
Add a forward_to_virtual_cluster listener policy, which forwards client connections to the virtual cluster based on the port mapping defined below:
curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/listeners/$LISTENER_ID/policies" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"type": "forward_to_virtual_cluster",
"name": "forward_to_mtls_vc",
"config": {
"type": "port_mapping",
"advertised_host": "localhost",
"destination": {
"id": "'$VIRTUAL_CLUSTER_ID'"
}
}
}'
Create ACL policies
Create ACL policies that restrict access based on the certificate principal name. The producer client gets write access and the consumer client gets read access.
Producer ACL
Allow the producer-client principal to produce messages and describe topics:
curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/virtual-clusters/$VIRTUAL_CLUSTER_ID/cluster-policies" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"type": "acls",
"name": "producer_acl",
"condition": "context.auth.principal.name == \"producer-client\"",
"config": {
"rules": [
{
"resource_type": "topic",
"action": "allow",
"operations": [
{
"name": "write"
},
{
"name": "describe"
},
{
"name": "describe_configs"
}
],
"resource_names": [
{
"match": "*"
}
]
}
]
}
}'
Consumer ACL
Allow the consumer-client principal to consume messages and manage consumer groups:
curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/virtual-clusters/$VIRTUAL_CLUSTER_ID/cluster-policies" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"type": "acls",
"name": "consumer_acl",
"condition": "context.auth.principal.name == \"consumer-client\"",
"config": {
"rules": [
{
"resource_type": "topic",
"action": "allow",
"operations": [
{
"name": "read"
},
{
"name": "describe"
},
{
"name": "describe_configs"
}
],
"resource_names": [
{
"match": "*"
}
]
},
{
"resource_type": "group",
"action": "allow",
"operations": [
{
"name": "read"
},
{
"name": "describe"
}
],
"resource_names": [
{
"match": "*"
}
]
}
]
}
}'
Configure kafkactl
Set up kafkactl with four contexts:
- direct: connects to the backend Kafka cluster directly, bypassing the gateway
- producer: uses the producer client certificate through the mTLS-protected listener
- consumer: uses the consumer client certificate through the mTLS-protected listener
- no_cert: connects to the gateway with TLS but without a client certificate, for testing mTLS rejection
Create a Kafka topic
Create a test topic using the direct context, which connects directly to the backend Kafka cluster:
kafkactl -C kafkactl.yaml --context direct create topic my-test-topic
Validate
Produce with the producer client
Produce a message through the mTLS-protected listener using the producer context:
kafkactl -C kafkactl.yaml --context producer produce my-test-topic --value="Hello from mTLS producer"
Consume with the consumer client
Consume the message using the consumer context:
kafkactl -C kafkactl.yaml --context consumer consume my-test-topic --from-beginning --exit
You should see:
Hello from mTLS producer
Verify ACL enforcement
Verify that the consumer client cannot produce messages:
kafkactl -C kafkactl.yaml --context consumer produce my-test-topic --value="This should fail"
The operation fails because the consumer-client principal only has read access.
Connect without a client certificate
Verify that the gateway rejects connections without a client certificate using the no_cert context, which has TLS enabled but does not present a client certificate:
kafkactl -C kafkactl.yaml --context no_cert get topics
The connection fails because client_authentication.mode is set to required and no client certificate was presented.
Cleanup
Clean up Kong Event Gateway resources
When you’re done experimenting with this example, clean up the resources:
- If you created a new Event Gateway control plane and want to conserve your free trial credits or avoid unnecessary charges, delete the new control plane used in this tutorial.
- Stop and remove the containers:

docker compose down

This stops all services and removes the containers, but preserves your configuration files for future use.