Set up Kong Event Gateway with Kong Identity OAuth
- Create a Kong Identity auth server, scope, claim and client.
- Create a Kong Event Gateway with a virtual cluster that can verify OAuth tokens from clients.
- Create an ACL policy to restrict access to a specific client.
Prerequisites
Install kafkactl
Install kafkactl. You’ll need it to interact with Kafka clusters. Version 5.17.0 or later is required to support script-driven OAuth token generation.
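If you’re unsure whether your build is recent enough, a `sort -V` comparison works as a quick check. The `grep` pattern below is an assumption about the shape of `kafkactl version` output; adjust it to match what your build prints:

```shell
# Minimum version for script-driven OAuth token generation.
REQUIRED="5.17.0"
# Substitute the real value, e.g.:
#   INSTALLED=$(kafkactl version | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -1)
INSTALLED="5.17.0"
# sort -V orders version strings; if REQUIRED sorts first (or ties),
# the installed version is new enough.
if [ "$(printf '%s\n%s\n' "$REQUIRED" "$INSTALLED" | sort -V | head -1)" = "$REQUIRED" ]; then
  echo "kafkactl $INSTALLED is new enough"
else
  echo "kafkactl $INSTALLED is too old; need >= $REQUIRED" >&2
fi
```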
Start a local Kafka cluster
Start a Docker Compose cluster with multiple Kafka services.
First, create a docker-compose.yaml file that defines the services we want to run in our local environment:
cat <<EOF > docker-compose.yaml
name: kafka_cluster
networks:
kafka:
name: kafka_event_gateway
services:
kafka1:
image: apache/kafka:4.1.1
networks:
- kafka
container_name: kafka1
ports:
- "9094:9094"
environment:
KAFKA_NODE_ID: 0
KAFKA_PROCESS_ROLES: broker,controller
KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
KAFKA_LISTENERS: INTERNAL://kafka1:9092,CONTROLLER://kafka1:9093,EXTERNAL://0.0.0.0:9094
KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:9092,EXTERNAL://localhost:9094
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
KAFKA_CONTROLLER_QUORUM_VOTERS: 0@kafka1:9093,1@kafka2:9093,2@kafka3:9093
KAFKA_CLUSTER_ID: 'abcdefghijklmnopqrstuv'
KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
kafka2:
image: apache/kafka:4.1.1
networks:
- kafka
container_name: kafka2
ports:
- "9095:9095"
environment:
KAFKA_NODE_ID: 1
KAFKA_PROCESS_ROLES: broker,controller
KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
KAFKA_LISTENERS: INTERNAL://kafka2:9092,CONTROLLER://kafka2:9093,EXTERNAL://0.0.0.0:9095
KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka2:9092,EXTERNAL://localhost:9095
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
KAFKA_CONTROLLER_QUORUM_VOTERS: 0@kafka1:9093,1@kafka2:9093,2@kafka3:9093
KAFKA_CLUSTER_ID: 'abcdefghijklmnopqrstuv'
KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
kafka3:
image: apache/kafka:4.1.1
networks:
- kafka
container_name: kafka3
ports:
- "9096:9096"
environment:
KAFKA_NODE_ID: 2
KAFKA_PROCESS_ROLES: broker,controller
KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
KAFKA_LISTENERS: INTERNAL://kafka3:9092,CONTROLLER://kafka3:9093,EXTERNAL://0.0.0.0:9096
KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka3:9092,EXTERNAL://localhost:9096
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
KAFKA_CONTROLLER_QUORUM_VOTERS: 0@kafka1:9093,1@kafka2:9093,2@kafka3:9093
KAFKA_CLUSTER_ID: 'abcdefghijklmnopqrstuv'
KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
EOF
Now, let’s start the local setup:
docker compose up -d
Kong Konnect
If you don’t have a Konnect account, you can get started quickly with our onboarding wizard.
The following Konnect items are required to complete this tutorial:
- Personal access token (PAT): Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
Set the personal access token as an environment variable:
export KONNECT_TOKEN='YOUR KONNECT TOKEN'
Kong Event Gateway running
Run the quickstart script to automatically provision a demo Kong Gateway control plane and data plane, and configure your environment:
curl -Ls https://get.konghq.com/event-gateway | bash -s -- -k $KONNECT_TOKEN -N kafka_event_gateway
This sets up a Kong Gateway control plane named event-gateway-quickstart, provisions a local data plane, and prints the following environment variable export:
export EVENT_GATEWAY_ID=your-gateway-id
Copy and paste the command with your Event Gateway ID into your terminal to configure your session.
This quickstart script is meant for demo purposes only, so it runs locally with mostly default parameters and a small number of exposed ports. If you want to run Kong Gateway as part of a production-ready platform, set up your control plane and data planes through the Konnect UI, or using Terraform.
Create an auth server in Kong Identity
Before you can configure the authentication plugin, you must first create an auth server in Kong Identity. We recommend creating separate auth servers for different environments or subsidiaries. The auth server name is unique per organization and Konnect region.
Create an auth server using the /v1/auth-servers endpoint:
curl -X POST "https://us.api.konghq.com/v1/auth-servers" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
-H "Content-Type: application/json" \
--json '{
"name": "Appointments Dev",
"audience": "http://myhttpbin.dev",
"description": "Auth server for the Appointment dev environment"
}'
Export the auth server ID and issuer URL:
export AUTH_SERVER_ID='YOUR-AUTH-SERVER-ID'
export ISSUER_URL='YOUR-ISSUER-URL'
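Rather than copying these values by hand, you can capture the creation response and extract the fields with jq. The response below is a stand-in for illustration: the id and issuer field names are assumptions about the response shape, not verified output, so check them against what the API actually returns:

```shell
# Stand-in for the JSON returned by the auth-server creation call above.
RESPONSE='{"id":"11111111-2222-3333-4444-555555555555","issuer":"https://us.auth.konghq.com/11111111","name":"Appointments Dev"}'

# Pull out the two values the rest of the tutorial needs.
export AUTH_SERVER_ID=$(echo "$RESPONSE" | jq -r '.id')
export ISSUER_URL=$(echo "$RESPONSE" | jq -r '.issuer')

echo "$AUTH_SERVER_ID"
echo "$ISSUER_URL"
```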
Configure the auth server with scopes
Configure a scope in your auth server using the /v1/auth-servers/$AUTH_SERVER_ID/scopes endpoint:
SCOPE_ID=$(curl -X POST "https://us.api.konghq.com/v1/auth-servers/$AUTH_SERVER_ID/scopes" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
-H "Content-Type: application/json" \
--json '{
"name": "my-scope",
"description": "Scope to test Kong Identity",
"default": false,
"include_in_metadata": false,
"enabled": true
}' | jq -r ".id"
)
Configure the auth server with custom claims
Configure a custom claim using the /v1/auth-servers/$AUTH_SERVER_ID/claims endpoint:
curl -X POST "https://us.api.konghq.com/v1/auth-servers/$AUTH_SERVER_ID/claims" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
-H "Content-Type: application/json" \
--json '{
"name": "test-claim",
"value": "test",
"include_in_token": true,
"include_in_all_scopes": false,
"include_in_scopes": [
"'$SCOPE_ID'"
],
"enabled": true
}'
You can also configure dynamic custom claims with dynamic claim templating to generate claims during runtime.
Create a client in the auth server
The client is the machine-to-machine credential. In this tutorial, Konnect will autogenerate the client ID and secret, but you can alternatively specify one yourself.
Configure the client using the /v1/auth-servers/$AUTH_SERVER_ID/clients endpoint:
curl -X POST "https://us.api.konghq.com/v1/auth-servers/$AUTH_SERVER_ID/clients" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
-H "Content-Type: application/json" \
--json '{
"name": "Client",
"grant_types": [
"client_credentials"
],
"allow_all_scopes": false,
"allow_scopes": [
"'$SCOPE_ID'"
],
"access_token_duration": 3600,
"id_token_duration": 3600,
"response_types": [
"id_token",
"token"
]
}'
Export your client secret and client ID:
export CLIENT_SECRET='YOUR-CLIENT-SECRET'
export CLIENT_ID='YOUR-CLIENT-ID'
Add a backend cluster
Run the following command to create a new backend cluster:
BACKEND_CLUSTER_ID=$(curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/backend-clusters" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"name": "default_backend_cluster",
"bootstrap_servers": [
"kafka1:9092",
"kafka2:9092",
"kafka3:9092"
],
"insecure_allow_anonymous_virtual_cluster_auth": true,
"authentication": {
"type": "anonymous"
},
"tls": {
"enabled": false
}
}' | jq -r ".id"
)
Add a virtual cluster
Run the following command to create a new virtual cluster associated with our backend cluster:
VIRTUAL_CLUSTER_ID=$(curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/virtual-clusters" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"name": "example_virtual_cluster",
"destination": {
"id": "'$BACKEND_CLUSTER_ID'"
},
"dns_label": "vcluster-1",
"acl_mode": "enforce_on_gateway",
"authentication": [
{
"type": "anonymous"
},
{
"type": "oauth_bearer",
"mediation": "terminate",
"jwks": {
"endpoint": "'$ISSUER_URL'/.well-known/jwks"
}
}
]
}' | jq -r ".id"
)
Notice that the cluster will accept both the anonymous and OAuth authentication methods. We’ll restrict access later using ACLs.
Add a listener
A listener represents a hostname-port or IP-port combination that accepts TCP connections. In this example, we’re going to use port mapping, so we need to expose a range of ports.
Run the following command to create a new listener:
LISTENER_ID=$(curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/listeners" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"name": "example_listener",
"addresses": [
"0.0.0.0"
],
"ports": [
"19092-19095"
]
}' | jq -r ".id"
)
Add a listener policy
The listener needs a policy that tells it how to process requests. In this example, we’re going to use the Forward to Virtual Cluster policy, which forwards requests to our virtual cluster based on a defined mapping.
Run the following command to add the listener policy:
curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/listeners/$LISTENER_ID/policies" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"type": "forward_to_virtual_cluster",
"name": "forward",
"config": {
"type": "port_mapping",
"advertised_host": "localhost",
"destination": {
"id": "'$VIRTUAL_CLUSTER_ID'"
}
}
}'
For demo purposes, we’re using port mapping, which assigns each Kafka broker to a dedicated port on the Event Gateway. In production, we recommend using SNI routing instead.
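As a sanity check on the listener’s range: one plausible reading of 19092-19095 is one bootstrap port plus one dedicated port per broker, which for a three-broker cluster needs four consecutive ports. The exact per-broker assignment is made by the gateway, so treat this arithmetic as an illustration rather than the gateway’s documented scheme:

```shell
BASE=19092   # first port in the listener range; also the bootstrap address clients use
BROKERS=3

# one bootstrap port plus one dedicated port per broker
LAST=$((BASE + BROKERS))

echo "exposed range: $BASE-$LAST"   # prints: exposed range: 19092-19095
```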
Create an ACL policy for the client
Add the ACL policy for the client:
curl -X POST "https://us.api.konghq.com/v1/event-gateways/$EVENT_GATEWAY_ID/virtual-clusters/$VIRTUAL_CLUSTER_ID/cluster-policies" \
--no-progress-meter --fail-with-body \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"type": "acls",
"name": "acl_policy",
"condition": "context.auth.principal.name == \"'$CLIENT_ID'\"",
"config": {
"rules": [
{
"resource_type": "topic",
"action": "allow",
"operations": [
{
"name": "describe"
},
{
"name": "describe_configs"
},
{
"name": "read"
},
{
"name": "write"
}
],
"resource_names": [
{
"match": "*"
}
]
}
]
}
}'
This ACL policy grants full topic access to the client with the matching client ID.
Set up kafkactl to use OAuth
This step requires kafkactl version >= 5.17.0. To check your version, run kafkactl version.
kafkactl will generate tokens using a script. Let’s create the script:
cat <<EOF > get-oauth-token.sh
#!/bin/bash
curl -s --fail -X POST "$ISSUER_URL/oauth/token" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=client_credentials" \
-d "client_id=$CLIENT_ID" \
-d "client_secret=$CLIENT_SECRET" \
-d "scope=my-scope" | jq -r '{"token": .access_token}'
EOF
chmod u+x get-oauth-token.sh
Note that this script is for demo purposes only and hard-codes the client ID, client secret, and scope. For production, we recommend securing sensitive data.
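The jq filter at the end of get-oauth-token.sh reshapes the token endpoint’s response into a single-field object, presumably the shape kafkactl’s generic token provider reads. You can see the transformation in isolation with a stub response (the access_token value here is a placeholder):

```shell
# Stub of an OAuth token endpoint response; real responses also carry
# token_type, expires_in, and so on.
echo '{"access_token":"example-token","token_type":"bearer","expires_in":3600}' \
  | jq -r '{"token": .access_token}'
# prints a JSON object with a single "token" field holding the access token
```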
Next, create a kafkactl configuration with both unauthenticated and authenticated contexts:
cat <<EOF > kafkactl.yaml
contexts:
direct:
brokers:
- localhost:9095
- localhost:9096
- localhost:9094
vc:
brokers:
- localhost:19092
vc-oauth:
sasl:
enabled: true
mechanism: oauth
tokenprovider:
plugin: generic
options:
script: ./get-oauth-token.sh
args: []
brokers:
- localhost:19092
EOF
Validate
Run through the following commands to validate your configuration.
Access topics with auth
Create a topic bypassing the gateway:
kafkactl -C kafkactl.yaml --context direct create topic my-test-topic
List topics using an authenticated client:
kafkactl -C kafkactl.yaml --context vc-oauth list topics
The output should look like this:
TOPIC PARTITIONS REPLICATION FACTOR
my-test-topic 1 1
Access topics without auth
Now try listing topics without auth:
kafkactl -C kafkactl.yaml --context vc list topics
The output should be an empty list:
TOPIC PARTITIONS REPLICATION FACTOR
As you can see, when using OAuth we can retrieve the topic. When using anonymous access, however, the topic isn’t visible because the anonymous user doesn’t have the appropriate ACLs.
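If you want to confirm which claims ended up in the access token (for example the test-claim configured earlier), you can decode the JWT payload. The token below is fabricated purely to show the decoding step; with a real token you would substitute TOKEN=$(./get-oauth-token.sh | jq -r '.token'). Real JWTs use unpadded base64url rather than plain base64, so a production-grade decoder also needs a tr '_-' '/+' translation plus padding fix-up:

```shell
# Fabricated token for illustration; a real one comes from get-oauth-token.sh.
PAYLOAD=$(printf '{"iss":"https://example.invalid","scope":"my-scope","test-claim":"test"}' | base64 | tr -d '\n')
TOKEN="header.$PAYLOAD.signature"

# A JWT is three dot-separated segments; the middle one carries the claims.
echo "$TOKEN" | cut -d. -f2 | base64 -d | jq .
```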
Cleanup
Clean up Kong Event Gateway resources
When you’re done experimenting with this example, clean up the resources:
- If you created a new Event Gateway control plane and want to conserve your free trial credits or avoid unnecessary charges, delete the control plane used in this tutorial.
- Stop and remove the containers:
docker compose down
This will stop all services and remove the containers, but preserve your configuration files for future use.