Kong Event Gateway is a proxy that speaks the native Kafka protocol. Kafka clients connect to the proxy as if it were part of a regular Kafka cluster. This lets you productize your Kafka cluster for clients inside and outside of your business.
Kong Event Gateway architecture
How it works
Event Gateway uses a hybrid deployment model, separating the control plane from the data plane.
- Control plane (Konnect): The control plane (CP) is fully managed by Kong within the Konnect platform. It provides a centralized UI and API to manage backend clusters, virtual clusters, listeners, and policies. The control plane generates data plane certificates and pushes configuration updates to the proxy nodes. It never sees the actual Kafka message payloads.
- Data plane (self-managed): The data plane (DP) consists of stateless proxy nodes running in your own environment. These nodes intercept Kafka client traffic, evaluate it against the policies pushed by the control plane, and proxy the allowed traffic to the backend Kafka brokers.
Periodically, the data plane polls the control plane for configuration updates.
Depending on the type of configuration update, the connection between the Kafka client and the backend can be affected:
- Updates to virtual cluster policies don’t cause a connection drop. Policies reload dynamically and take effect on the next request.
- Updates to any other part of the configuration (for example, listener policies, auth, or namespaces in virtual clusters) cause a connection drop. When the data plane receives configuration updates, it restarts the proxy services.
Kafka clients are designed to handle short-lived connection drops.
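Because these drops are brief, a Kafka client's standard reconnect and retry settings are usually enough to ride them out. The following is a minimal sketch of a Java producer with those settings made explicit; the bootstrap address and topic name are placeholders, and the values are illustrative rather than Kong-recommended:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ResilientProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder: your Event Gateway bootstrap address.
        props.put("bootstrap.servers", "bootstrap.my-event-gateway.acme:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Standard client settings that let the producer ride out a short
        // connection drop while the data plane restarts its proxy services.
        props.put("reconnect.backoff.ms", "100");       // initial wait before reconnecting
        props.put("reconnect.backoff.max.ms", "5000");  // cap on the exponential backoff
        props.put("retries", Integer.toString(Integer.MAX_VALUE));
        props.put("delivery.timeout.ms", "120000");     // overall retry budget per record

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", "key", "value"));
        }
    }
}
```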
The following diagram illustrates the high-level architecture:
```mermaid
flowchart TB
    subgraph Konnect ["Konnect (Kong-managed cloud)"]
        CP["Event Gateway control plane"]
    end
    CP -- "DP pulls config from CP" --> Customer
    subgraph Customer ["Self-managed (on-prem or cloud)"]
        direction LR
        KafkaClient["Kafka Client<br>producer + consumer<br>(e.g. Java, Python, Go app)"]
        subgraph EGW ["Event Gateway data plane"]
            Analytics["Virtual cluster: analytics<br>policies: e.g. ACL, filter"]
            Payments["Virtual cluster: payments<br>policies: e.g. ACL, Schema, Filter"]
        end
        BackendKafka["Backend Kafka cluster"]
        KafkaClient <--> EGW <--> BackendKafka
        OB["Observability system<br>metrics & logs"]
        EGW -- "OTEL exporter" --> OB
    end
    style Konnect stroke-dasharray:3
    style Customer stroke-dasharray:3
```
Figure 1: The control plane (CP) is fully managed in Konnect. The self-managed data plane (DP) polls the CP for configuration and receives the latest config in response. The DP proxies Kafka client traffic through virtual clusters to backend Kafka clusters, and exports metrics and logs to an observability system via OpenTelemetry.
Event Gateway entities
In Event Gateway, an entity is a component or object that makes up the Event Gateway and its ecosystem. Entities represent the various building blocks used to configure and manage Event Gateway, and each entity has a specific role. Configuration for entities running on the data plane is stored in the control plane.
Event Gateway’s workflow is composed of the following core entities:
| Entity | Description | References |
|---|---|---|
| Listener | Listeners represent the IP/TCP port combinations at which the gateway listens for connections from clients. A listener can have policies that enforce TLS certificates and perform SNI routing. The listener runs at Layer 4 of the network stack. | |
| Backend cluster | The target Kafka clusters proxied by the gateway are called backend clusters. Backend clusters are similar to gateway services in Kong API Gateway. The Konnect backend cluster entity abstracts the connection details of the actual physical Kafka cluster running in your environment. The same gateway can proxy multiple backend clusters. Event Gateway control planes store information about how to authenticate to backend clusters, whether or not to verify the cluster's TLS certificates, and how often to fetch metadata from the cluster. | |
| Virtual cluster | Virtual clusters expose a modified view of the backend cluster. From the client's perspective, the virtual cluster is a real Kafka cluster. Virtual clusters are similar to routes in Kong API Gateway, but there are no HTTP semantics on a virtual cluster. The gateway admin can define policies on virtual clusters that, for example, control which topics are exposed to which clients or what actions can be taken on the backend cluster. Currently, a virtual cluster is associated with exactly one backend cluster, so it can't aggregate data from multiple backend clusters. | |
| Policy | Policies control how Kafka protocol traffic is modified between the client and the backend cluster. There are two main types of policies: listener policies and virtual cluster policies. | |
Hostname mapping
When a Kafka client connects to the Event Gateway proxy, the proxy acts as the Kafka bootstrap server. The bootstrap server informs the Kafka client about all the brokers in the cluster, and the client then balances requests across those brokers.
To proxy the backend cluster, Event Gateway receives the hostname metadata from the backend cluster and maps each hostname from the cluster to a hostname that it serves. There are two ways to do this: port mapping, or using TLS with SNI. You configure both options on a listener policy.
For example, let's say that there are three brokers in the cluster: kafka1, kafka2, and kafka3.
Each broker exposes port 9092, and the proxy is listening on the IP 10.0.0.1.
The proxy exposes a distinct address for each host in the cluster.
Depending on your requirements, you can expose the brokers through the proxy in one of the following ways: with port mapping or with SNI mapping.
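One way to see this mapping in action is to request cluster metadata through the gateway: the node list in the response contains the addresses the proxy advertises, not the backend brokers. A minimal sketch using the standard Kafka AdminClient, with the placeholder proxy address from the example above:

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.Node;

public class DescribeBrokers {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder: the proxy's bootstrap address.
        props.put("bootstrap.servers", "10.0.0.1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // The proxy answers the metadata request with the hostnames and
            // ports it serves, one per backend broker.
            for (Node node : admin.describeCluster().nodes().get()) {
                System.out.printf("broker %d -> %s:%d%n", node.id(), node.host(), node.port());
            }
        }
    }
}
```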
Port mapping
Let’s use an example where the proxy exposes the following ports:
10.0.0.1:9092 → kafka1:9092 (bootstrap port)
10.0.0.1:9093 → kafka1:9092
10.0.0.1:9094 → kafka2:9092
10.0.0.1:9095 → kafka3:9092
Kafka clients are meant to be configured with only the bootstrap address. Port mapping is easier for getting started, but we don't recommend it in production because it's less flexible.
For an example configuration, see Forward via port mapping.
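In practice, this means the client is configured with just the bootstrap address and discovers the remaining ports from the proxy's metadata response. A minimal consumer sketch, assuming a plaintext listener; the topic and group names are placeholders:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PortMappedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Only the bootstrap port is configured; the per-broker ports
        // (9093-9095) are discovered from the proxy's metadata response.
        props.put("bootstrap.servers", "10.0.0.1:9092");
        props.put("group.id", "example-group"); // placeholder group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("example-topic")); // placeholder topic
            consumer.poll(Duration.ofSeconds(5));
        }
    }
}
```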
SNI mapping
The proxy exposes multiple hostnames using SNI. This lets you expose multiple servers on the same port. Using our example ports, the mapping looks like this:
bootstrap.my-event-gateway.acme:9092 → kafka1:9092 (bootstrap hostname)
broker-1.my-event-gateway.acme:9092 → kafka1:9092
broker-2.my-event-gateway.acme:9092 → kafka2:9092
broker-3.my-event-gateway.acme:9092 → kafka3:9092
Kafka clients are meant to be configured only with a bootstrap hostname. We recommend this method for production.
You must provide a TLS certificate for every host exposed on the Event Gateway. This can be done through a certificate with a wildcard SAN, a single certificate with multiple SANs, or multiple certificates in the same bundle.
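Because SNI routing happens during the TLS handshake, the client connects over TLS and needs only the bootstrap hostname; the per-broker hostnames are discovered from metadata and routed on the same port. A minimal sketch, assuming a truststore that contains the CA for the gateway's certificates (paths, passwords, and names are placeholders):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SniConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Only the bootstrap hostname is configured; the broker-1/2/3
        // hostnames are discovered via metadata and routed by SNI.
        props.put("bootstrap.servers", "bootstrap.my-event-gateway.acme:9092");
        props.put("security.protocol", "SSL");
        // Placeholder truststore containing the CA that signed the
        // gateway's certificates.
        props.put("ssl.truststore.location", "/path/to/truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        props.put("group.id", "example-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("example-topic"));
            consumer.poll(Duration.ofSeconds(5));
        }
    }
}
```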
Shared suffix v1.1+
Alternatively, you can set broker_host_format.type to shared_suffix in the listener policy so that you can use one wildcard SAN for all virtual clusters. In this case, the mapping looks like this:
bootstrap-my-event-gateway.acme:9092 → kafka1:9092 (bootstrap hostname)
broker-1-my-event-gateway.acme:9092 → kafka1:9092
broker-2-my-event-gateway.acme:9092 → kafka2:9092
broker-3-my-event-gateway.acme:9092 → kafka3:9092
In all cases, the client must also be able to resolve the hostnames to the IP address of the gateway.
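As a quick sanity check before pointing clients at the gateway, you can confirm that every advertised hostname resolves to the gateway's IP address. A short sketch using the shared-suffix hostnames from the example above:

```java
import java.net.InetAddress;

public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder hostnames from the shared-suffix example above.
        String[] hosts = {
            "bootstrap-my-event-gateway.acme",
            "broker-1-my-event-gateway.acme",
            "broker-2-my-event-gateway.acme",
            "broker-3-my-event-gateway.acme",
        };
        for (String host : hosts) {
            // Each name must resolve to the gateway's IP, e.g. via a
            // wildcard DNS record or explicit host entries.
            System.out.printf("%s -> %s%n", host, InetAddress.getByName(host).getHostAddress());
        }
    }
}
```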
For example configurations, see: