Exclude broken objects with fallback configuration
Enable the FallbackConfiguration feature gate for Kong Ingress Controller
Prerequisites
Kong Konnect
If you don’t have a Konnect account, you can get started quickly with our onboarding wizard.
- The following Konnect items are required to complete this tutorial:
    - Personal access token (PAT): Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
 
- 
    Set the personal access token as an environment variable:

export KONNECT_TOKEN='YOUR KONNECT TOKEN'
Enable the Gateway API
- 
    Install the Gateway API CRDs before installing Kong Ingress Controller:

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml
- 
    Create a Gateway and GatewayClass instance to use:
echo "
apiVersion: v1
kind: Namespace
metadata:
  name: kong
---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: kong
  annotations:
    konghq.com/gatewayclass-unmanaged: 'true'
spec:
  controllerName: konghq.com/kic-gateway-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: kong
spec:
  gatewayClassName: kong
  listeners:
  - name: proxy
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
         from: All
" | kubectl apply -n kong -f -
Create a KIC Control Plane
Use the Konnect API to create a new CLUSTER_TYPE_K8S_INGRESS_CONTROLLER Control Plane:
CONTROL_PLANE_DETAILS=$( curl -X POST "https://us.api.konghq.com/v2/control-planes" \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "name": "My KIC CP",
       "cluster_type": "CLUSTER_TYPE_K8S_INGRESS_CONTROLLER"
     }')
We’ll need the id and telemetry_endpoint for the values.yaml file later. Save them as environment variables:
CONTROL_PLANE_ID=$(echo $CONTROL_PLANE_DETAILS | jq -r .id)
CONTROL_PLANE_TELEMETRY=$(echo $CONTROL_PLANE_DETAILS | jq -r '.config.telemetry_endpoint | sub("https://";"")')
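If the jq filter above looks opaque, here is a small self-contained illustration of what it does, using a stand-in JSON payload (not a real Konnect response): `sub("https://";"")` strips the scheme prefix so only the hostname remains.

```shell
# Stand-in for the Konnect API response; only the shape matters here.
SAMPLE='{"id":"abc123","config":{"telemetry_endpoint":"https://example.tp.konghq.com"}}'

# Extract the telemetry endpoint and strip the "https://" prefix.
echo "$SAMPLE" | jq -r '.config.telemetry_endpoint | sub("https://";"")'
# Prints: example.tp.konghq.com
```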
Create mTLS certificates
Kong Ingress Controller talks to Konnect over a connection secured with TLS certificates.
Generate a new certificate using openssl:
openssl req -new -x509 -nodes -newkey rsa:2048 -subj "/CN=kongdp/C=US" -keyout ./tls.key -out ./tls.crt
The certificate needs to be a single-line string to send it to the Konnect API with curl. Use awk to format the certificate:
export CERT=$(awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' tls.crt);
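To see what the awk one-liner does, here is a tiny demonstration on a stand-in file (not a real certificate): it drops carriage returns and joins all non-empty lines into a single line with literal `\n` escapes, which is what the JSON payload needs.

```shell
# Create a stand-in multi-line file with Windows-style line endings.
printf 'alpha\r\nbeta\r\n' > /tmp/awk-demo.txt

# Same awk program as above: strip \r, emit each line followed by a literal \n.
awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' /tmp/awk-demo.txt
# Prints: alpha\nbeta\n   (one line, with literal backslash-n separators)
```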
Next, upload the certificate to Konnect:
 curl -X POST "https://us.api.konghq.com/v2/control-planes/$CONTROL_PLANE_ID/dp-client-certificates" \
     -H "Authorization: Bearer $KONNECT_TOKEN" \
     --json '{
       "cert": "'$CERT'"
     }'
Finally, store the certificate in a Kubernetes secret so that Kong Ingress Controller can read it:
kubectl create namespace kong -o yaml --dry-run=client | kubectl apply -f -
kubectl create secret tls konnect-client-tls -n kong --cert=./tls.crt --key=./tls.key
Kong Ingress Controller running (attached to Konnect)
- 
    Add the Kong Helm charts:

helm repo add kong https://charts.konghq.com
helm repo update
- 
    Create a values.yaml file:

cat <<EOF > values.yaml
controller:
  ingressController:
    image:
      tag: "3.5"
    env:
      feature_gates: "FillIDs=true"
    konnect:
      license:
        enabled: true
      enabled: true
      controlPlaneID: "$CONTROL_PLANE_ID"
      tlsClientCertSecretName: konnect-client-tls
      apiHostname: "us.kic.api.konghq.com"
gateway:
  image:
    repository: kong
    tag: "3.9.1"
  env:
    konnect_mode: 'on'
    vitals: "off"
    cluster_mtls: pki
    cluster_telemetry_endpoint: "$CONTROL_PLANE_TELEMETRY:443"
    cluster_telemetry_server_name: "$CONTROL_PLANE_TELEMETRY"
    cluster_cert: /etc/secrets/konnect-client-tls/tls.crt
    cluster_cert_key: /etc/secrets/konnect-client-tls/tls.key
    lua_ssl_trusted_certificate: system
    proxy_access_log: "off"
    dns_stale_ttl: "3600"
  secretVolumes:
    - konnect-client-tls
EOF
- 
    Install Kong Ingress Controller using Helm:

helm install kong kong/ingress -n kong --create-namespace \
  --set controller.ingressController.env.feature_gates="FallbackConfiguration=true" \
  --set controller.ingressController.env.dump_config=true \
  --values ./values.yaml
- 
    Set $PROXY_IP as an environment variable for future commands:

export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{range .status.loadBalancer.ingress[0]}{@.ip}{@.hostname}{end}')
echo $PROXY_IP
Kong Ingress Controller running
- 
    Add the Kong Helm charts:

helm repo add kong https://charts.konghq.com
helm repo update
- 
    Install Kong Ingress Controller using Helm:

helm install kong kong/ingress -n kong --create-namespace \
  --set controller.ingressController.env.feature_gates="FallbackConfiguration=true" \
  --set controller.ingressController.env.dump_config=true
- 
    Set $PROXY_IP as an environment variable for future commands:

export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{range .status.loadBalancer.ingress[0]}{@.ip}{@.hostname}{end}')
echo $PROXY_IP
Required Kubernetes resources
This how-to requires some Kubernetes services to be available in your cluster. These services will be used by the resources created in this how-to.
kubectl apply -f https://developer.konghq.com/manifests/kic/echo-service.yaml -n kong
Scenario
In this example, we’ll consider a situation where:
- We have two Routes pointing to the same Service. One Route is configured with KongPlugins providing authentication and base rate limiting. Everything works as expected.
- We add one more rate limiting KongPlugin that is associated with the second Route and a specific KongConsumer, so that the Consumer can be rate limited differently from the base rate limiting. But we forget to associate the KongConsumer with the KongPlugin. This results in the Route being broken because of duplicated rate limiting plugins.
Configure plugins
This how-to requires three plugins to demonstrate how fallback configuration works.
- 
    As the example uses a Consumer, we need to create an authentication plugin to identify the incoming request:

echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: key-auth
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: kong
plugin: key-auth
" | kubectl apply -f -
- 
    Unidentified traffic has a base rate limit of one request per second:

echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-base
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: kong
plugin: rate-limiting
config:
  second: 1
  policy: local
" | kubectl apply -f -
- 
    Identified Consumers have a rate limit of five requests per second:

echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-consumer
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: kong
plugin: rate-limiting
config:
  second: 5
  policy: local
" | kubectl apply -f -
Create Routes
Let’s create two Routes for testing purposes:
- route-a has no plugins attached
- route-b has the three plugins created above attached
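The HTTPRoute manifests for these Routes are not included in this excerpt. A minimal sketch of what they could look like, assuming the echo Service from the prerequisites listens on port 1027 (the port used in the patch step later) and the Gateway created earlier is named kong; field values here are illustrative, not authoritative:

```shell
# Hypothetical HTTPRoute manifests: route-a is plain, route-b attaches the
# three KongPlugins created above via the konghq.com/plugins annotation.
echo "
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: route-a
  namespace: kong
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /route-a
    backendRefs:
    - name: echo
      port: 1027
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: route-b
  namespace: kong
  annotations:
    konghq.com/plugins: key-auth,rate-limit-base,rate-limit-consumer
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /route-b
    backendRefs:
    - name: echo
      port: 1027
" | kubectl apply -f -
```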
Create a Consumer
Finally, let’s create a KongConsumer with credentials and associate it with the rate-limit-consumer KongPlugin.
Create a Secret containing the key-auth credential:
echo 'apiVersion: v1
kind: Secret
metadata:
  name: bob-key-auth
  namespace: kong
  labels:
    konghq.com/credential: key-auth
stringData:
  key: bob-password
' | kubectl apply -f -
Then create a KongConsumer that references this Secret:
echo "
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: bob
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: rate-limit-consumer
username: bob
credentials:
- bob-key-auth
" | kubectl apply -f -
Validate the Routes
At this point we can validate that our Routes are working as expected.
Route A
route-a is accessible without any authentication and will return an HTTP 200:
curl "$PROXY_IP/route-a"
The results should look like this:
Welcome, you are connected to node orbstack.
Running on Pod echo-74c66b778-szf8f.
In namespace default.
With IP address 192.168.194.13.
Route B
Authenticated requests with a valid apikey header on route-b should be accepted:
curl "$PROXY_IP/route-b" \
    -H "apikey:bob-password"
The results should look like this:
Welcome, you are connected to node orbstack.
Running on Pod echo-74c66b778-szf8f.
In namespace default.
With IP address 192.168.194.13.
Requests without the apikey header should be rejected:
curl "$PROXY_IP/route-b"
The results should look like this:
{
  "message":"No API key found in request",
  "request_id":"520c396c6c32b0400f7c33531b7f9b2c"
}
Make a breaking change
Now, let’s simulate a situation where we introduce a breaking change to the configuration. We’ll remove the rate-limit-consumer KongPlugin from the KongConsumer, so that route-b now has two rate limiting plugins associated with it, which is an invalid Kong Gateway configuration:
kubectl annotate -n kong kongconsumer bob konghq.com/plugins-
Verify the broken route was excluded
This causes route-b to break, as there are now two KongPlugins of the same type (rate-limiting) attached to it. We expect route-b to be excluded from the configuration.
Let’s verify this:
curl "$PROXY_IP/route-b"
The results should look like this:
{
  "message":"no Route matched with those values",
  "request_id":"209a6b14781179103528093188ed4008"
}
Inspecting diagnostic endpoints
The Route isn’t configured because the Fallback Configuration mechanism is excluding the broken HTTPRoute.
We can verify this by inspecting the diagnostic endpoint:
kubectl port-forward -n kong deploy/kong-controller 10256 &
sleep 0.5; curl localhost:10256/debug/config/fallback | jq
The results should look like this:
{
  "status": "triggered",
  "brokenObjects": [
    {
      "group": "configuration.konghq.com",
      "kind": "KongPlugin",
      "namespace": "default",
      "name": "rate-limit-consumer",
      "id": "7167315d-58f5-4aea-8aa5-a9d989f33a49"
    }
  ],
  "excludedObjects": [
    {
      "group": "configuration.konghq.com",
      "kind": "KongPlugin",
      "version": "v1",
      "namespace": "default",
      "name": "rate-limit-consumer",
      "id": "7167315d-58f5-4aea-8aa5-a9d989f33a49",
      "causingObjects": [
        "configuration.konghq.com/KongPlugin:default/rate-limit-consumer"
      ]
    },
    {
      "group": "gateway.networking.k8s.io",
      "kind": "HTTPRoute",
      "version": "v1",
      "namespace": "default",
      "name": "route-b",
      "id": "fc82aa3d-512c-42f2-b7c3-e6f0069fcc94",
      "causingObjects": [
        "configuration.konghq.com/KongPlugin:default/rate-limit-consumer"
      ]
    }
  ]
}
Verify the working Route is still operational and can be updated
We can also ensure the other HTTPRoute is still working:
curl "$PROXY_IP/route-a"
The results should look like this:
Welcome, you are connected to node orbstack.
Running on Pod echo-74c66b778-szf8f.
In namespace default.
With IP address 192.168.194.13.
What’s more, we’re still able to update the correct HTTPRoute without any issues. Let’s modify route-a’s path:
kubectl patch -n kong httproute route-a --type merge -p '{"spec":{"rules":[{"matches":[{"path":{"type":"PathPrefix","value":"/route-a-modified"}}],"backendRefs":[{"name":"echo","port":1027}]}]}}'
Let’s verify the updated HTTPRoute:
curl "$PROXY_IP/route-a-modified"
The results should look like this:
Welcome, you are connected to node orbstack.
Running on Pod echo-74c66b778-szf8f.
In namespace default.
With IP address 192.168.194.13.
The Fallback Configuration mechanism has successfully isolated the broken HTTPRoute and allowed the correct one to be updated.
Cleanup
Delete created Kubernetes resources
kubectl delete -n kong -f https://developer.konghq.com/manifests/kic/echo-service.yaml
Uninstall KIC from your cluster
helm uninstall kong -n kong