Kong Gateway 3.10 to 3.14 LTS upgrade

Uses: Kong Gateway

Kong Gateway supports direct upgrades between long-term support (LTS) versions of Kong Gateway Enterprise. This guide walks you through upgrading from Kong Gateway Enterprise 3.10 LTS to Kong Gateway Enterprise 3.14 LTS.

We recommend upgrading directly to the latest patch version of the LTS release for the most stability. Earlier patch versions may have known issues.

There are three upgrade strategies available when upgrading from an LTS version to a newer LTS version: in-place, dual-cluster, and rolling. This guide describes the best applicable strategy for each deployment mode that Kong Gateway supports. Additionally, it lists some fundamental factors that play important roles in the upgrade process, and explains how to back up and recover data.

This guide uses the following terms in the context of Kong Gateway:

  • Upgrade: The overall process of switching from an older to a newer version of Kong Gateway.
  • Migration: The process of moving your data store's contents into a new environment. For example, moving data from an old PostgreSQL instance to a new one is referred to as database migration.

To make sure your upgrade is successful, carefully review all the steps in this guide. It’s very important to understand all the preparation steps and choose the recommended upgrade path based on your deployment type.

Caution: The migration pattern described in this document can only happen between two LTS versions, Kong Gateway Enterprise 3.10 LTS and Kong Gateway Enterprise 3.14 LTS. If you apply this document to other release intervals, database modifications may be run in the wrong sequence and leave the database schema in a broken state.

Prerequisites

Read this document thoroughly to successfully complete the upgrade process, as it includes all the necessary operational knowledge for the upgrade.

Upgrade journey overview

Preparation phase

There are a number of steps you must complete before upgrading to Kong Gateway 3.14 LTS:

  1. Work through any listed prerequisites.
  2. Back up your database or your declarative configuration files.
  3. Choose the right strategy for upgrading based on your deployment topology.
  4. Review the Kong Gateway changes from 3.10 to 3.14 for any breaking changes that may affect your deployments.
  5. Using your chosen strategy, test migration in a pre-production environment.

Performing the upgrade

The actual execution of the upgrade depends on the type of deployment you have with Kong Gateway. In this part of the upgrade journey, you will use the strategy you determined during the preparation phase.

  1. Execute your chosen upgrade strategy on dev.
  2. Move from dev to prod.
  3. Smoke test.
  4. Finish the upgrade or roll back and try again.

Now, let’s move on to preparation, starting with your backup options.

Preparation: Choose a backup strategy

Always back up your database or declarative configuration files before an upgrade. The kong migrations commands used during upgrade and database migration are not reversible.

There are two main types of backup for Kong Gateway entities:

  • Database backup: PostgreSQL has native exporting and importing tools that are reliable and performant, and that ensure consistency when backing up or restoring data. If you’re running Kong Gateway in Traditional or Hybrid mode, you should always take a database-native backup.
  • Declarative backup: Kong ships two declarative backup tools: decK and the Kong CLI, which support managing Kong Gateway entities in the declarative format. For Traditional and Hybrid mode deployments, use these tools to create secondary backups. For DB-less mode deployments, use the Kong CLI and manually manage declarative configuration files.

We highly recommend backing up your data using both methods if possible, as this offers you recovery flexibility.

The database-native tools are robust and can restore data much faster than the declarative tools. In case of data corruption, try a database-level restore first. Otherwise, bootstrap a new database and use the declarative tools to restore the configuration from backup files.
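As a sketch, taking both kinds of backup might look like the following. The hostnames, credentials, and file names are placeholders for your own environment:

```sh
# Hypothetical backup sketch; hostnames, credentials, and file names are
# placeholders for your own environment.

# 1. Database-native backup with pg_dump (custom format, restorable later
#    with pg_restore):
pg_dump --format=custom --host=pg.example.internal --username=kong \
  --dbname=kong --file=kong-3.10-backup.dump

# 2. Declarative backup of Kong Gateway entities with decK:
deck gateway dump --kong-addr http://localhost:8001 \
  --output-file kong-3.10-backup.yaml
```

Store both artifacts somewhere outside the cluster you are about to upgrade, so a failed migration can't take the backups down with it.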

Review the Backup and restore guide to prepare backups of your configuration. If you run into any issues and need to roll back, you can also reference that guide to restore your old data store.

Preparation: Choose an upgrade strategy based on deployment mode

Here’s a flowchart that breaks down how the decision process works:

 
flowchart TD
    A{Deployment type?} --> B(Traditional mode)
    A{Deployment type?} --> C(Hybrid mode)
    A{Deployment type?} --> D(DB-less mode)
    A{Deployment type?} --> E(Konnect DP)
    B ---> F{Enough hardware to 
    run another cluster?}
    C --> G(Upgrade CP first) & H(Upgrade DP second)
    D ----> K([Rolling upgrade])
    E ----> K
    G --> F
    F ---Yes--->I([Dual-cluster upgrade])
    F ---No--->J([In-place upgrade])
    H ---> K
    click K "/gateway/upgrade/rolling/"
    click I "/gateway/upgrade/dual-cluster/"
    click J "/gateway/upgrade/in-place/"
  

Figure 1: Choose an upgrade strategy based on your deployment type. For Traditional mode, choose a dual-cluster upgrade if you have enough resources, or an in-place upgrade if you don’t have enough resources. For DB-less mode and Konnect DPs, use a rolling upgrade. For Hybrid mode, use one of the Traditional mode strategies for CPs, and the rolling upgrade for DPs.

See the following sections for breakdowns of each strategy.

Traditional mode

A Traditional mode deployment is when all Kong Gateway components are running in one environment, and there is no Control Plane/Data Plane separation.

You have two options when upgrading Kong Gateway in Traditional mode:

  • Dual-cluster upgrade: A new Kong Gateway cluster of version Y is deployed alongside the current version X, so that two clusters serve requests concurrently during the upgrade process.
  • In-place upgrade: An in-place upgrade reuses the existing database. It requires shutting down the current cluster X first, then configuring the new cluster Y to point to the same database.

We recommend using a dual-cluster upgrade if you have the resources to run another cluster concurrently. Use the in-place method only if resources are limited, as it will cause business downtime.

Dual-cluster upgrade

Upgrading Kong Gateway from one LTS version to another LTS version with zero downtime can be achieved through a dual-cluster upgrade strategy. This approach involves setting up a new cluster running the upgraded version of Kong Gateway alongside the existing cluster running the current version.

At a high level, the process typically involves the following steps:

  1. Provisioning a same-size deployment: You need to ensure that the new cluster, which will run the upgraded version of Kong Gateway, has the same capacity and resources as the existing cluster. This ensures that both clusters can handle the same amount of traffic and workload.

  2. Setting up dual-cluster deployment: Once the new cluster is provisioned, you can start deploying your APIs and configurations to both clusters simultaneously. The dual cluster deployment allows both the old and new clusters to coexist and process requests in parallel.

  3. Data synchronization: During the dual cluster deployment, data synchronization is crucial to ensure that both clusters have the same data. This can involve migrating data from the old cluster to the new one or setting up a shared data storage solution to keep both clusters in sync. Import the database from the old cluster to the new cluster by using a snapshot or pg_restore.

  4. Traffic rerouting: As the new cluster is running alongside the old one, you can start gradually routing incoming traffic to the new cluster. This process can be done gradually or through a controlled switchover mechanism to minimize any impact on users. This can be achieved by any load balancer, like DNS, Nginx, F5, or even a Kong Gateway node with Canary plugin enabled.

  5. Testing and validation: Before performing a complete switchover to the new cluster, it is essential to thoroughly test and validate the functionality of the upgraded version. This includes testing APIs, plugins, authentication mechanisms, and other functionalities to ensure they are working as expected.

  6. Complete switchover: Once you are confident that the upgraded cluster is fully functional and stable, you can redirect all incoming traffic to the new cluster. This step completes the upgrade process and decommissions the old cluster.

By following this dual cluster deployment strategy, you can achieve a smooth and zero-downtime upgrade from one LTS version of Kong Gateway to another. This approach helps ensure high availability and uninterrupted service for your users throughout the upgrade process.
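The data synchronization and migration portion of the steps above can be sketched as follows, assuming illustrative hostnames and the backup file names used earlier (all placeholders):

```sh
# Hypothetical data-synchronization sketch for the dual-cluster strategy.
# Hostnames and file names are placeholders.

# Import a snapshot of the old cluster's database into the new cluster's
# PostgreSQL instance:
pg_restore --host=pg-new.example.internal --username=kong \
  --dbname=kong --clean kong-3.10-backup.dump

# Then, on a 3.14 node configured against the new database, bring the
# schema up to date:
kong migrations up
kong migrations finish
```

Only after the new cluster passes validation (step 5) should traffic rerouting move beyond a small canary percentage.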

In-place upgrade

While an in-place upgrade allows you to perform the upgrade on the same infrastructure, it does require some downtime during the actual upgrade process. Plan a suitable maintenance or downtime window during which you can perform the upgrade. During this period, the Kong Gateway will be temporarily unavailable.

For scenarios where zero downtime is critical, consider the dual-cluster upgrade method, keeping in mind the additional resources and complexities.
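At a high level, the in-place maintenance window might look like this sketch; run each command on the node noted in the comment:

```sh
# Hypothetical in-place upgrade sequence, performed during the planned
# maintenance window.

kong stop                # stop each 3.10 node in the existing cluster
kong migrations up       # run once from a 3.14 node pointing at the database
kong migrations finish   # finalize the schema changes
kong start               # start each 3.14 node against the same database
```

Because the migrations commands are not reversible, take your database backup before `kong migrations up`, not during the window.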

DB-less mode

In DB-less mode, each independent Kong Gateway node loads a copy of declarative Kong Gateway configuration data into memory without persistent database storage, so failure of some nodes doesn’t spread to other nodes.

Deployments in this mode should use the rolling upgrade strategy. You can validate the declarative YAML configuration against version Y using the deck gateway validate or kong config parse command.

You must back up your current kong.yaml file before starting the upgrade.
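A pre-flight check before rolling any nodes might look like this sketch, assuming your declarative file is named kong.yaml and the Admin API address is a placeholder:

```sh
# Back up the current declarative configuration first:
cp kong.yaml kong-3.10-backup.yaml

# Validate the contents against the new version. Run these with the 3.14
# decK and Kong binaries so the file is checked against the new schema.
deck gateway validate --kong-addr http://localhost:8001 kong.yaml
kong config parse kong.yaml
```

If validation fails, fix the reported entities in the file before starting any 3.14 node with it.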

Hybrid mode

Hybrid mode deployments consist of one or more Control Plane (CP) nodes, and one or more Data Plane (DP) nodes. CP nodes use a database to store Kong Gateway configuration data, whereas DP nodes don’t, since they get all of the needed information from the CP. The recommended upgrade process is a combination of different upgrade strategies for each type of node, CP or DP.

The major challenge with a Hybrid mode upgrade is the communication between the CP and DP. As Hybrid mode requires the minor version of the CP to be no less than that of the DP, you must upgrade CP nodes before DP nodes.
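This ordering constraint can be checked mechanically before you start. A minimal sketch using sort's version-aware comparison (the version numbers are examples):

```sh
# version_gte A B: succeeds when version A is greater than or equal to
# version B, so a CP at version A may manage a DP at version B.
version_gte() {
  printf '%s\n%s\n' "$2" "$1" | sort -V -C
}

if version_gte "3.14.0.1" "3.10.0.0"; then
  echo "CP version is new enough for this DP"
fi
```

This relies on GNU sort's `-V` (version sort) and `-C` (check order) options.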

The upgrade must be carried out in two phases:

  1. Upgrade the CP according to the recommendations in the section Traditional mode, while DP nodes are still serving API requests.
  2. Upgrade DP nodes using the recommendations from the section DB-less mode. Point the new DP nodes to the new CP to avoid version conflicts.

The role decoupling feature between CP and DP enables DP nodes to serve API requests while upgrading CP. With this method, there is no business downtime.
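As an illustration of phase 2, a new 3.14 DP node can be pointed at the upgraded CP using Kong's standard cluster configuration properties. The hostname and certificate paths below are placeholders:

```sh
# Hypothetical start of a new 3.14 data plane node against the upgraded
# control plane. The CP hostname and certificate paths are placeholders.
KONG_ROLE=data_plane \
KONG_DATABASE=off \
KONG_CLUSTER_CONTROL_PLANE=cp-new.example.internal:8005 \
KONG_CLUSTER_CERT=/etc/kong/cluster.crt \
KONG_CLUSTER_CERT_KEY=/etc/kong/cluster.key \
kong start
```

Once the new DP node reports healthy and receives configuration, drain and stop one old DP node, and repeat until the fleet is rolled.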

Custom plugins (either your own plugins or third-party plugins that are not shipped with Kong Gateway) need to be installed on both the Control Plane and the Data Planes in Hybrid mode. Install the plugins on the Control Plane first, and then the Data Planes.

See the following sections for a breakdown of the options for Hybrid mode deployments.

Control Planes

CP nodes must be upgraded before DP nodes. CP nodes serve an admin-only role and require database support. You can choose from the same upgrade strategies recommended for Traditional mode (dual-cluster or in-place), as described in figures 2 and 3, respectively.

Upgrading the CP nodes using the dual-cluster strategy:

 
flowchart TD
    DBA[(Current
    database)]
    DBB[(New
    database)]
    CPX(Current Control Plane X)
    Admin(No admin
    write operations)
    CPY(New Control Plane Y)
    DPX(fa:fa-layer-group Current Data Plane X nodes)
    API(API requests)

    DBA -.- CPX -."DP connects to either
    CP X...".- DPX
    Admin -.X.- CPX & CPY
    DBB --pg_restore--- CPY -."...OR to CP Y".- DPX
    API --> DPX
    style API stroke:none!important,fill:none!important
    style DBA stroke-dasharray:3
    style CPX stroke-dasharray:3
    style Admin fill:none!important,stroke:none!important,color:#d44324!important
    linkStyle 2,3 stroke:#d44324!important,color:#d44324!important

Figure 2: The diagram shows a CP upgrade using the dual-cluster strategy. The new CP Y is deployed alongside the current CP X, while current DP nodes X are still serving API requests.

Upgrading the CP nodes using the in-place strategy:

 
flowchart
    DBA[(Database)]
    CPX(Current Control Plane X
    #40;inactive#41;)
    Admin(No admin
    write operations)
    CPY(New Control Plane Y)
    DPX(fa:fa-layer-group Current Data Plane X nodes)
    API(API requests)

    DBA -..- CPX -."DP connects to either
    CP X...".- DPX
    Admin -.X.- CPX & CPY
    DBA --"kong migrations up
    kong migrations finish"--- CPY -."...OR to CP Y".- DPX
    API --> DPX
    style API stroke:none!important,fill:none!important
    style CPX stroke-dasharray:3
    style Admin fill:none!important,stroke:none!important,color:#d44324!important
    linkStyle 2,3 stroke:#d44324!important,color:#d44324!important

Figure 3: The diagram shows a CP upgrade using the in-place strategy, where the current CP X is directly replaced by a new CP Y. The database is reused by the new CP Y, and the current CP X is shut down once all nodes are migrated.

From the two diagrams, you can see that DP nodes X remain connected to the current CP node X, or alternatively switch to the new CP node Y. Kong Gateway guarantees that new minor versions of CPs are compatible with old minor versions of the DP, so you can temporarily point DP nodes X to the new CP node Y. This lets you pause the upgrade process if needed, or conduct it over a longer period of time.

This setup is meant to be temporary, to be used only during the upgrade process. We do not recommend running a combination of new versions of CP nodes and old versions of DP nodes in a long-term production deployment.

After the CP upgrade, cluster X can be decommissioned. You can delay this task to the very end of the DP upgrade.

Data Planes

Once the CP nodes are upgraded, you can move on to upgrading the DP nodes. The only supported strategy for DP upgrades is the rolling upgrade. The following diagrams, figures 4 and 5, are the counterparts of figures 2 and 3, respectively.

Using the dual-cluster strategy with a rolling upgrade workflow:

 
flowchart TD
    DBX[(Current
    database)]
    DBY[(New
    database)]
    CPX(Current Control Plane X)
    CPY(New Control Plane Y)
    DPX(Current Data Planes X)
    DPY(New Data Planes Y)
    API(API requests)
    LB(Load balancer)
    Admin(No admin
    write operations)
    Admin2(No admin
    write operations)

    subgraph A [ ]
    Admin -.X.- CPX
    DBX -.- CPX
    DBY --- CPY
    CPX -."Current DP connects to
    either CP X...".- DPX
    Admin2 -.X.- CPY
    CPY -."...OR to CP Y".- DPX
    DPX -.90%..- LB
    CPY --- DPY --10%---- LB
    end
    subgraph B [ ]
    API --> LB & LB & LB
    end
    linkStyle 0,4 stroke:#d44324!important,color:#d44324!important
    linkStyle 8,9 stroke:#b6d7a8!important
    style CPX stroke-dasharray:3
    style DPX stroke-dasharray:3
    style DBX stroke-dasharray:3
    style API stroke:none!important,fill:none!important
    style A stroke:none!important,display:none!important
    style B stroke:none!important,display:none!important
    style Admin fill:none!important,stroke:none!important,color:#d44324!important
    style Admin2 fill:none!important,stroke:none!important,color:#d44324!important

Figure 4: The diagram shows a DP upgrade using the dual-cluster and rolling strategies. The new CP Y is deployed alongside the current CP X, while current DP nodes X are still serving API requests. In the image, the background color of the current database and CP X is grey instead of white, signaling that the old CP is already upgraded and might have been decommissioned.

Using the in-place strategy with a rolling upgrade workflow:

 
flowchart
    DBA[(Database)]
    CPX(Current Control Plane X
    #40;inactive#41;)
    CPY(New Control Plane Y)
    DPX(Current Data Planes X)
    DPY(New Data Planes Y)
    API(API requests)
    LB(Load balancer)
    Admin(No admin
    write operations)
    Admin2(No admin
    write operations)

    subgraph A [ ]
    Admin -.X.- CPX
    DBA -.X.- CPX
    DBA --- CPY
    CPX -."Current DP connects to
    either CP X...".- DPX
    Admin2 -.X.- CPY
    CPY -."OR to CP Y".- DPX -.90%..- LB
    CPY --- DPY --10%---- LB
    end
    subgraph B [ ]
    API --> LB & LB & LB
    end
    linkStyle 0,1,4 stroke:#d44324!important,color:#d44324!important
    linkStyle 8,9 stroke:#b6d7a8!important
    style CPX stroke-dasharray:3,stroke:#c1c6cdff!important
    style DPX stroke-dasharray:3
    style A stroke:none!important,color:#fff!important
    style B stroke:none!important,color:#fff!important
    style Admin fill:none!important,stroke:none!important,color:#d44324!important
    style Admin2 fill:none!important,stroke:none!important,color:#d44324!important

Figure 5: The diagram shows a DP upgrade using the in-place and rolling strategies. The diagram shows that the database is reused by the new CP Y, while current DP nodes X are still serving API requests.

When cluster fallback configuration is enabled, upgrade both the exporting instances and importing instances to exactly the same new version, including the patch level (for example, 3.14.0.1). After upgrading, validate that fallback configuration is successfully re-exported.

Preparation: Review gateway changes

The following tables categorize all relevant changelog entries from Kong Gateway Enterprise 3.10.0.0 up to 3.14.0.1. Carefully review each entry and make changes to your configuration accordingly.

Removed or deprecated

The features and behaviors in the following table have been permanently removed, or are deprecated and slated for future removal. By updating your settings based on the table below, you can avoid potential issues that may arise from using deprecated aliases and ensure that your Kong Gateway instance functions correctly with the most recent changes and improvements.

It’s essential to keep configurations up to date to maintain the system’s stability, security, and optimal performance.

Change

Category

Action Required

AI Proxy and AI Proxy Advanced

The preserve route type in these plugins has been deprecated and will be removed in a future version.
Plugins Update your AI Proxy and AI Proxy Advanced plugin configurations to use a supported route type. See the route_type options for AI Proxy and route_type options for AI Proxy Advanced.
WASM

Support for the beta WASM module was removed. The Datakit plugin is now bundled as a Lua plugin and no longer requires WASM to run.
Plugins Remove any WASM-related configuration from your deployment.
Datakit

The Datakit plugin no longer supports the handlebars node type.
Plugins Update any Datakit plugin configurations that use the handlebars node type.
Kafka Consume

The Kafka Consume plugin can no longer be applied to a Service. This plugin doesn’t proxy to a Service, so attaching it to one causes issues.

If you previously attached a Kafka Consume plugin to a Service, the plugin will no longer take effect:

  • If there is an Upstream configured in the Service, requests will be proxied to the Upstream.
  • If there is no Upstream configured in the Service, requests will not be proxied, and the plugin won’t take effect.
Plugins Remove any Service scoping from your Kafka Consume plugin configurations and reattach the plugin to a supported entity instead.
Record/map fields with an empty object default value ({}) are now correctly JSON-encoded as objects. They were previously incorrectly encoded as arrays. Admin API If you have any automation that depends on these fields being encoded as arrays, adjust it accordingly.
Service Protection

The priority of the Service Protection plugin changed from 915 to 901. The plugin now executes after other rate limiting plugins, and only evaluates requests that have passed rate limiting.
Plugins If you have custom plugins with a priority between 901 and 915 that depend on the Service Protection plugin, adjust their priorities or use dynamic plugin ordering.
AI Semantic Prompt Guard

The config.rules.max_request_body_size parameter has been replaced with config.max_request_body_size.

The old parameter is deprecated and will be removed in a future version.
Plugins Update your AI Semantic Prompt Guard plugin configurations to use config.max_request_body_size.
The SHA1 algorithm has been deprecated or removed in several places and the default algorithm has changed to SHA256.

For the Event Hooks entity, this is a breaking change. Event hook calls are now signed with HMAC-SHA256 instead of HMAC-SHA1.

For the following plugins, the SHA1 algorithm is deprecated but still supported in existing configurations:

  • Basic Auth: Uses SHA256 by default in new configurations.
  • HMAC Auth: HMAC-SHA1 is no longer included in the default set of algorithms.
  • OAuth2: Uses SHA256 for the access token cache key instead of SHA1.
Security Update your Event Hook configurations to account for HMAC-SHA256 signing.

We strongly recommend updating plugin configurations to use SHA256 whenever possible.
The untrusted_lua configuration option introduces two new modes: strict and lax, in addition to the existing sandbox mode. The default value has changed from sandbox to strict.

  • strict (new default): Does not permit network operations. Cannot be extended via untrusted_lua_sandbox_requires or untrusted_lua_sandbox_environment.
  • lax: Permits untrusted Lua code to perform network operations.
  • sandbox: Previous default.

Plugins that rely on capabilities previously allowed by sandbox mode may fail.

Security Review any plugins that use Lua sandbox capabilities. To revert to the old behavior, set untrusted_lua to sandbox or on.

These options are not recommended for security reasons.

For more information, see Sandboxing.

OpenTelemetry

The config.access_logs_endpoint parameter has changed to config.access_logs.endpoint. The old field is deprecated and will be removed in a future version.
Plugins Update your OpenTelemetry plugin configuration to use config.access_logs.endpoint.
OpenID Connect

The following header claims fields have been replaced with new fields:

  • config.upstream_headers_claims and config.upstream_headers_names → replaced by config.upstream_headers
  • config.downstream_headers_claims and config.downstream_headers_names → replaced by config.downstream_headers

The new fields support nested claims. The old fields are deprecated and will be removed in a future version.

Plugins Update your OpenID Connect plugin configuration to use config.upstream_headers and config.downstream_headers.
OpenID Connect

The config.consumer_claim field has been converted to config.consumer_claims. The parameter now accepts an array of arrays instead of an array of strings.

The old config.consumer_claim field is deprecated and will be removed in a future version.
Plugins Update your OpenID Connect plugin configuration to use config.consumer_claims.
The Kong Gateway global configuration option tls_certificate_verify now defaults to on. This affects a number of entities. SSL The recommended action depends on the affected item. See the following table of TLS certificate changes for a breakdown and recommended actions for each item.

To revert to the old behavior for all of the affected configurations, set tls_certificate_verify to off.

TLS certificate changes

The following configurations are affected by the tls_certificate_verify default change:

Category

Impact

Action

PostgreSQL database When the PostgreSQL configuration contains pg_ssl_verify = off, Kong Gateway can fail to start. Add the PostgreSQL server’s certificate to lua_ssl_trusted_certificate and set pg_ssl_verify to on.
Gateway Services Gateway Service entities with tls_verify = false where the Service protocol is https, tls, grpcs, or wss are affected as follows:
  • Traditional mode: Existing Gateway Service entities with tls_verify = false can still be loaded and used, but updating the Service’s config with tls_verify = false returns an error from the Admin API.
  • DB-less mode: Kong Gateway will fail to boot if the declarative configuration contains a Service with tls_verify = false.
  • Hybrid mode (on-prem): The data plane can boot but can’t receive a valid configuration from the control plane, and errors will appear in the data plane log.
Update the schema for all affected Gateway Services (where protocol is https, tls, grpcs, or wss) by setting tls_verify = true.
Plugins and Redis Partials Any plugin or Redis Partial configured with one of the affected certificate verification fields below is affected by this change. See the full list of plugins and fields in the following table.

If you have existing plugins using these values, Kong Gateway’s behavior differs based on deployment mode:
  • Traditional mode: Plugins can still be loaded and used, but updating the plugin’s config returns an error from the Admin API.
  • DB-less mode: Kong Gateway will fail to boot if the declarative configuration contains these values.
  • Hybrid mode (on-prem): The data plane can’t receive a valid configuration from the control plane, and errors will appear in the data plane log.
Update the schema for all affected plugins by setting these values to true. See the full list of affected plugins and fields in the table below.
To see your plugin’s schema, find your plugin on the Plugin Hub, then open the Configuration reference tab.
HashiCorp Vault HashiCorp Vault won’t function if lua_ssl_trusted_certificate isn’t configured with a valid certificate. Add the HashiCorp Vault server’s certificate to lua_ssl_trusted_certificate.
Custom plugins Custom plugins that use https, tls, grpcs, or wss may not work if their implementation doesn’t verify the server certificate. Manually update the custom plugin implementation to verify the server certificate.
Event Hooks The webhook handler’s ssl_verify setting is now true by default. Ensure your webhook endpoints have a valid TLS certificate.

To revert to the old behavior, set tls_certificate_verify to off.
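Before upgrading, it may help to audit for affected Gateway Services. A hypothetical sketch using the Admin API and jq (the address is a placeholder, and note that a Service with tls_verify unset is not affected):

```sh
# List Gateway Services that explicitly set tls_verify=false, which 3.14
# will reject on update. Requires jq; the Admin API address is a placeholder.
curl -s http://localhost:8001/services \
  | jq -r '.data[] | select(.tls_verify == false) | .name'
```

Run the same kind of audit against plugin configurations using the affected fields listed in the table below.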

The following table lists all of the plugin fields affected by the TLS/SSL certificate verification changes in Kong Gateway 3.14:

Plugin

Affected fields

ACE rate_limiting.redis.ssl_verify
ACME
  • storage_config.redis.ssl_verify
  • storage_config.vault.tls_verify
AI AWS Guardrails ssl_verify
AI Azure Content Safety ssl_verify
AI LLM as Judge https_verify
AI Proxy Advanced
  • vectordb.pgvector.ssl_verify
  • vectordb.redis.ssl_verify
AI RAG Injector
  • vectordb.pgvector.ssl_verify
  • vectordb.redis.ssl_verify
AI Rate Limiting Advanced redis.ssl_verify
AI Semantic Cache
  • vectordb.pgvector.ssl_verify
  • vectordb.redis.ssl_verify
AI Semantic Prompt Guard
  • vectordb.pgvector.ssl_verify
  • vectordb.redis.ssl_verify
AI Semantic Response Guard
  • vectordb.pgvector.ssl_verify
  • vectordb.redis.ssl_verify
AWS Lambda ssl_verify
Azure Functions https_verify
Basic Auth brute_force_protection.redis.ssl_verify
Confluent
  • security.ssl_verify
  • schema_registry.confluent.authentication.oauth2_client.ssl_verify
Confluent Consume
  • security.ssl_verify
  • schema_registry.confluent.authentication.oauth2_client.ssl_verify
  • topics.schema_registry.confluent.authentication.oauth2_client.ssl_verify
Datakit
  • nodes[].ssl_verify (for nodes with type: call)
  • resources.cache.redis.ssl_verify
Forward Proxy https_verify
GraphQL Proxy Cache Advanced redis.ssl_verify
GraphQL Rate Limiting Advanced redis.ssl_verify
Header Cert Auth ssl_verify
HTTP Log ssl_verify
JWT Signer
  • access_token_endpoints_ssl_verify
  • channel_token_endpoints_ssl_verify
  • The /rotate endpoint now enables certificate verification by default
Kafka Consume
  • security.ssl_verify
  • schema_registry.confluent.authentication.oauth2_client.ssl_verify
  • topics.schema_registry.confluent.authentication.oauth2_client.ssl_verify
Kafka Log
  • security.ssl_verify
  • schema_registry.confluent.authentication.oauth2_client.ssl_verify
Kafka Upstream
  • security.ssl_verify
  • schema_registry.confluent.authentication.oauth2_client.ssl_verify
LDAP Auth verify_ldap_host
LDAP Auth Advanced verify_ldap_host
mTLS Auth ssl_verify
OpenID Connect
  • ssl_verify
  • cluster_cache_redis.ssl_verify
  • redis.ssl_verify
  • session_memcached_ssl_verify
Proxy Cache Advanced redis.ssl_verify
Rate Limiting redis.ssl_verify
Rate Limiting Advanced redis.ssl_verify
Redis Partials ssl_verify
Request Callout
  • cache.redis.ssl_verify
  • callouts.request.http_opts.ssl_verify
Response Rate Limiting redis.ssl_verify
SAML redis.ssl_verify
Service Protection redis.ssl_verify
Solace Consume session.ssl_validate_certificate
Solace Log session.ssl_validate_certificate
Solace Upstream session.ssl_validate_certificate
TCP Log ssl_verify
Upstream OAuth
  • client.ssl_verify
  • cache.redis.ssl_verify

Compatible

The following table lists behavior changes that may cause your database configuration or kong.conf to fail. This includes deprecated (but not removed) features.

Change

Category

Action Required

Konnect Application Auth

The priority of the internal konnect-application-auth plugin changed from 950 to 960. This ensures that the execution order of the konnect-application-auth plugin and the ACL plugin is correct.
Plugins If you have custom plugins with a priority between 950 and 960 that depend on the konnect-application-auth plugin, adjust their priorities or use dynamic plugin ordering.
AI Semantic Cache, AI Semantic Prompt Guard, and AI Proxy Advanced

These plugins now use a separate column to store the namespace instead of including it in the table name.

This change invalidates all caches previously created by these plugins.
Plugins If you are using a long cache TTL for AI Semantic Cache, AI Semantic Prompt Guard, or AI Proxy Advanced, plan for cache warmup after upgrading.
Record and map fields with an empty object default value ({}) are now correctly JSON-encoded as objects. They were previously incorrectly encoded as arrays. Admin API If you have any integrations or scripts that expect empty record or map fields to be returned as arrays, update them to expect objects.
The default setting for Route protocols has changed from http,https to https.

New Routes will have this default value, while existing Routes are unaffected.
Router If you have any automation that creates Routes, update your configuration to set the required protocol explicitly.
hide_credentials is now set to true by default in the following plugins:

This change doesn’t affect existing plugins, but new plugins will have this setting configured by default.

Plugins Review any automation that creates new plugin configurations, and adjust if needed.
Kong Gateway now validates the database connection configuration at startup and won’t start if errors are detected. DB config Before upgrading, verify that your database connection settings in kong.conf are correct and that the database is reachable.

Perform upgrade

Now that you have chosen an upgrade strategy and reviewed all the relevant changes between the 3.10 and 3.14 LTS releases, you can start the upgrade with your chosen strategy:

Traditional mode or control planes in hybrid mode: follow the dual-cluster or in-place upgrade strategy.

DB-less mode or data planes in hybrid mode: follow the rolling upgrade strategy.
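After executing the strategy in each environment, a basic smoke test might look like this sketch; the ports, route, and hostnames are placeholders for your own environment:

```sh
# Hypothetical post-upgrade smoke test. Ports, routes, and hostnames are
# placeholders for your own environment.

kong version                           # confirm the node runs the 3.14 binary
curl -s http://localhost:8001/status   # check node health via the Admin API
curl -s -o /dev/null -w '%{http_code}\n' \
  http://localhost:8000/my-test-route  # confirm a known route still proxies
```

If any check fails, stop the rollout and restore from the backups you took during preparation.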

Troubleshooting

If you run into issues during the upgrade and need to roll back, restore Kong Gateway based on the backup method.
