Set up with OpenAI and Valkey (v3.14+)

Enable AI semantic caching with the OpenAI embeddings API and a Valkey vector database.

Valkey is detected automatically when you use the redis vectordb strategy: Kong Gateway queries the server and switches to the Valkey-specific driver when it finds a Valkey backend.

If you use text-embedding-ada-002 as the embedding model, you must set a fixed dimension of 1536, as required by the official model specification. Alternatively, use the text-embedding-3-small model, which supports dynamic dimensions and works without a fixed value.
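As an illustrative sketch, a declarative plugin entry combining these settings might look like the fragment below. Field names follow the AI Semantic Cache plugin schema, but verify them against the plugin reference for your Kong Gateway version; host, key, and dimension values are placeholders.

```yaml
# Illustrative AI Semantic Cache plugin entry for a declarative kong.yml.
# All values in angle brackets are placeholders.
plugins:
  - name: ai-semantic-cache
    config:
      embeddings:
        auth:
          header_name: Authorization
          header_value: Bearer <your-openai-api-key>
        model:
          provider: openai
          name: text-embedding-3-small
      vectordb:
        strategy: redis        # Valkey is auto-detected behind this strategy
        distance_metric: cosine
        dimensions: 1024       # must be exactly 1536 for text-embedding-ada-002
        redis:
          host: <valkey-host>
          port: 6379
```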

Prerequisites

  • The AI Proxy or AI Proxy Advanced plugin is enabled

  • An OpenAI account

  • A Valkey 8.x instance

  • Port 6379 (or your custom Valkey port) is open and reachable from Kong Gateway

Environment variables

  • OPENAI_API_KEY: Your OpenAI API key

  • VALKEY_HOST: The host where your Valkey instance runs
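The two variables above can be exported in your shell before configuring the plugin. The values below are placeholders; substitute your own key and hostname.

```shell
# Placeholder values: replace with your real OpenAI API key and Valkey host.
export OPENAI_API_KEY='sk-your-openai-api-key'
export VALKEY_HOST='valkey.example.com'
```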

Set up the plugin
