Limit A2A request body size

TL;DR

Enable the Request Size Limiting plugin on the same Gateway Service or Route as the AI A2A Proxy plugin. Requests that exceed the configured body size are rejected with a 413 Request Entity Too Large response.

Prerequisites

This is a Konnect tutorial and requires a Konnect personal access token.

  1. Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.

  2. Export your token to an environment variable:

     export KONNECT_TOKEN='YOUR_KONNECT_PAT'
    
  3. Run the quickstart script to automatically provision a Control Plane and Data Plane, and configure your environment:

     curl -Ls https://get.konghq.com/quickstart | bash -s -- -k $KONNECT_TOKEN --deck-output
    

    This sets up a Konnect Control Plane named quickstart, provisions a local Data Plane, and prints out the following environment variable exports:

     export DECK_KONNECT_TOKEN=$KONNECT_TOKEN
     export DECK_KONNECT_CONTROL_PLANE_NAME=quickstart
     export KONNECT_CONTROL_PLANE_URL=https://us.api.konghq.com
     export KONNECT_PROXY_URL='http://localhost:8000'
    

    Copy and paste these into your terminal to configure your session.

This tutorial requires Kong Gateway Enterprise. If you don’t have Kong Gateway set up yet, you can use the quickstart script with an enterprise license to get an instance of Kong Gateway running almost instantly.

  1. Export your license to an environment variable:

     export KONG_LICENSE_DATA='LICENSE-CONTENTS-GO-HERE'
    
  2. Run the quickstart script:

    curl -Ls https://get.konghq.com/quickstart | bash -s -- -e KONG_LICENSE_DATA 
    

    Once Kong Gateway is ready, you will see the following message:

     Kong Gateway Ready
    

decK is a CLI tool for managing Kong Gateway declaratively with state files. To complete this tutorial, install decK version 1.43 or later.

This guide uses deck gateway apply, which directly applies entity configuration to your Gateway instance. We recommend upgrading your decK installation to take advantage of this tool.

You can check your current decK version with deck version.

For this tutorial, you’ll need Kong Gateway entities, like Gateway Services and Routes, pre-configured. These entities are essential for Kong Gateway to function but installing them isn’t the focus of this guide. Follow these steps to pre-configure them:

  1. Run the following command:

    echo '
    _format_version: "3.0"
    services:
      - name: a2a-currency-agent
        url: http://host.docker.internal:10000
    routes:
      - name: a2a-route
        paths:
        - "/a2a"
        strip_path: true
        service:
          name: a2a-currency-agent
        protocols:
        - http
        - https
    ' | deck gateway apply -
    

To learn more about entities, you can read our entities documentation.

This tutorial uses OpenAI:

  1. Create an OpenAI account.
  2. Get an API key.
  3. Create a decK variable with the API key:

    export DECK_OPENAI_API_KEY='YOUR OPENAI API KEY'
    

You need a running A2A-compliant agent. This guide uses a sample currency conversion agent from the A2A project.

Create a docker-compose.yaml file:

cat <<'EOF' > docker-compose.yaml
services:
  a2a-agent:
    container_name: a2a-currency-agent
    build:
      context: .
      dockerfile_inline: |
        FROM python:3.12-slim
        WORKDIR /app
        RUN pip install uv && apt-get update && apt-get install -y git
        RUN git clone --depth 1 https://github.com/a2aproject/a2a-samples.git /tmp/a2a && \
            cp -r /tmp/a2a/samples/python/agents/langgraph/* . && \
            rm -rf /tmp/a2a
        ENV UV_COMPILE_BYTECODE=1 UV_LINK_MODE=copy
        RUN uv sync --frozen --no-dev
        EXPOSE 10000
        CMD ["uv", "run", "app", "--host", "0.0.0.0"]
    environment:
      - model_source=openai
      - API_KEY=${DECK_OPENAI_API_KEY}
      - TOOL_LLM_URL=https://api.openai.com/v1
      - TOOL_LLM_NAME=gpt-5.1
    ports:
      - "10000:10000"
EOF

Export your OpenAI API key and start the agent:

export DECK_OPENAI_API_KEY='your-openai-key'
docker compose up --build -d

The agent listens on port 10000 and uses the A2A JSON-RPC protocol to handle currency conversion queries. In this guide, the gateway service points to host.docker.internal:10000 instead of the container name because Kong Gateway runs in its own container with a separate DNS resolver.

Enable the AI A2A Proxy plugin

The AI A2A Proxy plugin parses A2A JSON-RPC requests and proxies them to the upstream agent.

Setting max_request_body_size to 0 disables the body size cap entirely, so the full request body is buffered for payload logging and request detection. This guide requires full buffering because log_payloads is enabled. Any positive value sets a hard byte ceiling instead. For more details on logging options, see the AI A2A Proxy plugin reference.

echo '
_format_version: "3.0"
plugins:
  - name: ai-a2a-proxy
    config:
      max_request_body_size: 0
      logging:
        log_statistics: true
        log_payloads: true
' | deck gateway apply -
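
If you want the plugin itself to stop buffering past a hard ceiling, set a positive byte value instead. The following is a sketch with a hypothetical 32 KB cap (pick a value that fits your agent's largest expected message); payload logging is disabled here because, as noted above, it needs the full body:

```yaml
_format_version: "3.0"
plugins:
  - name: ai-a2a-proxy
    config:
      # Hypothetical 32 KB ceiling, in bytes; tune this to your workload.
      max_request_body_size: 32768
      logging:
        log_statistics: true
        # Payload logging requires the full body to be buffered, so turn it
        # off when a positive size cap is in place.
        log_payloads: false
```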

Enable the Request Size Limiting plugin

The Request Size Limiting plugin rejects requests with a body larger than the configured limit. This configuration sets a 1 MB limit, which is intentionally low to make it easier to trigger in this guide.

echo '
_format_version: "3.0"
plugins:
  - name: request-size-limiting
    config:
      allowed_payload_size: 1
      size_unit: megabytes
      require_content_length: false
' | deck gateway apply -

require_content_length is set to false so the plugin inspects the actual body size rather than relying on the Content-Length header. Set allowed_payload_size to a value appropriate for your production workload.
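
Before sending a request, you can sanity-check a body's size locally. A small sketch follows; the plugin's own unit conversion governs the real cutoff, so treat 1,000,000 bytes as an approximation of the 1 MB limit:

```shell
# Approximate 1 MB limit in bytes; the plugin's unit conversion for
# size_unit: megabytes determines the exact threshold.
LIMIT=1000000
BODY='{"jsonrpc":"2.0","id":"1","method":"message/send","params":{}}'
# wc -c counts bytes; tr strips the padding some wc implementations emit.
SIZE=$(printf '%s' "$BODY" | wc -c | tr -d ' ')
echo "body is $SIZE bytes; approximate limit is $LIMIT bytes"
```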

Validate requests within the size limit

Send a standard A2A request that falls within the 1 MB limit:

curl -X POST "$KONNECT_PROXY_URL/a2a" \
     --no-progress-meter --fail-with-body  \
     -H "Content-Type: application/json" \
     --json '{
       "jsonrpc": "2.0",
       "id": "1",
       "method": "message/send",
       "params": {
         "message": {
           "kind": "message",
           "messageId": "msg-001",
           "role": "user",
           "parts": [
             {
               "kind": "text",
               "text": "How much is 100 USD in EUR?"
             }
           ]
         }
       }
     }'

Kong Gateway proxies the request to the upstream A2A agent and returns a JSON-RPC response.
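
In scripts, you can pull the reply text out of the JSON-RPC response. This sketch uses python3 against a canned response in the A2A Message shape; the real result structure depends on whether the agent returns a Message or a Task object:

```shell
# Canned example response for illustration; a live agent may instead return
# a Task object with a different structure.
RESPONSE='{"jsonrpc":"2.0","id":"1","result":{"kind":"message","parts":[{"kind":"text","text":"100 USD is about 92 EUR"}]}}'
# Extract the text of the first part from the result.
REPLY=$(echo "$RESPONSE" | python3 -c 'import json, sys; print(json.load(sys.stdin)["result"]["parts"][0]["text"])')
echo "$REPLY"
```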

Validate oversized requests are rejected

Generate a payload that exceeds 1 MB and send it as an A2A request:

python3 -c "
import json
payload = {
    'jsonrpc': '2.0',
    'id': '2',
    'method': 'message/send',
    'params': {
        'message': {
            'kind': 'message',
            'messageId': 'msg-002',
            'role': 'user',
            'parts': [
                {
                    'kind': 'text',
                    'text': 'A' * 1100000
                }
            ]
        }
    }
}
print(json.dumps(payload))
" > /tmp/large_payload.json

curl -i --no-progress-meter \
  http://localhost:8000/a2a \
  -H "Content-Type: application/json" \
  -d @/tmp/large_payload.json
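
If python3 isn't available, a rough shell-only alternative (assuming head and tr behave as on typical Linux or macOS systems) builds an equivalent oversized payload:

```shell
# Repeat 'A' 1,100,000 times to push the text part past 1 MB.
TEXT=$(head -c 1100000 /dev/zero | tr '\0' 'A')
# Wrap it in the same JSON-RPC envelope used above.
printf '{"jsonrpc":"2.0","id":"2","method":"message/send","params":{"message":{"kind":"message","messageId":"msg-002","role":"user","parts":[{"kind":"text","text":"%s"}]}}}' "$TEXT" > /tmp/large_payload.json
# Confirm the file exceeds the limit before sending it.
wc -c /tmp/large_payload.json
```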

Kong Gateway rejects the request with 413 Request Entity Too Large:

HTTP/2 413
...
{
  "message": "Request size limit exceeded"
}

Cleanup

If you created a new control plane and want to conserve your free trial credits or avoid unnecessary charges, delete the new control plane used in this tutorial.

curl -Ls https://get.konghq.com/quickstart | bash -s -- -d

FAQs

Why do A2A requests need a size limit?

A2A messages can carry FilePart and DataPart content alongside text. Without a size limit, a client could send arbitrarily large payloads to the upstream agent, consuming memory and bandwidth. The Request Size Limiting plugin rejects oversized requests before they reach the upstream.

How does max_request_body_size differ from the Request Size Limiting plugin?

The two settings serve different purposes. config.max_request_body_size on the AI A2A Proxy plugin controls how much of the request body the plugin reads for JSON-RPC detection. The Request Size Limiting plugin rejects the entire request if the body exceeds the configured limit. Set both if you want to cap detection parsing and reject oversized requests.

Does the size limit apply to responses?

No. The Request Size Limiting plugin checks the request body size, not the response. Streaming SSE responses from the upstream agent are not affected.
