Performance testing benchmarks

As of Kong Gateway 3.6.x, Kong publishes performance results for Kong Gateway, along with the test methodology and details. Kong plans to conduct and publish Kong Gateway performance results for each subsequent minor release.

In addition to viewing our performance test results, you can use our public test suite to conduct your own performance tests with Kong Gateway.

Kong Gateway performance testing method and results for 3.8.x

Kong measures Kong Gateway performance using our public test suite.

The following sections explain the test methodology, results, and configuration.

Test method

The performance tests cover a number of baseline configurations and common use cases of Kong Gateway. The following describes the test cases used and the configuration methodology:

  • Environment: Kubernetes environment on AWS infrastructure.
  • Test use cases (matching the test types in the results table below):
      • Kong proxy with no plugins
      • Rate limit and no auth
      • Rate limit and key auth
      • Rate limit and basic auth
  • Routes and consumers: Each case was tested with two different options: one with one route and one consumer, and one with 100 routes and 100 consumers, for a total of eight test cases. For test cases that didn’t require authentication, no consumers were used.
  • Traffic distribution: Normal distribution across both routes and consumers.
  • Protocol: HTTPS only.
  • Sample size: Each test case was run five times, each for a duration of 15 minutes. The results are an average of the five different test runs.
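The test suite's included bash scripts drive these runs, but conceptually each sample is a single fixed-duration K6 run against the HTTPS proxy endpoint. A minimal sketch, assuming a hypothetical script name and virtual user count (the real scripts and load profile live in the test suite repo):

```bash
# Hypothetical single 15-minute sample run; the test suite's own bash scripts
# wrap this and repeat it five times per test case. The script name and the
# number of virtual users (--vus) are placeholders, not the suite's actual values.
k6 run --vus 100 --duration 15m https-proxy-test.js
```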

Kong Gateway 3.8.x performance benchmark results

| Test type | Number of routes/consumers | Requests per second (RPS) | P99 latency (ms) | P95 latency (ms) |
|---|---|---|---|---|
| Kong proxy with no plugins | 1 route, 0 consumers | 142443.4 | 6.24 | 3.55 |
| Kong proxy with no plugins | 100 routes, 0 consumers | 137561.7 | 6.36 | 3.58 |
| Rate limit and no auth | 1 route, 0 consumers | 120897.4 | 8.08 | 3.60 |
| Rate limit and no auth | 100 routes, 0 consumers | 116867.2 | 8.51 | 3.78 |
| Rate limit and key auth | 1 route, 1 consumer | 105657.4 | 8.62 | 4.38 |
| Rate limit and key auth | 100 routes, 100 consumers | 100047.6 | 9.12 | 4.45 |
| Rate limit and basic auth | 1 route, 1 consumer | 98031.6 | 10.47 | 5.02 |
| Rate limit and basic auth | 100 routes, 100 consumers | 92548.2 | 9.80 | 5.25 |

Test environment

Kong ran these tests in AWS using EC2 machines. We used Kubernetes taints to ensure that Kong Gateway ran on its own node while the load testing and observability tools ran on their own separate nodes in the same cluster.
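Node isolation of this kind is typically achieved with a taint on the Kong node plus a matching toleration and node selector on the Kong Gateway deployment. A minimal sketch, with hypothetical node and label names (not the exact ones used in our setup):

```bash
# Hypothetical: reserve one node for Kong Gateway only.
# The node name and the "dedicated=kong" key/value are placeholders.
kubectl taint nodes ip-10-0-1-23.ec2.internal dedicated=kong:NoSchedule
kubectl label nodes ip-10-0-1-23.ec2.internal dedicated=kong

# The Kong Gateway pods then need a matching toleration and a nodeSelector
# (for example via the Helm chart's tolerations/nodeSelector values) so that
# only Kong schedules onto this node, while K6 and the observability stack
# land on their own nodes.
```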

Kong Gateway ran on a single dedicated c5.4xlarge instance, and the two nodes for the observability stack and K6 ran on dedicated c5.metal instances. We used the metal instances for the observability and load generation toolchain to ensure it wasn't resource constrained in any way: K6 is very resource intensive when generating a high volume of traffic, and we observed that running the toolchain on smaller or less powerful instances made the observability and load generation tools a bottleneck that limited the measured Kong Gateway performance.

Test configuration

For these tests, we changed the number of worker processes to match the number of cores available to the node running Kong Gateway, which had 16 vCPUs. Accordingly, we set the number of worker processes to 16, in line with Kong's overall performance guidance. No other tuning was applied.
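As a rough illustration of how this could be set in a Kubernetes deployment (not necessarily the exact mechanism the test suite uses), Kong's `nginx_worker_processes` property can be passed as an environment variable through the Helm chart; the release name and namespace below are assumptions:

```bash
# Sketch: pin Kong's NGINX worker process count to the node's 16 vCPUs.
# Release name, namespace, and chart values are assumptions; adjust to your deployment.
helm upgrade kong kong/kong \
  --namespace kong \
  --set-string env.nginx_worker_processes=16   # becomes KONG_NGINX_WORKER_PROCESSES=16
```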

Conduct your own performance test using Kong’s test suite

You can use Kong's public test suite repo to spin up an EKS cluster with Kong Gateway, Redis, Prometheus, and Grafana installed. It also configures K6, a popular open-source load testing tool, so you can conduct your own performance tests.

Once the cluster is up, you can apply the provided YAML to configure Kong Gateway for the included test cases and to enable the observability plugins whose metrics are scraped by the Prometheus instance already provisioned in the cluster. If you'd rather define your own test scenarios, you can write the Kong Gateway configuration you want to test and apply it to the cluster, as in the sketch below.
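For example, a custom scenario might apply a plugin through the Kong Ingress Controller's KongPlugin CRD. This is a minimal sketch with placeholder names and a local rate-limiting policy; the manifests actually shipped with the test suite (which use the in-cluster Redis) will differ:

```bash
# Minimal sketch of a custom test scenario: a rate-limiting plugin defined as a CRD.
# The plugin name, limit, policy, and namespace are placeholders.
kubectl apply -n kong -f - <<'EOF'
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: perf-rate-limit
plugin: rate-limiting
config:
  minute: 1000000
  policy: local
EOF

# Attach the plugin to a route or service by annotating the corresponding
# Ingress/Service with: konghq.com/plugins: perf-rate-limit
```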

From there, you can use the included bash scripts to run K6 tests. After the tests complete, you can port-forward into the cluster and view the Grafana dashboard with the performance results.
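For example (the namespace, service name, and port are assumptions; they depend on how the suite installs Grafana):

```bash
# Forward the in-cluster Grafana service to localhost and open the dashboards.
# Namespace, service name, and port are assumptions; check `kubectl get svc -A`.
kubectl port-forward -n monitoring svc/grafana 3000:80
# then browse to http://localhost:3000
```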

More information