
Monitoring with Prometheus

Prometheus is a popular systems monitoring and alerting toolkit. It implements a multi-dimensional time series data model, collecting metrics data from instrumented targets via a pull model over HTTP.

Kong Gateway supports Prometheus through the Prometheus plugin, which exposes Kong Gateway performance and proxied upstream service metrics on the Admin API's /metrics endpoint.

This guide will help you set up a test Kong Gateway instance and a Prometheus service. You will then generate sample requests to Kong Gateway and observe the collected monitoring data.

Prerequisites

This guide assumes the following tools are installed locally:

  • Docker is used to run Kong Gateway, the supporting database, and Prometheus locally.
  • curl is used to send requests to Kong Gateway. curl is pre-installed on most systems.

Configure Prometheus monitoring

  1. Install Kong Gateway:

    This step is optional if you already have a Kong Gateway installation. When using an existing Kong Gateway, you will need to modify the commands in this guide to account for network connectivity and your existing services and routes.

    curl -Ls https://get.konghq.com/quickstart | bash -s -- -m

    The -m flag instructs the script to install a mock service that is used in this guide to generate sample metrics.

    Once the Kong Gateway is ready, you will see the following message:

    Kong Gateway Ready
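    If you want to confirm that the gateway is reachable before continuing, you can query the Admin API status endpoint (this assumes the quickstart's default Admin API port of 8001):

    curl -s localhost:8001/status

    The response includes basic server and datastore information.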
  2. Install the Kong Gateway Prometheus plugin:

    curl -s -X POST http://localhost:8001/plugins/ \
      --data "name=prometheus"

    You should receive a JSON response with the details of the installed plugin.
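    If you want to double-check, you can also list the configured plugins and confirm that prometheus appears in the result (an optional verification, not required for the rest of the guide):

    curl -s http://localhost:8001/plugins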

  3. Create a Prometheus configuration file named prometheus.yml in the current directory, and add the following configuration:

    scrape_configs:
      - job_name: 'kong'
        scrape_interval: 5s
        static_configs:
          - targets: ['kong-quickstart-gateway:8001']

    See the Prometheus Configuration Documentation for details on these settings.
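    Optionally, you can validate the file before starting Prometheus. The prom/prometheus image ships with promtool, so one way to check the configuration is to run it against the mounted file (a quick sketch, assuming prometheus.yml is in your current directory):

    docker run --rm -v "$(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml" \
      --entrypoint /bin/promtool prom/prometheus:latest \
      check config /etc/prometheus/prometheus.yml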

  4. Run a Prometheus server, and pass it the configuration file created in the previous step. Prometheus will begin to scrape metrics data from Kong Gateway.

    docker run -d --name kong-quickstart-prometheus \
      --network=kong-quickstart-net -p 9090:9090 \
      -v "$(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml" \
      prom/prometheus:latest
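    Once the container is running, you can confirm that Prometheus has discovered the Kong Gateway target by querying its targets API; after the first scrape, the target's health should be reported as "up":

    curl -s localhost:9090/api/v1/targets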
  5. Generate sample traffic to the mock service. This allows you to observe metrics generated by the Prometheus plugin. The following command generates 60 requests over one minute. Run the following in a new terminal:

    for _ in {1..60}; do curl -s localhost:8000/mock/anything; sleep 1; done
  6. You can view the metric data directly from Kong Gateway by querying the /metrics endpoint on the Admin API:

    curl -s localhost:8001/metrics

    Kong Gateway reports system-wide performance metrics by default. Once the Prometheus plugin is installed and traffic is being proxied, it records additional metrics across service, route, and upstream dimensions.

    The response will look similar to the following snippet:

    # HELP kong_bandwidth Total bandwidth in bytes consumed per service/route in Kong
    # TYPE kong_bandwidth counter
    kong_bandwidth{service="mock",route="mock",type="egress"} 13579
    kong_bandwidth{service="mock",route="mock",type="ingress"} 540
    # HELP kong_datastore_reachable Datastore reachable from Kong, 0 is unreachable
    # TYPE kong_datastore_reachable gauge
    kong_datastore_reachable 1
    # HELP kong_http_status HTTP status codes per service/route in Kong
    # TYPE kong_http_status counter
    kong_http_status{service="mock",route="mock",code="200"} 6
    # HELP kong_latency Latency added by Kong, total request time and upstream latency for each service/route in Kong
    # TYPE kong_latency histogram
    kong_latency_bucket{service="mock",route="mock",type="kong",le="1"} 4
    kong_latency_bucket{service="mock",route="mock",type="kong",le="2"} 4
    ...

    See the Kong Prometheus Plugin documentation for details on the available metrics and configurations.
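    The full /metrics output can get long once several services and routes are proxied. While testing, you can filter for a single metric family, for example the request counts per status code:

    curl -s localhost:8001/metrics | grep kong_http_status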

  7. Prometheus provides multiple ways to query collected metric data.

    You can open the Prometheus expression browser by pointing a web browser to http://localhost:9090/graph.

    You can also query Prometheus directly using its HTTP API:

    curl -s 'localhost:9090/api/v1/query?query=kong_http_status'
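    The same endpoint accepts any PromQL expression. For example, a rate query that aggregates request counts by status code over the last minute (the exact label values depend on your configured services and routes):

    curl -s 'http://localhost:9090/api/v1/query' \
      --data-urlencode 'query=sum(rate(kong_http_status[1m])) by (code)'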

    Prometheus also provides documentation for setting up Grafana as a visualization tool for the collected time series data.

Cleanup

Once you are done experimenting with Prometheus and Kong Gateway, you can use the following commands to stop and remove the services created in this guide:

  docker stop kong-quickstart-prometheus
  curl -Ls https://get.konghq.com/quickstart | bash -s -- -d
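
If you also want to remove the stopped Prometheus container and the prometheus.yml file created earlier, you can additionally run:

  docker rm kong-quickstart-prometheus
  rm prometheus.yml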

More information