Securing the Admin API

Kong Gateway’s Admin API provides a RESTful interface for administration and configuration of Services, Routes, Plugins, Consumers, and Credentials. Because this API allows full control of Kong, it is important to secure this API against unwanted access. This document describes a few possible approaches to securing the Admin API.

Network Layer Access Restrictions

Minimal Listening Footprint

Since its 0.12.0 release, Kong by default only accepts Admin API requests on the local interface, as specified by its default admin_listen value:

  admin_listen = 127.0.0.1:8001

If you change this value, always keep the listening footprint to a minimum in order to avoid exposing your Admin API to third parties, which could seriously compromise the security of your Kong cluster as a whole. For example, avoid binding Kong to all of your interfaces with values such as 0.0.0.0:8001.
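For example, to make the Admin API reachable only on a single private interface (and optionally over TLS on a second port), a kong.conf entry might look like the following; the address is illustrative:

  admin_listen = 10.10.10.3:8001, 10.10.10.3:8444 ssl

This keeps the Admin API off public interfaces while still allowing trusted hosts on the private network to reach it.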

Layer 3/4 Network Controls

In cases where the Admin API must be exposed beyond a localhost interface, network security best practices dictate that network-layer access be restricted as much as possible. Consider an environment in which Kong listens on a private network interface, but should only be accessed by a small subset of an IP range. In such a case, host-based firewalls (e.g. iptables) are useful in limiting input traffic ranges. For example:

  # assume that Kong is listening on the address defined below, within a
  # /24 CIDR block, and only a select few hosts in this range should have access
  grep admin_listen /etc/kong/kong.conf
  admin_listen = 10.10.10.3:8001

  # explicitly allow TCP packets on port 8001 from the Kong node itself
  # (this is not necessary if Admin API requests are not sent from the node)
  iptables -A INPUT -s 10.10.10.3 -m tcp -p tcp --dport 8001 -j ACCEPT

  # explicitly allow TCP packets on port 8001 from the following addresses
  iptables -A INPUT -s 10.10.10.4 -m tcp -p tcp --dport 8001 -j ACCEPT
  iptables -A INPUT -s 10.10.10.5 -m tcp -p tcp --dport 8001 -j ACCEPT

  # drop all TCP packets on port 8001 not in the above IP list
  iptables -A INPUT -m tcp -p tcp --dport 8001 -j DROP
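The intent of these rules — accept a small allow-list of source addresses and drop everything else — can be sketched in Python with the standard ipaddress module. The addresses mirror the example above; the helper function is ours, for illustration only:

```python
import ipaddress

# explicit allow-list of source addresses, mirroring the ACCEPT rules above
ALLOWED_SOURCES = {"10.10.10.3", "10.10.10.4", "10.10.10.5"}

def admin_port_verdict(source_ip: str) -> str:
    """Return the iptables-style verdict for a packet destined to port 8001.

    Rules are evaluated in order: explicit ACCEPTs first, then a final DROP.
    """
    addr = ipaddress.ip_address(source_ip)  # validates the address format
    if str(addr) in ALLOWED_SOURCES:
        return "ACCEPT"
    return "DROP"

print(admin_port_verdict("10.10.10.4"))   # ACCEPT
print(admin_port_verdict("10.10.10.99"))  # DROP
```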

Additional controls, such as similar ACLs applied at a network device level, are encouraged, but fall outside the scope of this document.

Kong API Loopback

Kong’s routing design allows it to serve as a proxy for the Admin API itself. In this manner, Kong itself can be used to provide fine-grained access control to the Admin API. Such an environment requires bootstrapping a new Service that defines the admin_listen address as the Service’s url.

For example, let’s assume that Kong’s admin_listen is 127.0.0.1:8001, so the Admin API is only available from localhost, while port 8000 serves proxy traffic, presumably exposed via myhost.dev:8000.

We want to expose the Admin API via the URL :8000/admin-api in a controlled way. We can do so by creating a Service and Route for it from inside 127.0.0.1:

  curl -X POST http://127.0.0.1:8001/services \
    --data name=admin-api \
    --data host=127.0.0.1 \
    --data port=8001

  curl -X POST http://127.0.0.1:8001/services/admin-api/routes \
    --data paths[]=/admin-api

We can now transparently reach the Admin API through the proxy server, from outside 127.0.0.1:

  curl myhost.dev:8000/admin-api/services
  {
    "data": [
      {
        "id": "653b21bd-4d81-4573-ba00-177cc0108dec",
        "created_at": 1422386534,
        "updated_at": 1422386534,
        "name": "admin-api",
        "retries": 5,
        "protocol": "http",
        "host": "127.0.0.1",
        "port": 8001,
        "path": "/admin-api",
        "connect_timeout": 60000,
        "write_timeout": 60000,
        "read_timeout": 60000
      }
    ],
    "total": 1
  }
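Note that Routes strip the matched path prefix before proxying by default (strip_path defaults to true), which is why a request to /admin-api/services reaches the Admin API as /services. A minimal sketch of that rewrite in Python — illustrative only, not Kong’s implementation:

```python
def upstream_path(request_path: str, route_path: str) -> str:
    """Mimic the default strip_path=true behavior: remove the matched
    Route prefix before forwarding the request to the upstream Service."""
    if request_path == route_path or request_path.startswith(route_path + "/"):
        stripped = request_path[len(route_path):]
        # an empty remainder still needs to be a valid path
        return stripped if stripped.startswith("/") else "/"
    return request_path  # no match: path is forwarded unchanged

# A request for /admin-api/services is proxied to the Admin API as /services.
print(upstream_path("/admin-api/services", "/admin-api"))  # -> /services
```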

From here, simply apply desired Kong-specific security controls (such as basic or key authentication, IP restrictions, or access control lists) as you would normally to any other Kong API.

If you are using Docker to host Kong Gateway Enterprise, you can accomplish a similar task using a declarative configuration such as this one:

  _format_version: "3.0"
  services:
  - name: admin-api
    url: http://127.0.0.1:8001
    routes:
    - paths:
      - /admin-api
    plugins:
    - name: key-auth
  consumers:
  - username: admin
    keyauth_credentials:
    - key: secret

Under this configuration, the Admin API is available through the /admin-api path, but only for requests that include the apikey=secret query parameter.
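Conceptually, the key-auth plugin configured above rejects any request whose apikey query parameter does not match a known Consumer credential. A simplified sketch of that check — this is not Kong’s implementation; the key value mirrors the declarative file:

```python
from urllib.parse import urlparse, parse_qs

# key -> consumer username, as declared in kong.yml
VALID_KEYS = {"secret": "admin"}

def authenticate(url: str):
    """Return (status, consumer) for a request URL, mimicking key-auth."""
    params = parse_qs(urlparse(url).query)
    key = params.get("apikey", [None])[0]
    if key in VALID_KEYS:
        return 200, VALID_KEYS[key]
    return 401, None  # missing or unknown key: Unauthorized

print(authenticate("http://myhost.dev:8000/admin-api/services?apikey=secret"))
print(authenticate("http://myhost.dev:8000/admin-api/services"))
```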

Assuming that the file above is stored at $(pwd)/kong.yml, a DB-less Kong Gateway Enterprise instance can load it at startup like this:

  docker run -d --name kong-ee \
    -e "KONG_DATABASE=off" \
    -e "KONG_DECLARATIVE_CONFIG=/home/kong/kong.yml" \
    -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
    -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
    -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
    -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
    -e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
    -v $(pwd):/home/kong \
    kong-ee

With a PostgreSQL database, the initialization steps would be the following:

  # Start PostgreSQL in a Docker container
  # Note that PG_PASSWORD needs to be set
  docker run --name kong-ee-database \
    -p 5432:5432 \
    -e "POSTGRES_USER=kong" \
    -e "POSTGRES_DB=kong" \
    -e "POSTGRES_PASSWORD=$PG_PASSWORD" \
    -d postgres:9.6

  # Run Kong migrations to initialize the database
  docker run --rm \
    --link kong-ee-database:kong-ee-database \
    -e "KONG_DATABASE=postgres" \
    -e "KONG_PG_HOST=kong-ee-database" \
    -e "KONG_PG_PASSWORD=$PG_PASSWORD" \
    kong-ee kong migrations bootstrap

  # Load the configuration file, which enables the Admin API loopback
  # (assumes kong.yml is located at $(pwd)/kong.yml)
  docker run --rm \
    --link kong-ee-database:kong-ee-database \
    -e "KONG_DATABASE=postgres" \
    -e "KONG_PG_HOST=kong-ee-database" \
    -e "KONG_PG_PASSWORD=$PG_PASSWORD" \
    -v $(pwd):/home/kong \
    kong-ee kong config db_import /home/kong/kong.yml

  # Start Kong
  docker run -d --name kong-ee \
    --link kong-ee-database:kong-ee-database \
    -e "KONG_DATABASE=postgres" \
    -e "KONG_PG_HOST=kong-ee-database" \
    -e "KONG_PG_PASSWORD=$PG_PASSWORD" \
    -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
    -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
    -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
    -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
    -e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
    kong-ee

In both cases, once Kong is up and running, the Admin API would be available but protected:

  curl myhost.dev:8000/admin-api/services
  => HTTP/1.1 401 Unauthorized

  curl myhost.dev:8000/admin-api/services?apikey=secret
  => HTTP/1.1 200 OK
  {
    "data": [
      {
        "ca_certificates": null,
        "client_certificate": null,
        "connect_timeout": 60000,
        ...
      }
    ]
  }

Custom Nginx Configuration

Kong is tightly coupled with Nginx as an HTTP daemon, and can thus be integrated into environments with custom Nginx configurations. In this manner, use cases with complex security/access control requirements can use the full power of Nginx/OpenResty to build server/location blocks to house the Admin API as necessary. This allows such environments to leverage native Nginx authorization and authentication mechanisms, ACL modules, etc., in addition to providing the OpenResty environment on which custom/complex security controls can be built.

For more information on integrating Kong into custom Nginx configurations, see Custom Nginx configuration & embedding Kong.

Role Based Access Control

Kong Gateway users can configure role-based access control to secure access to the Admin API. RBAC allows for fine-grained control over resource access based on a model of user roles and permissions. Users are assigned to one or more roles, which each in turn possess one or more permissions granting or denying access to a particular resource. In this way, fine-grained control over specific Admin API resources can be enforced, while scaling to allow complex, case-specific uses.
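The user–role–permission model described above can be sketched as follows. This is an illustrative data model only, not Kong’s actual RBAC schema or Admin API; the role names, endpoints, and deny-wins evaluation are assumptions for the sketch:

```python
# roles map to permissions; each permission grants or denies an HTTP
# method (or "*" wildcard) on an endpoint (illustrative, not Kong's schema)
ROLES = {
    "read-only": [("allow", "GET", "/services")],
    "admin":     [("allow", "*", "/services"), ("deny", "DELETE", "/consumers")],
}
USERS = {"alice": ["read-only"], "bob": ["admin"]}

def is_allowed(user: str, method: str, endpoint: str) -> bool:
    """Deny wins over allow; anything not explicitly allowed is denied."""
    allowed = False
    for role in USERS.get(user, []):
        for effect, m, ep in ROLES[role]:
            if ep == endpoint and m in ("*", method):
                if effect == "deny":
                    return False
                allowed = True
    return allowed

print(is_allowed("alice", "GET", "/services"))     # True
print(is_allowed("alice", "DELETE", "/services"))  # False
print(is_allowed("bob", "DELETE", "/consumers"))   # False
```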