DNS

Kuma ships with a DNS resolver that provides service naming: a mapping of hostnames to the Virtual IPs (VIPs) of services registered in Kuma.

Kuma DNS is only relevant when transparent proxying is used.

How it works

The Kuma DNS server responds to type A and AAAA DNS requests and answers with A or AAAA records, for example redis.mesh. 60 IN A 240.0.0.100 or redis.mesh. 60 IN AAAA fd00:fd00::100.
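For example, a lookup from inside a Kuma-enabled workload might look like the following. The redis service name and the returned VIPs are illustrative values taken from the example above; the actual VIP depends on the allocation described below:

  # query the mesh name from inside a workload with a kuma-dp sidecar
  dig +short redis.mesh A
  # 240.0.0.100
  dig +short redis.mesh AAAA
  # fd00:fd00::100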

The virtual IPs are allocated by the control plane from the configured CIDR (240.0.0.0/4 by default) by constantly scanning the services available in all Kuma meshes. When a service is removed, its VIP is also freed, and Kuma DNS no longer responds for it with A and AAAA records. Virtual IPs are stable (replicated) between instances of the control plane and data plane proxies.

Once a new VIP is allocated or an old VIP is freed, the control plane configures the data plane proxy with this change.

All name lookups are handled locally by the data plane proxy, not by the control plane. This approach allows for more robust handling of name resolution. For example, when the control plane is down, a data plane proxy can still resolve DNS.

The data plane proxy DNS consists of:

  • an Envoy DNS filter that provides responses from the mesh for DNS records
  • a CoreDNS instance, launched by kuma-dp, that relays requests between the Envoy DNS filter and the original host DNS
  • iptables rules that redirect the original DNS traffic to the local CoreDNS instance (a sketch follows this list)
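As an illustration only, the redirection installed by the transparent proxy is conceptually similar to the rule below. The exact chains, ports, and rule layout that kuma-dp generates differ between versions; the listener port 15053 is an assumption used here for the sketch:

  # conceptual sketch: redirect outgoing DNS (UDP port 53) to the local CoreDNS instance
  # 15053 is assumed here as the local DNS listener port
  iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports 15053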

As the DNS requests are sent to the Envoy DNS filter first, any DNS name that exists inside the mesh will always resolve to the mesh address. In practice this means that DNS names present in the mesh “shadow” equivalent names that exist outside the mesh.

Kuma DNS is not a service discovery mechanism: it does not return the real IP addresses of service instances. Instead, it always returns the single VIP assigned to the relevant service in the mesh. This makes for a unified view of all services within a single zone or across multiple zones.

The default TTL is 60 seconds, to ensure the client synchronizes with Kuma DNS and to account for any intervening changes.

Installation

Kuma DNS is enabled by default whenever the kuma-dp sidecar proxy is injected.

Follow the instructions in transparent proxying.
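On Universal, for example, DNS redirection is typically enabled as part of the transparent proxy setup. The command below is a hedged sketch; the flags shown may vary by Kuma version, so check the transparent proxying documentation for the ones that apply to you:

  # run as root on the host, before starting kuma-dp
  kumactl install transparent-proxy \
    --kuma-dp-user kuma-dp \
    --redirect-dns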

Special considerations

This mode implements advanced networking techniques, so take special care in the following cases:

Overriding the CoreDNS configuration

In some cases it might be useful to customize the default CoreDNS configuration.

Kuma supports overriding the CoreDNS configuration from the control plane on both Kubernetes and Universal installations; on Universal, Kuma also supports overriding from individual data planes. When overriding from the control plane, all the data planes in the mesh will use the overridden DNS configuration.

On Kubernetes, only overriding from the control plane is supported. To override, configure the bootstrap server in kuma-cp:

  bootstrapServer:
    corefileTemplatePath: "/path/to/mounted-corefile-template" # ENV: KUMA_BOOTSTRAP_SERVER_PARAMS_COREFILE_TEMPLATE_PATH

You’ll also need to mount the DNS configuration template file into the control plane by adding an extra ConfigMap. Here are the steps:

Create a configmap in the namespace in which the control plane is installed:

  # create the namespace if it does not exist
  kubectl create namespace kuma-system
  # create the configmap, make sure the file exists on disk
  kubectl create --namespace kuma-system configmap corefile-template \
    --from-file corefile-template=/path/to/corefile-template-on-disk

Point to this configmap when installing Kuma:

With kumactl:

  kumactl install control-plane \
    --env-var "KUMA_BOOTSTRAP_SERVER_PARAMS_COREFILE_TEMPLATE_PATH=/path/to/mounted-corefile-template" \
    --set "controlPlane.extraConfigMaps[0].name=corefile-template" \
    --set "controlPlane.extraConfigMaps[0].mountPath=/path/to/mounted-corefile-template/corefile-template" \
    | kubectl apply -f -

With Helm:

  helm install --namespace kuma-system \
    --set "controlPlane.envVars.KUMA_BOOTSTRAP_SERVER_PARAMS_COREFILE_TEMPLATE_PATH=/path/to/mounted-corefile-template" \
    --set "controlPlane.extraConfigMaps[0].name=corefile-template" \
    --set "controlPlane.extraConfigMaps[0].mountPath=/path/to/mounted-corefile-template/corefile-template" \
    kuma kuma/kuma

On Universal, both overriding from the control plane and from data planes are supported.

To override DNS configuration from the control plane, you can configure the bootstrap server in kuma-cp:

  bootstrapServer:
    corefileTemplatePath: "/path/to/mounted-corefile-template" # ENV: KUMA_BOOTSTRAP_SERVER_PARAMS_COREFILE_TEMPLATE_PATH

Make sure the file exists at that path on disk.

To override the DNS configuration from a data plane, pass --dns-coredns-config-template-path as an argument to kuma-dp. When the data plane connects to a control plane that also has its DNS configuration overridden, the configuration overridden from the data plane takes precedence.
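As a hedged sketch, the flag might be passed alongside the usual kuma-dp run arguments like this; the control plane address, dataplane file, and token path below are illustrative placeholders:

  kuma-dp run \
    --cp-address=https://kuma-cp.example.com:5678 \
    --dataplane-file=dataplane.yaml \
    --dataplane-token-file=/path/to/token \
    --dns-coredns-config-template-path=/path/to/corefile-template-on-disk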

In either case, you’ll need to prepare a DNS configuration file to be used for the override. This file is a CoreDNS configuration that is processed as a Go template.

Base your edits on the existing default configuration. For example, you can use the following configuration so that the DNS server does not respond with errors to IPv6 queries when your cluster has IPv6 disabled:

  .:{{ .CoreDNSPort }} {
      # add a plugin to return NOERROR for IPv6 queries
      template IN AAAA . {
          rcode NOERROR
          fallthrough
      }
      forward . 127.0.0.1:{{ .EnvoyDNSPort }}
      # We want all requests to be sent to the Envoy DNS Filter, unsuccessful responses should be forwarded to the original DNS server.
      # For example: requests other than A, AAAA and SRV will return NOTIMP when hitting the envoy filter and should be sent to the original DNS server.
      # Codes from: https://github.com/miekg/dns/blob/master/msg.go#L138
      alternate NOTIMP,FORMERR,NXDOMAIN,SERVFAIL,REFUSED . /etc/resolv.conf
      prometheus localhost:{{ .PrometheusPort }}
      errors
  }
  .:{{ .CoreDNSEmptyPort }} {
      template ANY ANY . {
          rcode NXDOMAIN
      }
  }

Configuration

You can configure Kuma DNS in kuma-cp:

  dnsServer:
    CIDR: "240.0.0.0/4" # ENV: KUMA_DNS_SERVER_CIDR
    domain: "mesh" # ENV: KUMA_DNS_SERVER_DOMAIN
    serviceVipEnabled: true # ENV: KUMA_DNS_SERVER_SERVICE_VIP_ENABLED

The CIDR field sets the IP range of virtual IPs. The default 240.0.0.0/4 is reserved for future IPv4 use and is guaranteed to be non-routable. We strongly recommend not changing this value unless you have a specific need for a different IP range.

The domain field specifies the default .mesh DNS zone that Kuma DNS provides resolution for. It’s only relevant when serviceVipEnabled is set to true.
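For illustration only, changing the domain would make Kuma DNS serve the generated names under the new zone instead of .mesh. The zone name below is hypothetical:

  dnsServer:
    domain: "acme" # hypothetical zone; service names then resolve as <kuma.io/service>.acme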

The serviceVipEnabled field defines whether a VIP should be generated for each kuma.io/service. This can be disabled for performance reasons; virtual-outbound provides a more flexible way to achieve the same result.

Usage

Consuming a service handled by Kuma DNS, whether from a Kuma-enabled Pod on Kubernetes or a VM with kuma-dp, is based on the automatically generated kuma.io/service tag. The resulting domain name has the format {service tag}.mesh. For example:

  <kuma-enabled-pod>$ curl http://echo-server_echo-example_svc_1010.mesh:80
  <kuma-enabled-pod>$ curl http://echo-server_echo-example_svc_1010.mesh

You can also use a DNS RFC1035 compliant name by replacing the underscores in the service name with dots. For example:

  <kuma-enabled-pod>$ curl http://echo-server.echo-example.svc.1010.mesh:80
  <kuma-enabled-pod>$ curl http://echo-server.echo-example.svc.1010.mesh

The listeners created on the VIP default to port 80, so the port can be omitted with a standard HTTP client.

Kuma DNS allocates a VIP for every service within a mesh. Then, it creates an outbound virtual listener for every VIP. If you inspect the result of curl localhost:9901/config_dump, you can see something similar to:

  {
    "name": "outbound:240.0.0.1:80",
    "active_state": {
      "version_info": "51adf4e6-287e-491a-9ae2-e6eeaec4e982",
      "listener": {
        "@type": "type.googleapis.com/envoy.api.v2.Listener",
        "name": "outbound:240.0.0.1:80",
        "address": {
          "socket_address": {
            "address": "240.0.0.1",
            "port_value": 80
          }
        },
        "filter_chains": [
          {
            "filters": [
              {
                "name": "envoy.filters.network.tcp_proxy",
                "typed_config": {
                  "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
                  "stat_prefix": "echo-server_kuma-test_svc_80",
                  "cluster": "echo-server_kuma-test_svc_80"
                }
              }
            ]
          }
        ],
        "deprecated_v1": {
          "bind_to_port": false
        },
        "traffic_direction": "OUTBOUND"
      },
      "last_updated": "2020-07-06T14:32:59.732Z"
    }
  }

The setup described above works when serviceVipEnabled=true, which is the default value.

The preferred way to define hostnames is using Virtual Outbounds. Virtual Outbounds also make it possible to define dynamic hostnames using specific tags, or to expose services on a different port.
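As a hedged illustration, a VirtualOutbound policy along these lines (Universal format) could reproduce the default {service}.mesh naming; the policy name, selector, and template values are examples, so check the virtual-outbound documentation for the exact schema in your version:

  type: VirtualOutbound
  mesh: default
  name: default-mesh-hostnames # example policy name
  selectors:
    - match:
        kuma.io/service: "*"
  conf:
    host: "{{.service}}.mesh" # hostname template built from the tag below
    port: "80"
    parameters:
      - name: "service"
        tagKey: "kuma.io/service"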