DNS Proxying

In addition to capturing application traffic, Istio can also capture DNS requests to improve the performance and usability of your mesh. When proxying DNS, all DNS requests from an application will be redirected to the sidecar, which stores a local mapping of domain names to IP addresses. If the request can be handled by the sidecar, it will directly return a response to the application, avoiding a roundtrip to the upstream DNS server. Otherwise, the request is forwarded upstream following the standard /etc/resolv.conf DNS configuration.

While Kubernetes provides DNS resolution for Kubernetes Services out of the box, hostnames defined in custom ServiceEntrys will not be recognized by that resolver. With this feature, ServiceEntry addresses can be resolved without requiring custom configuration of a DNS server. For Kubernetes Services, the DNS response will be the same, but with reduced load on kube-dns and increased performance.

This functionality is also available for services running outside of Kubernetes. This means that all internal services can be resolved without clunky workarounds to expose Kubernetes DNS entries outside of the cluster.

Getting started

This feature is not currently enabled by default. To enable it, install Istio with the following settings:

  $ cat <<EOF | istioctl install -y -f -
  apiVersion: install.istio.io/v1alpha1
  kind: IstioOperator
  spec:
    meshConfig:
      defaultConfig:
        proxyMetadata:
          # Enable basic DNS proxying
          ISTIO_META_DNS_CAPTURE: "true"
          # Enable automatic address allocation, optional
          ISTIO_META_DNS_AUTO_ALLOCATE: "true"
  EOF

This can also be enabled on a per-pod basis with the proxy.istio.io/config annotation.
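For example, DNS proxying could be enabled for a single workload with an annotation along these lines (the pod name and image here are illustrative, not from the original post):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: curl-client   # hypothetical example pod
  annotations:
    # Per-pod proxy configuration overriding the mesh-wide defaults
    proxy.istio.io/config: |
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
spec:
  containers:
  - name: curl
    image: curlimages/curl
    command: ["sleep", "infinity"]
```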

When deploying to a VM using istioctl workload entry configure, basic DNS proxying will be enabled by default.

DNS capture in action

To try out the DNS capture, first set up a ServiceEntry for some external service:

  apiVersion: networking.istio.io/v1beta1
  kind: ServiceEntry
  metadata:
    name: external-address
  spec:
    addresses:
    - 198.51.100.1
    hosts:
    - address.internal
    ports:
    - name: http
      number: 80
      protocol: HTTP

Without the DNS capture, a request to address.internal would likely fail to resolve. Once this is enabled, you should instead get a response back based on the configured address:

  $ curl -v address.internal
  *   Trying 198.51.100.1:80...

Address auto allocation

In the above example, you had a predefined IP address for the service to which you sent the request. However, it’s common to access external services that do not have stable addresses, and instead rely on DNS. In this case, the DNS proxy will not have enough information to return a response, and will need to forward DNS requests upstream.

This is especially problematic with TCP traffic. Unlike HTTP requests, which are routed based on Host headers, TCP carries much less information; you can only route on the destination IP and port number. Because you don’t have a stable IP for the backend, you cannot route based on that either, leaving only port number, which leads to conflicts when multiple ServiceEntrys for TCP services share the same port.
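As a sketch of the conflict, consider two address-less TCP ServiceEntrys like the following (the hostnames are hypothetical). Both can only be matched on port 9000, so the proxy has no way to tell their traffic apart:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: tcp-svc-1   # hypothetical
spec:
  hosts:
  - tcp-1.internal
  ports:
  - name: tcp
    number: 9000   # same port as tcp-svc-2: routing is ambiguous
    protocol: TCP
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: tcp-svc-2   # hypothetical
spec:
  hosts:
  - tcp-2.internal
  ports:
  - name: tcp
    number: 9000
    protocol: TCP
```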

To work around these issues, the DNS proxy additionally supports automatically allocating addresses for ServiceEntrys that do not explicitly define one. This is configured by the ISTIO_META_DNS_AUTO_ALLOCATE option.

When this feature is enabled, the DNS response will include a distinct and automatically assigned address for each ServiceEntry. The proxy is then configured to match requests to this IP address, and forward the request to the corresponding ServiceEntry.

Because this feature modifies DNS responses, it may not be compatible with all applications.

To try this out, configure another ServiceEntry:

  apiVersion: networking.istio.io/v1beta1
  kind: ServiceEntry
  metadata:
    name: external-auto
  spec:
    hosts:
    - auto.internal
    ports:
    - name: http
      number: 80
      protocol: HTTP
    resolution: STATIC
    endpoints:
    - address: 198.51.100.2

Now, send a request:

  $ curl -v auto.internal
  *   Trying 240.240.0.1:80...

As you can see, the request is sent to an automatically allocated address, 240.240.0.1. These addresses will be picked from the 240.240.0.0/16 reserved IP address range to avoid conflicting with real services.
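The allocation behavior above can be sketched in a few lines of Python. This only mimics the described behavior of handing out a distinct virtual IP per hostname from 240.240.0.0/16; Istio's actual allocator has its own ordering and bookkeeping:

```python
import ipaddress

def allocate_vips(hostnames):
    """Assign each ServiceEntry hostname a distinct virtual IP from the
    reserved 240.240.0.0/16 range, mimicking the behavior described above.
    Illustrative sketch only; not Istio's real implementation."""
    vips = ipaddress.ip_network("240.240.0.0/16").hosts()
    return {host: str(next(vips)) for host in hostnames}

# Each hostname gets its own address, so even TCP traffic
# can be routed to the right ServiceEntry by destination IP.
print(allocate_vips(["auto.internal", "other.internal"]))
```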

See also

Expanding into New Frontiers - Smart DNS Proxying in Istio

Workload Local DNS resolution to simplify VM integration, multicluster, and more.

Direct encrypted traffic from IBM Cloud Kubernetes Service Ingress to Istio Ingress Gateway

Configure the IBM Cloud Kubernetes Service Application Load Balancer to direct traffic to the Istio Ingress gateway with mutual TLS.

Multicluster Istio configuration and service discovery using Admiral

Automating Istio configuration for Istio deployments (clusters) that work as a single mesh.

Istio as a Proxy for External Services

Configure Istio ingress gateway to act as a proxy for external services.

Multi-Mesh Deployments for Isolation and Boundary Protection

Deploy environments that require isolation into separate meshes and enable inter-mesh communication by mesh federation.

Secure Control of Egress Traffic in Istio, part 3

Comparison of alternative solutions to control egress traffic including performance considerations.