DNS Proxying

In addition to capturing application traffic, Istio can also capture DNS requests to improve the performance and usability of your mesh. When proxying DNS, all DNS requests from an application will be redirected to the sidecar, which stores a local mapping of domain names to IP addresses. If the request can be handled by the sidecar, it will directly return a response to the application, avoiding a roundtrip to the upstream DNS server. Otherwise, the request is forwarded upstream following the standard /etc/resolv.conf DNS configuration.

While Kubernetes provides DNS resolution for Kubernetes Services out of the box, hosts defined in custom ServiceEntry resources are not recognized. With this feature, ServiceEntry addresses can be resolved without requiring custom configuration of a DNS server. For Kubernetes Services, the DNS response will be the same, but with reduced load on kube-dns and improved performance.

This functionality is also available for services running outside of Kubernetes. This means that all internal services can be resolved without clunky workarounds to expose Kubernetes DNS entries outside of the cluster.
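For example, a VM that has been added to the mesh can reach an in-cluster service by its cluster-local name directly. This is a sketch; httpbin.default is a hypothetical in-cluster service and port:

  # Run on a mesh-enabled VM: the name resolves through the Istio agent's DNS proxy,
  # without exposing cluster DNS outside of the cluster.
  $ curl -sS http://httpbin.default.svc.cluster.local:8000/headers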

Getting started

This feature is not currently enabled by default. To enable it, install Istio with the following settings:

  $ cat <<EOF | istioctl install -y -f -
  apiVersion: install.istio.io/v1alpha1
  kind: IstioOperator
  spec:
    meshConfig:
      defaultConfig:
        proxyMetadata:
          # Enable basic DNS proxying
          ISTIO_META_DNS_CAPTURE: "true"
          # Enable automatic address allocation, optional
          ISTIO_META_DNS_AUTO_ALLOCATE: "true"
  EOF

This can also be enabled on a per-pod basis with the proxy.istio.io/config annotation:

  kind: Deployment
  metadata:
    name: curl
  spec:
  ...
    template:
      metadata:
        annotations:
          proxy.istio.io/config: |
            proxyMetadata:
              ISTIO_META_DNS_CAPTURE: "true"
              ISTIO_META_DNS_AUTO_ALLOCATE: "true"
  ...

When deploying to a VM using istioctl workload entry configure, basic DNS proxying will be enabled by default.
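For reference, this is the kind of invocation used during VM onboarding; the WorkloadGroup file name, output directory, and cluster ID below are placeholders, and in some releases the command still lives under istioctl x:

  # Generate the VM bootstrap files from a WorkloadGroup definition;
  # DNS proxying is enabled by default in the generated proxy configuration.
  $ istioctl x workload entry configure -f workloadgroup.yaml -o vm-files --clusterID "Kubernetes"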

DNS capture in action

To try out the DNS capture, first set up a ServiceEntry for some external service:

  $ kubectl apply -f - <<EOF
  apiVersion: networking.istio.io/v1
  kind: ServiceEntry
  metadata:
    name: external-address
  spec:
    addresses:
    - 198.51.100.1
    hosts:
    - address.internal
    ports:
    - name: http
      number: 80
      protocol: HTTP
  EOF

Bring up a client application to initiate the DNS request:


  $ kubectl label namespace default istio-injection=enabled --overwrite
  $ kubectl apply -f samples/curl/curl.yaml

Without the DNS capture, a request to address.internal would likely fail to resolve. Once this is enabled, you should instead get a response back based on the configured address:

  $ kubectl exec deploy/curl -- curl -sS -v address.internal
  * Trying 198.51.100.1:80...

Address auto allocation

In the above example, you had a predefined IP address for the service to which you sent the request. However, it’s common to access external services that do not have stable addresses, and instead rely on DNS. In this case, the DNS proxy will not have enough information to return a response, and will need to forward DNS requests upstream.

This is especially problematic with TCP traffic. Unlike HTTP requests, which are routed based on Host headers, TCP carries much less information: you can only route on the destination IP address and port number. Because you don’t have a stable IP for the backend, you cannot route based on that either, leaving only the port number, which leads to conflicts when multiple ServiceEntry resources for TCP services share the same port. Refer to the following section for more details.

To work around these issues, the DNS proxy additionally supports automatically allocating addresses for ServiceEntry resources that do not explicitly define one. This is configured by the ISTIO_META_DNS_AUTO_ALLOCATE option.

Please see DNS Auto Allocation V2 below for an enhanced implementation of auto allocation, supported by Istio from 1.23 onwards. DNS Auto Allocation V2 is recommended for sidecar mode and required for ambient mode.

When this feature is enabled, the DNS response will include a distinct and automatically assigned address for each ServiceEntry. The proxy is then configured to match requests to this IP address and forward them to the corresponding ServiceEntry. When using ISTIO_META_DNS_AUTO_ALLOCATE, Istio will automatically allocate non-routable VIPs (from the Class E subnet) to such services, as long as they do not use a wildcard host. The Istio agent on the sidecar will use these VIPs as responses to DNS lookups from the application, and Envoy can then clearly distinguish traffic bound for each external TCP service and forward it to the right target. For more information, see the Istio blog post about smart DNS proxying.

Because this feature modifies DNS responses, it may not be compatible with all applications.

To try this out, configure another ServiceEntry:

  $ kubectl apply -f - <<EOF
  apiVersion: networking.istio.io/v1
  kind: ServiceEntry
  metadata:
    name: external-auto
  spec:
    hosts:
    - auto.internal
    ports:
    - name: http
      number: 80
      protocol: HTTP
    resolution: DNS
  EOF

Now, send a request:

  $ kubectl exec deploy/curl -- curl -sS -v auto.internal
  * Trying 240.240.0.1:80...

As you can see, the request is sent to an automatically allocated address, 240.240.0.1. These addresses will be picked from the 240.240.0.0/16 reserved IP address range to avoid conflicting with real services.

External TCP services without VIPs

By default, Istio has a limitation when routing external TCP traffic, because it is unable to distinguish between multiple TCP services on the same port. This limitation is particularly apparent when using third-party databases such as AWS Relational Database Service, or any database setup with geographical redundancy. Similar, but distinct, external TCP services cannot be handled separately by default. For the sidecar to distinguish traffic between two different TCP services that are outside of the mesh, the services must be on different ports or they need to have globally unique VIPs.

For example, if you have two external database services, mysql-instance1 and mysql-instance2, and you create service entries for both, client sidecars will still have a single listener on 0.0.0.0:{port} that looks up the IP address of only mysql-instance1, from public DNS servers, and forwards traffic to it. It cannot route traffic to mysql-instance2 because it has no way of distinguishing whether traffic arriving at 0.0.0.0:{port} is bound for mysql-instance1 or mysql-instance2.
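To make this concrete, here is a sketch of two such ServiceEntry resources (the hostnames are hypothetical); without address auto allocation, both collapse onto a single 0.0.0.0:3306 listener, so only one of them can actually receive traffic:

  apiVersion: networking.istio.io/v1
  kind: ServiceEntry
  metadata:
    name: mysql-instance1
  spec:
    hosts:
    - mysql-instance1.example.com
    ports:
    - name: tcp
      number: 3306
      protocol: TCP
    resolution: DNS
  ---
  apiVersion: networking.istio.io/v1
  kind: ServiceEntry
  metadata:
    name: mysql-instance2
  spec:
    hosts:
    - mysql-instance2.example.com
    ports:
    - name: tcp
      number: 3306
      protocol: TCP
    resolution: DNS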

The following example shows how DNS proxying can be used to solve this problem. A virtual IP address will be assigned to every service entry so that client sidecars can clearly distinguish traffic bound for each external TCP service.

  1. Update the Istio configuration specified in the Getting Started section to also configure discoverySelectors that restrict the mesh to namespaces with istio-injection enabled. This will let us use any other namespaces in the cluster to run TCP services outside of the mesh.

     $ cat <<EOF | istioctl install -y -f -
     apiVersion: install.istio.io/v1alpha1
     kind: IstioOperator
     spec:
       meshConfig:
         defaultConfig:
           proxyMetadata:
             # Enable basic DNS proxying
             ISTIO_META_DNS_CAPTURE: "true"
             # Enable automatic address allocation, optional
             ISTIO_META_DNS_AUTO_ALLOCATE: "true"
         # discoverySelectors configuration below is just used for simulating the external service TCP scenario,
         # so that we do not have to use an external site for testing.
         discoverySelectors:
         - matchLabels:
             istio-injection: enabled
     EOF
  2. Deploy the first external sample TCP application:

     $ kubectl create ns external-1
     $ kubectl -n external-1 apply -f samples/tcp-echo/tcp-echo.yaml
  3. Deploy the second external sample TCP application:

     $ kubectl create ns external-2
     $ kubectl -n external-2 apply -f samples/tcp-echo/tcp-echo.yaml
  4. Configure ServiceEntry to reach external services:

     $ kubectl apply -f - <<EOF
     apiVersion: networking.istio.io/v1
     kind: ServiceEntry
     metadata:
       name: external-svc-1
     spec:
       hosts:
       - tcp-echo.external-1.svc.cluster.local
       ports:
       - name: external-svc-1
         number: 9000
         protocol: TCP
       resolution: DNS
     ---
     apiVersion: networking.istio.io/v1
     kind: ServiceEntry
     metadata:
       name: external-svc-2
     spec:
       hosts:
       - tcp-echo.external-2.svc.cluster.local
       ports:
       - name: external-svc-2
         number: 9000
         protocol: TCP
       resolution: DNS
     EOF
  5. Verify listeners are configured separately for each service at the client side:

     $ istioctl pc listener deploy/curl | grep tcp-echo | awk '{printf "ADDRESS=%s, DESTINATION=%s %s\n", $1, $4, $5}'
     ADDRESS=240.240.105.94, DESTINATION=Cluster: outbound|9000||tcp-echo.external-2.svc.cluster.local
     ADDRESS=240.240.69.138, DESTINATION=Cluster: outbound|9000||tcp-echo.external-1.svc.cluster.local
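Optionally, you can also confirm that each cluster has resolved its own backend by inspecting the endpoints Envoy has discovered for the two hosts. This is a sketch; the cluster IP addresses shown are illustrative and will differ in your cluster:

  $ istioctl pc endpoint deploy/curl | grep tcp-echo
  10.96.21.74:9000     HEALTHY     OK     outbound|9000||tcp-echo.external-1.svc.cluster.local
  10.96.148.33:9000    HEALTHY     OK     outbound|9000||tcp-echo.external-2.svc.cluster.local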

DNS Auto Allocation V2

Istio now offers an enhanced implementation of DNS auto allocation. To use it, replace the MeshConfig proxyMetadata flag ISTIO_META_DNS_AUTO_ALLOCATE, which was used in the previous examples, with the pilot environment variable PILOT_ENABLE_IP_AUTOALLOCATE when installing Istio. All of the examples given so far will continue to work as-is.

  $ cat <<EOF | istioctl install -y -f -
  apiVersion: install.istio.io/v1alpha1
  kind: IstioOperator
  spec:
    values:
      pilot:
        env:
          # Enable automatic address allocation, optional
          PILOT_ENABLE_IP_AUTOALLOCATE: "true"
    meshConfig:
      defaultConfig:
        proxyMetadata:
          # Enable basic DNS proxying
          ISTIO_META_DNS_CAPTURE: "true"
      # discoverySelectors configuration below is just used for simulating the external service TCP scenario,
      # so that we do not have to use an external site for testing.
      discoverySelectors:
      - matchLabels:
          istio-injection: enabled
  EOF
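One convenience of the V2 implementation is that the allocated addresses are recorded in the status of each ServiceEntry, so you can inspect them directly. This is a sketch; the exact value will differ, and an IPv6 address may also be listed:

  $ kubectl get serviceentry external-auto -o jsonpath='{.status.addresses}'
  [{"host":"auto.internal","value":"240.240.0.1"}]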

You can also configure this behavior more granularly by adding the label networking.istio.io/enable-autoallocate-ip="true/false" to a ServiceEntry. This label controls whether a ServiceEntry without any spec.addresses set should have an IP address automatically allocated for it.

To try this out, update the existing ServiceEntry with the opt-out label:

  $ kubectl apply -f - <<EOF
  apiVersion: networking.istio.io/v1
  kind: ServiceEntry
  metadata:
    name: external-auto
    labels:
      networking.istio.io/enable-autoallocate-ip: "false"
  spec:
    hosts:
    - auto.internal
    ports:
    - name: http
      number: 80
      protocol: HTTP
    resolution: DNS
  EOF

Now, send a request and verify that the auto allocation is no longer happening:

  $ kubectl exec deploy/curl -- curl -sS -v auto.internal
  * Could not resolve host: auto.internal
  * shutting down connection #0

Cleanup


  $ kubectl -n external-1 delete -f samples/tcp-echo/tcp-echo.yaml
  $ kubectl -n external-2 delete -f samples/tcp-echo/tcp-echo.yaml
  $ kubectl delete -f samples/curl/curl.yaml
  $ istioctl uninstall --purge -y
  $ kubectl delete ns istio-system external-1 external-2
  $ kubectl label namespace default istio-injection-