Egress Gateways

This example does not work in Minikube.

The Accessing External Services task shows how to configure Istio to allow access to external HTTP and HTTPS services from applications inside the mesh. There, the external services are called directly from the client sidecar. This example also shows how to configure Istio to call external services, although this time indirectly via a dedicated egress gateway service.

Istio uses ingress and egress gateways to configure load balancers executing at the edge of a service mesh. An ingress gateway allows you to define entry points into the mesh that all incoming traffic flows through. An egress gateway is the symmetrical concept: it defines exit points from the mesh. Egress gateways let you apply Istio features, for example, monitoring and route rules, to traffic exiting the mesh.

Use case

Consider an organization that has a strict security requirement that all traffic leaving the service mesh must flow through a set of dedicated nodes. These nodes will run on dedicated machines, separated from the rest of the nodes running applications in the cluster. These special nodes will be used to enforce policy on the egress traffic and will be monitored more thoroughly than other nodes.

Another use case is a cluster where the application nodes don’t have public IPs, so the in-mesh services that run on them cannot access the Internet. Defining an egress gateway, directing all the egress traffic through it, and allocating public IPs to the egress gateway nodes allows the application nodes to access external services in a controlled way.

Istio supports the Kubernetes Gateway API and intends to make it the default API for traffic management in the future. The following instructions allow you to choose to use either the Gateway API or the Istio configuration API when configuring traffic management in the mesh. Follow instructions under either the Gateway API or Istio APIs tab, according to your preference.

This document configures Istio using Gateway API features that are experimental. Before using the Gateway API instructions, make sure to:

  1. Install the experimental version of the Gateway API CRDs:

    $ kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd/experimental?ref=v1.1.0" | kubectl apply -f -

  2. Configure Istio to read the alpha Gateway API resources by setting the PILOT_ENABLE_ALPHA_GATEWAY_API environment variable to true when installing Istio:

    $ istioctl install --set values.pilot.env.PILOT_ENABLE_ALPHA_GATEWAY_API=true --set profile=minimal -y

Before you begin

  • Set up Istio by following the instructions in the Installation guide.

    The egress gateway and access logging will be enabled if you install the demo configuration profile.

  • Deploy the sleep sample app to use as a test source for sending requests.

    $ kubectl apply -f samples/sleep/sleep.yaml

    You can use any pod with curl installed as a test source.

  • Set the SOURCE_POD environment variable to the name of your source pod:

    $ export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})

    The instructions in this task create a destination rule for the egress gateway in the default namespace and assume that the client, SOURCE_POD, is also running in the default namespace. If not, the destination rule will not be found on the destination rule lookup path and the client requests will fail.
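
    The jsonpath lookup above returns an empty string when no sleep pod is running yet, which makes later curl commands fail confusingly. A hedged sketch of a pure-shell guard (no cluster access needed) you can run right after the export:

    ```shell
    # Hedged sketch: fail fast if SOURCE_POD resolved to nothing, e.g. because
    # the sleep pod is not yet running in the expected namespace.
    if [ -z "${SOURCE_POD:-}" ]; then
      echo "SOURCE_POD is empty; is the sleep pod running?" >&2
    else
      echo "Using source pod: ${SOURCE_POD}"
    fi
    ```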

  • Enable Envoy’s access logging if not already enabled. For example, using istioctl:

    $ istioctl install <flags-you-used-to-install-Istio> --set meshConfig.accessLogFile=/dev/stdout

Deploy Istio egress gateway

Egress gateways are deployed automatically when using Gateway API to configure them. You can skip this section if you are using the Gateway API instructions in the following sections.

  1. Check if the Istio egress gateway is deployed:

    $ kubectl get pod -l istio=egressgateway -n istio-system

    If no pods are returned, deploy the Istio egress gateway by performing the following step.

  2. If you used an IstioOperator CR to install Istio, add the following fields to your configuration:

    spec:
      components:
        egressGateways:
        - name: istio-egressgateway
          enabled: true

    Otherwise, add the equivalent settings to your original istioctl install command, for example:

    $ istioctl install <flags-you-used-to-install-Istio> \
        --set "components.egressGateways[0].name=istio-egressgateway" \
        --set "components.egressGateways[0].enabled=true"

Egress gateway for HTTP traffic

First create a ServiceEntry to allow direct traffic to an external service.

  1. Define a ServiceEntry for edition.cnn.com.

    DNS resolution must be used in the service entry below. If the resolution is NONE, the gateway will direct the traffic to itself in an infinite loop. This is because the gateway receives a request with the original destination IP address which is equal to the service IP of the gateway (since the request is directed by sidecar proxies to the gateway).

    With DNS resolution, the gateway performs a DNS query to get an IP address of the external service and directs the traffic to that IP address.

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1
    kind: ServiceEntry
    metadata:
      name: cnn
    spec:
      hosts:
      - edition.cnn.com
      ports:
      - number: 80
        name: http-port
        protocol: HTTP
      - number: 443
        name: https
        protocol: HTTPS
      resolution: DNS
    EOF
  2. Verify that your ServiceEntry was applied correctly by sending an HTTP request to http://edition.cnn.com/politics.

    $ kubectl exec "$SOURCE_POD" -c sleep -- curl -sSL -o /dev/null -D - http://edition.cnn.com/politics
    ...
    HTTP/1.1 301 Moved Permanently
    ...
    location: https://edition.cnn.com/politics
    ...
    HTTP/2 200
    Content-Type: text/html; charset=utf-8
    ...

    The output should be the same as in the TLS Origination for Egress Traffic example, without TLS origination.

  3. Create a Gateway for egress traffic to edition.cnn.com port 80.

To direct multiple hosts through an egress gateway, you can include a list of hosts, or use * to match all, in the Gateway. The subset field in the DestinationRule should be reused for the additional hosts.

  Istio APIs:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1
    kind: Gateway
    metadata:
      name: istio-egressgateway
    spec:
      selector:
        istio: egressgateway
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - edition.cnn.com
    ---
    apiVersion: networking.istio.io/v1
    kind: DestinationRule
    metadata:
      name: egressgateway-for-cnn
    spec:
      host: istio-egressgateway.istio-system.svc.cluster.local
      subsets:
      - name: cnn
    EOF


  Gateway API:

    $ kubectl apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: cnn-egress-gateway
      annotations:
        networking.istio.io/service-type: ClusterIP
    spec:
      gatewayClassName: istio
      listeners:
      - name: http
        hostname: edition.cnn.com
        port: 80
        protocol: HTTP
        allowedRoutes:
          namespaces:
            from: Same
    EOF

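
A single egress Gateway can serve more than one host, as the note above says. A hedged sketch of what the servers section might look like with an additional host (www.wikipedia.org here is purely illustrative and not part of this task):

```yaml
# Illustrative sketch only: each extra host also needs its own ServiceEntry
# and matching VirtualService (or HTTPRoute) rules.
servers:
- port:
    number: 80
    name: http
    protocol: HTTP
  hosts:
  - edition.cnn.com
  - www.wikipedia.org
```
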
  4. Configure route rules to direct traffic from the sidecars to the egress gateway and from the egress gateway to the external service:

  Istio APIs:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1
    kind: VirtualService
    metadata:
      name: direct-cnn-through-egress-gateway
    spec:
      hosts:
      - edition.cnn.com
      gateways:
      - istio-egressgateway
      - mesh
      http:
      - match:
        - gateways:
          - mesh
          port: 80
        route:
        - destination:
            host: istio-egressgateway.istio-system.svc.cluster.local
            subset: cnn
            port:
              number: 80
          weight: 100
      - match:
        - gateways:
          - istio-egressgateway
          port: 80
        route:
        - destination:
            host: edition.cnn.com
            port:
              number: 80
          weight: 100
    EOF


  Gateway API:

    $ kubectl apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: direct-cnn-to-egress-gateway
    spec:
      parentRefs:
      - kind: ServiceEntry
        group: networking.istio.io
        name: cnn
      rules:
      - backendRefs:
        - name: cnn-egress-gateway-istio
          port: 80
    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: forward-cnn-from-egress-gateway
    spec:
      parentRefs:
      - name: cnn-egress-gateway
      hostnames:
      - edition.cnn.com
      rules:
      - backendRefs:
        - kind: Hostname
          group: networking.istio.io
          name: edition.cnn.com
          port: 80
    EOF

  5. Resend the HTTP request to http://edition.cnn.com/politics.

    $ kubectl exec "$SOURCE_POD" -c sleep -- curl -sSL -o /dev/null -D - http://edition.cnn.com/politics
    ...
    HTTP/1.1 301 Moved Permanently
    ...
    location: https://edition.cnn.com/politics
    ...
    HTTP/2 200
    Content-Type: text/html; charset=utf-8
    ...

    The output should be the same as in step 2.

  6. Check the log of the egress gateway pod for a line corresponding to your request.

  Istio APIs:

  If Istio is deployed in the istio-system namespace, the command to print the log is:

    $ kubectl logs -l istio=egressgateway -c istio-proxy -n istio-system | tail

You should see a line similar to the following:

    [2019-09-03T20:57:49.103Z] "GET /politics HTTP/2" 301 - "-" "-" 0 0 90 89 "10.244.2.10" "curl/7.64.0" "ea379962-9b5c-4431-ab66-f01994f5a5a5" "edition.cnn.com" "151.101.65.67:80" outbound|80||edition.cnn.com - 10.244.1.5:80 10.244.2.10:50482 edition.cnn.com -
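
Envoy's default access-log format is whitespace-delimited, so a quick sanity check can be scripted. A hedged sketch (field positions assume the default log format; the sample line is copied from the output above) that extracts the method, path, and response code:

```shell
# Hedged sketch: pull method, path, and status from a default-format
# Envoy access-log line. In practice, pipe `kubectl logs ...` into the awk.
log_line='[2019-09-03T20:57:49.103Z] "GET /politics HTTP/2" 301 - "-" "-" 0 0 90 89 "10.244.2.10" "curl/7.64.0" "ea379962-9b5c-4431-ab66-f01994f5a5a5" "edition.cnn.com" "151.101.65.67:80" outbound|80||edition.cnn.com - 10.244.1.5:80 10.244.2.10:50482 edition.cnn.com -'

# After stripping quotes, field 2 is the method, 3 the path, 5 the status.
echo "$log_line" | awk '{gsub(/"/, ""); print $2, $3, $5}'
# → GET /politics 301
```

For example, `kubectl logs -l istio=egressgateway -n istio-system -c istio-proxy | awk '{gsub(/"/, ""); print $2, $3, $5}'` would list one method/path/status triple per request.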

If mutual TLS Authentication is enabled, and you have issues connecting to the egress gateway, run the following command to verify the certificate is correct:

    $ istioctl pc secret -n istio-system "$(kubectl get pod -l istio=egressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}')" -ojson | jq '[.dynamicActiveSecrets[] | select(.name == "default")][0].secret.tlsCertificate.certificateChain.inlineBytes' -r | base64 -d | openssl x509 -text -noout | grep 'Subject Alternative Name' -A 1
            X509v3 Subject Alternative Name: critical
                URI:spiffe://cluster.local/ns/istio-system/sa/istio-egressgateway-service-account

  Gateway API:

  Access the log corresponding to the egress gateway using the Istio-generated pod label:

    $ kubectl logs -l gateway.networking.k8s.io/gateway-name=cnn-egress-gateway -c istio-proxy | tail

You should see a line similar to the following:

    [2024-01-09T15:35:47.283Z] "GET /politics HTTP/1.1" 301 - via_upstream - "-" 0 0 2 2 "172.30.239.55" "curl/7.87.0-DEV" "6c01d65f-a157-97cd-8782-320a40026901" "edition.cnn.com" "151.101.195.5:80" outbound|80||edition.cnn.com 172.30.239.16:55636 172.30.239.16:80 172.30.239.55:59224 - default.forward-cnn-from-egress-gateway.0

If mutual TLS Authentication is enabled, and you have issues connecting to the egress gateway, run the following command to verify the certificate is correct:

    $ istioctl pc secret "$(kubectl get pod -l gateway.networking.k8s.io/gateway-name=cnn-egress-gateway -o jsonpath='{.items[0].metadata.name}')" -ojson | jq '[.dynamicActiveSecrets[] | select(.name == "default")][0].secret.tlsCertificate.certificateChain.inlineBytes' -r | base64 -d | openssl x509 -text -noout | grep 'Subject Alternative Name' -A 1
            X509v3 Subject Alternative Name: critical
                URI:spiffe://cluster.local/ns/default/sa/cnn-egress-gateway-istio

Note that you only redirected the HTTP traffic from port 80 through the egress gateway. The HTTPS traffic to port 443 went directly to edition.cnn.com.

Cleanup HTTP gateway

Remove the previous definitions before proceeding to the next step:

  Istio APIs:

    $ kubectl delete serviceentry cnn
    $ kubectl delete gateway istio-egressgateway
    $ kubectl delete virtualservice direct-cnn-through-egress-gateway
    $ kubectl delete destinationrule egressgateway-for-cnn

  Gateway API:

    $ kubectl delete serviceentry cnn
    $ kubectl delete gtw cnn-egress-gateway
    $ kubectl delete httproute direct-cnn-to-egress-gateway
    $ kubectl delete httproute forward-cnn-from-egress-gateway

Egress gateway for HTTPS traffic

In this section you direct HTTPS traffic (TLS originated by the application) through an egress gateway. You need to specify port 443 with protocol TLS in a corresponding ServiceEntry and egress Gateway.

  1. Define a ServiceEntry for edition.cnn.com:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1
    kind: ServiceEntry
    metadata:
      name: cnn
    spec:
      hosts:
      - edition.cnn.com
      ports:
      - number: 443
        name: tls
        protocol: TLS
      resolution: DNS
    EOF
  2. Verify that your ServiceEntry was applied correctly by sending an HTTPS request to https://edition.cnn.com/politics.

    $ kubectl exec "$SOURCE_POD" -c sleep -- curl -sSL -o /dev/null -D - https://edition.cnn.com/politics
    ...
    HTTP/2 200
    Content-Type: text/html; charset=utf-8
    ...
  3. Create an egress Gateway for edition.cnn.com and route rules to direct the traffic through the egress gateway and from the egress gateway to the external service.

To direct multiple hosts through an egress gateway, you can include a list of hosts, or use * to match all, in the Gateway. The subset field in the DestinationRule should be reused for the additional hosts.

  Istio APIs:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1
    kind: Gateway
    metadata:
      name: istio-egressgateway
    spec:
      selector:
        istio: egressgateway
      servers:
      - port:
          number: 443
          name: tls
          protocol: TLS
        hosts:
        - edition.cnn.com
        tls:
          mode: PASSTHROUGH
    ---
    apiVersion: networking.istio.io/v1
    kind: DestinationRule
    metadata:
      name: egressgateway-for-cnn
    spec:
      host: istio-egressgateway.istio-system.svc.cluster.local
      subsets:
      - name: cnn
    ---
    apiVersion: networking.istio.io/v1
    kind: VirtualService
    metadata:
      name: direct-cnn-through-egress-gateway
    spec:
      hosts:
      - edition.cnn.com
      gateways:
      - mesh
      - istio-egressgateway
      tls:
      - match:
        - gateways:
          - mesh
          port: 443
          sniHosts:
          - edition.cnn.com
        route:
        - destination:
            host: istio-egressgateway.istio-system.svc.cluster.local
            subset: cnn
            port:
              number: 443
      - match:
        - gateways:
          - istio-egressgateway
          port: 443
          sniHosts:
          - edition.cnn.com
        route:
        - destination:
            host: edition.cnn.com
            port:
              number: 443
          weight: 100
    EOF


  Gateway API:

    $ kubectl apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: cnn-egress-gateway
      annotations:
        networking.istio.io/service-type: ClusterIP
    spec:
      gatewayClassName: istio
      listeners:
      - name: tls
        hostname: edition.cnn.com
        port: 443
        protocol: TLS
        tls:
          mode: Passthrough
        allowedRoutes:
          namespaces:
            from: Same
    ---
    apiVersion: gateway.networking.k8s.io/v1alpha2
    kind: TLSRoute
    metadata:
      name: direct-cnn-to-egress-gateway
    spec:
      parentRefs:
      - kind: ServiceEntry
        group: networking.istio.io
        name: cnn
      rules:
      - backendRefs:
        - name: cnn-egress-gateway-istio
          port: 443
    ---
    apiVersion: gateway.networking.k8s.io/v1alpha2
    kind: TLSRoute
    metadata:
      name: forward-cnn-from-egress-gateway
    spec:
      parentRefs:
      - name: cnn-egress-gateway
      hostnames:
      - edition.cnn.com
      rules:
      - backendRefs:
        - kind: Hostname
          group: networking.istio.io
          name: edition.cnn.com
          port: 443
    EOF

  4. Send an HTTPS request to https://edition.cnn.com/politics. The output should be the same as before.

    $ kubectl exec "$SOURCE_POD" -c sleep -- curl -sSL -o /dev/null -D - https://edition.cnn.com/politics
    ...
    HTTP/2 200
    Content-Type: text/html; charset=utf-8
    ...
  5. Check the log of the egress gateway’s proxy.

  Istio APIs:

  If Istio is deployed in the istio-system namespace, the command to print the log is:

    $ kubectl logs -l istio=egressgateway -n istio-system

You should see a line similar to the following:

    [2019-01-02T11:46:46.981Z] "- - -" 0 - 627 1879689 44 - "-" "-" "-" "-" "151.101.129.67:443" outbound|443||edition.cnn.com 172.30.109.80:41122 172.30.109.80:443 172.30.109.112:59970 edition.cnn.com

  Gateway API:

  Access the log corresponding to the egress gateway using the Istio-generated pod label:

    $ kubectl logs -l gateway.networking.k8s.io/gateway-name=cnn-egress-gateway -c istio-proxy | tail

You should see a line similar to the following:

    [2024-01-11T21:09:42.835Z] "- - -" 0 - - - "-" 839 2504306 231 - "-" "-" "-" "-" "151.101.195.5:443" outbound|443||edition.cnn.com 172.30.239.8:34470 172.30.239.8:443 172.30.239.15:43956 edition.cnn.com -

Cleanup HTTPS gateway

  Istio APIs:

    $ kubectl delete serviceentry cnn
    $ kubectl delete gateway istio-egressgateway
    $ kubectl delete virtualservice direct-cnn-through-egress-gateway
    $ kubectl delete destinationrule egressgateway-for-cnn

  Gateway API:

    $ kubectl delete serviceentry cnn
    $ kubectl delete gtw cnn-egress-gateway
    $ kubectl delete tlsroute direct-cnn-to-egress-gateway
    $ kubectl delete tlsroute forward-cnn-from-egress-gateway

Additional security considerations

Note that defining an egress Gateway in Istio does not in itself provide any special treatment for the nodes on which the egress gateway service runs. It is up to the cluster administrator or the cloud provider to deploy the egress gateways on dedicated nodes and to introduce additional security measures to make these nodes more secure than the rest of the mesh.

Istio cannot securely enforce that all egress traffic actually flows through the egress gateways. Istio only enables such flow through its sidecar proxies. If attackers bypass the sidecar proxy, they can directly access external services without traversing the egress gateway, escaping Istio’s control and monitoring. The cluster administrator or the cloud provider must therefore ensure, through mechanisms external to Istio, that no traffic leaves the mesh bypassing the egress gateway.

For example, the cluster administrator can configure a firewall to deny all traffic not coming from the egress gateway. Kubernetes network policies can also forbid all egress traffic not originating from the egress gateway (see the next section for an example). Additionally, the cluster administrator or the cloud provider can configure the network to ensure application nodes can only access the Internet via a gateway, by preventing the allocation of public IPs to pods other than gateways and by configuring NAT devices to drop packets not originating at the egress gateways.

Apply Kubernetes network policies

This section shows you how to create a Kubernetes network policy to prevent bypassing of the egress gateway. To test the network policy, you create a namespace, test-egress, deploy the sleep sample to it, and then attempt to send requests to a gateway-secured external service.

  1. Follow the steps in the Egress gateway for HTTPS traffic section.

  2. Create the test-egress namespace:

    $ kubectl create namespace test-egress
  3. Deploy the sleep sample to the test-egress namespace.

    $ kubectl apply -n test-egress -f samples/sleep/sleep.yaml
  4. Check that the deployed pod has a single container with no Istio sidecar attached:

    $ kubectl get pod "$(kubectl get pod -n test-egress -l app=sleep -o jsonpath={.items..metadata.name})" -n test-egress
    NAME                     READY   STATUS    RESTARTS   AGE
    sleep-776b7bcdcd-z7mc4   1/1     Running   0          18m
  5. Send an HTTPS request to https://edition.cnn.com/politics from the sleep pod in the test-egress namespace. The request will succeed since you did not define any restrictive policies yet.

    $ kubectl exec "$(kubectl get pod -n test-egress -l app=sleep -o jsonpath={.items..metadata.name})" -n test-egress -c sleep -- curl -s -o /dev/null -w "%{http_code}\n" https://edition.cnn.com/politics
    200
  6. Label the namespaces where the Istio control plane and egress gateway are running. If you deployed Istio in the istio-system namespace, the command is:

  Istio APIs:

    $ kubectl label namespace istio-system istio=system

  Gateway API:

    $ kubectl label namespace istio-system istio=system
    $ kubectl label namespace default gateway=true

  7. Label the kube-system namespace:

    $ kubectl label ns kube-system kube-system=true
  8. Define a NetworkPolicy to limit the egress traffic from the test-egress namespace to traffic destined to the control plane, the gateway, and the kube-system DNS service (port 53).

    Network policies are implemented by the network plugin in your Kubernetes cluster. Depending on your test cluster, the traffic may not be blocked in the following step.

  Istio APIs:

    $ cat <<EOF | kubectl apply -n test-egress -f -
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-egress-to-istio-system-and-kube-dns
    spec:
      podSelector: {}
      policyTypes:
      - Egress
      egress:
      - to:
        - namespaceSelector:
            matchLabels:
              kube-system: "true"
        ports:
        - protocol: UDP
          port: 53
      - to:
        - namespaceSelector:
            matchLabels:
              istio: system
    EOF


  Gateway API:

    $ cat <<EOF | kubectl apply -n test-egress -f -
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-egress-to-istio-system-and-kube-dns
    spec:
      podSelector: {}
      policyTypes:
      - Egress
      egress:
      - to:
        - namespaceSelector:
            matchLabels:
              kube-system: "true"
        ports:
        - protocol: UDP
          port: 53
      - to:
        - namespaceSelector:
            matchLabels:
              istio: system
      - to:
        - namespaceSelector:
            matchLabels:
              gateway: "true"
    EOF

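
The policies above only allow DNS to kube-system over UDP. Some clusters also resolve names over TCP (for large responses or DNS-over-TCP fallback); if name resolution fails under the policy, a hedged variant of the kube-system egress rule permitting both protocols looks like:

```yaml
# Hedged sketch: the same kube-system DNS rule, but allowing TCP port 53 too.
- to:
  - namespaceSelector:
      matchLabels:
        kube-system: "true"
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53
```
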
  9. Resend the previous HTTPS request to https://edition.cnn.com/politics. Now it should fail, since the traffic is blocked by the network policy. Note that the sleep pod cannot bypass the egress gateway: the only way it can access edition.cnn.com is through an Istio sidecar proxy that directs the traffic to the egress gateway. This setting demonstrates that even if a malicious pod manages to bypass its sidecar proxy, it will be unable to access external sites, because it will be blocked by the network policy.

    $ kubectl exec "$(kubectl get pod -n test-egress -l app=sleep -o jsonpath={.items..metadata.name})" -n test-egress -c sleep -- curl -v -sS https://edition.cnn.com/politics
    Hostname was NOT found in DNS cache
      Trying 151.101.65.67...
      Trying 2a04:4e42:200::323...
    Immediate connect fail for 2a04:4e42:200::323: Cannot assign requested address
      Trying 2a04:4e42:400::323...
    Immediate connect fail for 2a04:4e42:400::323: Cannot assign requested address
      Trying 2a04:4e42:600::323...
    Immediate connect fail for 2a04:4e42:600::323: Cannot assign requested address
      Trying 2a04:4e42::323...
    Immediate connect fail for 2a04:4e42::323: Cannot assign requested address
    connect to 151.101.65.67 port 443 failed: Connection timed out
  10. Now inject an Istio sidecar proxy into the sleep pod in the test-egress namespace, by first enabling automatic sidecar injection in that namespace:

    $ kubectl label namespace test-egress istio-injection=enabled
  11. Then redeploy the sleep deployment:

    $ kubectl delete deployment sleep -n test-egress
    $ kubectl apply -f samples/sleep/sleep.yaml -n test-egress
  12. Check that the deployed pod has two containers, including the Istio sidecar proxy (istio-proxy):

    $ kubectl get pod "$(kubectl get pod -n test-egress -l app=sleep -o jsonpath={.items..metadata.name})" -n test-egress -o jsonpath='{.spec.containers[*].name}'
    sleep istio-proxy

Before proceeding, you need to create a destination rule similar to the one used for the sleep pod in the default namespace, to direct the test-egress namespace traffic through the egress gateway:

    $ kubectl apply -n test-egress -f - <<EOF
    apiVersion: networking.istio.io/v1
    kind: DestinationRule
    metadata:
      name: egressgateway-for-cnn
    spec:
      host: istio-egressgateway.istio-system.svc.cluster.local
      subsets:
      - name: cnn
    EOF

  13. Send an HTTPS request to https://edition.cnn.com/politics. Now it should succeed, since traffic to the egress gateway is allowed by the network policy you defined. The gateway then forwards the traffic to edition.cnn.com.

    $ kubectl exec "$(kubectl get pod -n test-egress -l app=sleep -o jsonpath={.items..metadata.name})" -n test-egress -c sleep -- curl -sS -o /dev/null -w "%{http_code}\n" https://edition.cnn.com/politics
    200
  14. Check the log of the egress gateway’s proxy.

  Istio APIs:

  If Istio is deployed in the istio-system namespace, the command to print the log is:

    $ kubectl logs -l istio=egressgateway -n istio-system

You should see a line similar to the following:

    [2020-03-06T18:12:33.101Z] "- - -" 0 - "-" "-" 906 1352475 35 - "-" "-" "-" "-" "151.101.193.67:443" outbound|443||edition.cnn.com 172.30.223.53:39460 172.30.223.53:443 172.30.223.58:38138 edition.cnn.com -

  Gateway API:

  Access the log corresponding to the egress gateway using the Istio-generated pod label:

    $ kubectl logs -l gateway.networking.k8s.io/gateway-name=cnn-egress-gateway -c istio-proxy | tail

You should see a line similar to the following:

    [2024-01-12T19:54:01.821Z] "- - -" 0 - - - "-" 839 2504837 46 - "-" "-" "-" "-" "151.101.67.5:443" outbound|443||edition.cnn.com 172.30.239.60:49850 172.30.239.60:443 172.30.239.21:36512 edition.cnn.com -

Cleanup network policies

  1. Delete the resources created in this section:

  Istio APIs:

    $ kubectl delete -f samples/sleep/sleep.yaml -n test-egress
    $ kubectl delete destinationrule egressgateway-for-cnn -n test-egress
    $ kubectl delete networkpolicy allow-egress-to-istio-system-and-kube-dns -n test-egress
    $ kubectl label namespace kube-system kube-system-
    $ kubectl label namespace istio-system istio-
    $ kubectl delete namespace test-egress

  Gateway API:

    $ kubectl delete -f samples/sleep/sleep.yaml -n test-egress
    $ kubectl delete networkpolicy allow-egress-to-istio-system-and-kube-dns -n test-egress
    $ kubectl label namespace kube-system kube-system-
    $ kubectl label namespace istio-system istio-
    $ kubectl label namespace default gateway-
    $ kubectl delete namespace test-egress
  2. Follow the steps in the Cleanup HTTPS gateway section.

Cleanup

Shut down the sleep service:

    $ kubectl delete -f samples/sleep/sleep.yaml