Managing egress traffic

In this guide, we’ll walk through an example of egress traffic management: visualizing traffic, applying policies, and implementing advanced routing configuration for traffic targeted at destinations outside the cluster.


Warning

No service mesh can provide a strong security guarantee about egress traffic by itself; for example, a malicious actor could bypass the Linkerd sidecar - and thus Linkerd’s egress controls - entirely. Fully restricting egress traffic in the presence of arbitrary applications thus typically requires a more comprehensive approach.

Visualizing egress traffic

To capture egress traffic and apply policies to it, we will make use of the EgressNetwork CRD. This CRD is namespace-scoped: it applies to clients in the local namespace, unless it is created in the globally configured egress namespace. For now, let’s create an egress-test namespace and add a single EgressNetwork to it.

  kubectl create ns egress-test
  kubectl apply -f - <<EOF
  apiVersion: policy.linkerd.io/v1alpha1
  kind: EgressNetwork
  metadata:
    namespace: egress-test
    name: all-egress-traffic
  spec:
    trafficPolicy: Allow
  EOF

This is enough to visualize egress traffic going through the system. To do so, deploy a simple curl container and start hitting a service that lives outside the cluster:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: client
    namespace: egress-test
    annotations:
      linkerd.io/inject: enabled
  spec:
    containers:
      - name: client
        image: curlimages/curl
        command:
          - "sh"
          - "-c"
          - "sleep infinity"
  EOF

Now exec into the client container and start generating some external traffic:

  kubectl -n egress-test exec -it client -c client -- sh
  $ while sleep 1; do curl -s http://httpbin.org/get ; done

In a separate shell, you can use the Linkerd diagnostics command to visualize the traffic.

  linkerd dg proxy-metrics -n egress-test po/client | grep outbound_http_route_request_statuses_total
  outbound_http_route_request_statuses_total{
    parent_group="policy.linkerd.io",
    parent_kind="EgressNetwork",
    parent_namespace="egress-test",
    parent_name="all-egress-traffic",
    parent_port="80",
    parent_section_name="",
    route_group="",
    route_kind="default",
    route_namespace="",
    route_name="http-egress-allow",
    hostname="httpbin.org",
    http_status="200",
    error=""
  } 697

Notice that these raw metrics let you quickly identify egress traffic to different destinations, simply by querying for a parent_kind of EgressNetwork. For now all traffic is allowed and we are simply observing it. We can also see that, because our EgressNetwork's traffic policy is set to Allow, the default HTTP route is named http-egress-allow. This is a placeholder route populated automatically by the Linkerd controller.
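
For example, to pull out only the egress-related samples, you can filter the same diagnostics output on that label (a simple grep over the metrics shown above):

  linkerd dg proxy-metrics -n egress-test po/client \
    | grep 'parent_kind="EgressNetwork"'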

Restricting egress traffic

After you have used metrics to build a picture of your egress traffic, you can start applying policies that allow only some of it through. Let’s update our EgressNetwork and change its trafficPolicy to Deny:

  kubectl patch egressnetwork -n egress-test all-egress-traffic \
    -p '{"spec":{"trafficPolicy": "Deny"}}' --type=merge

Now you should start observing failed requests from your client container. Looking at the metrics, we can observe the following result:

  outbound_http_route_request_statuses_total{
    parent_group="policy.linkerd.io",
    parent_kind="EgressNetwork",
    parent_namespace="egress-test",
    parent_name="all-egress-traffic",
    parent_port="80",
    parent_section_name="",
    route_group="",
    route_kind="default",
    route_namespace="",
    route_name="http-egress-deny",
    hostname="httpbin.org",
    http_status="403",
    error=""
  } 45

We can clearly see that the traffic targets the same parent, but the route name is now http-egress-deny and the http_status is 403 (Forbidden). By changing the traffic policy to Deny, we have forbidden all egress traffic originating from the local namespace. To allow some of it back, we can use the Gateway API types. Assume that you want to allow traffic to httpbin.org, but only for requests that target the /get endpoint. For that purpose, we need to create the following HTTPRoute:

  kubectl apply -f - <<EOF
  apiVersion: gateway.networking.k8s.io/v1alpha2
  kind: HTTPRoute
  metadata:
    name: httpbin-get
    namespace: egress-test
  spec:
    parentRefs:
      - name: all-egress-traffic
        kind: EgressNetwork
        group: policy.linkerd.io
        namespace: egress-test
        port: 80
    rules:
      - matches:
          - path:
              value: "/get"
  EOF

Traffic is now flowing again, and the metrics show that it passes through the httpbin-get route.

  outbound_http_route_request_statuses_total{
    parent_group="policy.linkerd.io",
    parent_kind="EgressNetwork",
    parent_namespace="egress-test",
    parent_name="all-egress-traffic",
    parent_port="80",
    parent_section_name="",
    route_group="gateway.networking.k8s.io",
    route_kind="HTTPRoute",
    route_namespace="egress-test",
    route_name="httpbin-get",
    hostname="httpbin.org",
    http_status="200",
    error=""
  } 63

Interestingly, if we go back to our client shell and try to initiate HTTPS traffic to the same service, it will not be allowed:

  ~ $ curl -v https://httpbin.org/get
  curl: (35) TLS connect error: error:00000000:lib(0)::reason(0)

This is because our current configuration only allows plaintext HTTP traffic through. We can additionally allow HTTPS traffic by using the Gateway API TLSRoute:

  kubectl apply -f - <<EOF
  apiVersion: gateway.networking.k8s.io/v1alpha2
  kind: TLSRoute
  metadata:
    name: tls-egress
    namespace: egress-test
  spec:
    hostnames:
      - httpbin.org
    parentRefs:
      - name: all-egress-traffic
        kind: EgressNetwork
        group: policy.linkerd.io
        namespace: egress-test
        port: 443
    rules:
      - backendRefs:
          - kind: EgressNetwork
            group: policy.linkerd.io
            name: all-egress-traffic
  EOF

This fixes the problem, and we can see the successful HTTPS requests to the external service reflected in the metrics:

  linkerd dg proxy-metrics -n egress-test po/client | grep outbound_tls_route_open_total
  outbound_tls_route_open_total{
    parent_group="policy.linkerd.io",
    parent_kind="EgressNetwork",
    parent_namespace="egress-test",
    parent_name="all-egress-traffic",
    parent_port="443",
    parent_section_name="",
    route_group="gateway.networking.k8s.io",
    route_kind="TLSRoute",
    route_namespace="egress-test",
    route_name="tls-egress",
    hostname="httpbin.org"
  } 2

This configuration allows traffic to httpbin.org only. To apply policy decisions to TLS connections, the proxy parses the SNI extension from the ClientHello of the TLS session and uses it as the target hostname identifier. This means that if we try to initiate a request to github.com from our client, we will see the proxy eagerly close the connection, because that hostname is not allowed by our current policy configuration.
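
From the client shell, the attempt looks roughly like this (illustrative; the exact curl error depends on the curl version):

  # attempt an HTTPS request to a host that no TLSRoute allows
  ~ $ curl -sv https://github.com
  # the proxy resets the connection during the TLS handshake, so curl
  # exits with a TLS/connection error instead of a response

The denied connection then shows up in the proxy's route metrics: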

  linkerd dg proxy-metrics -n egress-test po/client | grep outbound_tls_route_close_total
  outbound_tls_route_close_total{
    parent_group="policy.linkerd.io",
    parent_kind="EgressNetwork",
    parent_namespace="egress-test",
    parent_name="all-egress-traffic",
    parent_port="443",
    parent_section_name="",
    route_group="",
    route_kind="default",
    route_namespace="",
    route_name="tls-egress-deny",
    hostname="github.com",
    error="forbidden"
  } 1

In a similar fashion, we can use the other Gateway API route types, such as GRPCRoute and TCPRoute, to shape traffic captured by an EgressNetwork. All of these traffic types come with a corresponding set of route-based metrics that describe how traffic flows through the system and what policy decisions have been made.
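
As an illustration, a TCPRoute attached to the same EgressNetwork can forward opaque TCP traffic on a given port. The sketch below is not part of the walkthrough; the route name tcp-egress and port 5432 are hypothetical, and it simply mirrors the TLSRoute pattern shown earlier:

  kubectl apply -f - <<EOF
  apiVersion: gateway.networking.k8s.io/v1alpha2
  kind: TCPRoute
  metadata:
    # hypothetical route name, used here only for illustration
    name: tcp-egress
    namespace: egress-test
  spec:
    parentRefs:
      - name: all-egress-traffic
        kind: EgressNetwork
        group: policy.linkerd.io
        namespace: egress-test
        # hypothetical port; pick the TCP port you actually need to allow
        port: 5432
    rules:
      - backendRefs:
          - kind: EgressNetwork
            group: policy.linkerd.io
            name: all-egress-traffic
  EOF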

Redirecting egress traffic back to the cluster

Using the Gateway API route types to model egress traffic allows us to implement some more advanced routing configurations. Assume that we want to apply the following rules:

  • unencrypted HTTP traffic can only target httpbin.org/get and no other endpoints
  • encrypted HTTPS traffic is allowed to all destinations
  • all other unencrypted HTTP traffic needs to be redirected to an internal service

To begin with, let’s create our internal service to which traffic should be redirected:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Service
  metadata:
    name: internal-egress
    namespace: egress-test
  spec:
    type: ClusterIP
    selector:
      app: internal-egress
    ports:
      - port: 80
        protocol: TCP
        name: one
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    namespace: egress-test
    name: internal-egress
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: internal-egress
    template:
      metadata:
        labels:
          app: internal-egress
        annotations:
          linkerd.io/inject: enabled
      spec:
        containers:
          - name: legacy-app
            image: buoyantio/bb:v0.0.5
            command: [ "sh", "-c"]
            args:
              - "/out/bb terminus --h1-server-port 80 --response-text 'You cannot go there right now' --fire-and-forget"
            ports:
              - name: http-port
                containerPort: 80
  EOF

To implement the first rule, we need to create an HTTPRoute that looks like this:

  kubectl apply -f - <<EOF
  apiVersion: gateway.networking.k8s.io/v1alpha2
  kind: HTTPRoute
  metadata:
    name: httpbin-get
    namespace: egress-test
  spec:
    parentRefs:
      - name: all-egress-traffic
        kind: EgressNetwork
        group: policy.linkerd.io
        namespace: egress-test
        port: 80
    rules:
      - matches:
          - path:
              value: "/get"
  EOF

To allow all TLS traffic, we need the following TLSRoute:

  kubectl apply -f - <<EOF
  apiVersion: gateway.networking.k8s.io/v1alpha2
  kind: TLSRoute
  metadata:
    name: tls-egress
    namespace: egress-test
  spec:
    parentRefs:
      - name: all-egress-traffic
        kind: EgressNetwork
        group: policy.linkerd.io
        namespace: egress-test
        port: 443
    rules:
      - backendRefs:
          - kind: EgressNetwork
            group: policy.linkerd.io
            name: all-egress-traffic
  EOF

Finally, to redirect the rest of the plaintext HTTP traffic to the internal service, we create an HTTPRoute whose backend is the internal service:

  kubectl apply -f - <<EOF
  apiVersion: gateway.networking.k8s.io/v1alpha2
  kind: HTTPRoute
  metadata:
    name: unencrypted-http
    namespace: egress-test
  spec:
    parentRefs:
      - name: all-egress-traffic
        kind: EgressNetwork
        group: policy.linkerd.io
        namespace: egress-test
        port: 80
    rules:
      - backendRefs:
          - kind: Service
            name: internal-egress
            port: 80
  EOF

Now let’s verify that everything works as expected:

  # plaintext traffic goes as expected to the /get path
  $ curl http://httpbin.org/get
  {
    "args": {},
    "headers": {
      "Accept": "*/*",
      "Host": "httpbin.org",
      "User-Agent": "curl/8.11.0",
      "X-Amzn-Trace-Id": "Root=1-674599d4-77a473943844e9e31844b48e"
    },
    "origin": "51.116.126.217",
    "url": "http://httpbin.org/get"
  }
  # encrypted traffic can target all paths and hosts
  $ curl https://httpbin.org/ip
  {
    "origin": "51.116.126.217"
  }
  # arbitrary unencrypted traffic goes to the internal service
  $ curl http://google.com
  {
    "requestUID": "in:http-sid:terminus-grpc:-1-h1:80-190120723",
    "payload": "You cannot go there right now"
  }

Cleanup

In order to clean everything up, simply delete the namespace: kubectl delete ns egress-test.
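
Deleting the namespace also removes the EgressNetwork, the routes, and the workloads created in this guide:

  kubectl delete ns egress-test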