Pod-to-Pod Multi-cluster communication

By default, Linkerd’s multicluster extension works by sending all cross-cluster traffic through a gateway on the target cluster. However, when multiple Kubernetes clusters are deployed on a flat network where pods from one cluster can communicate directly with pods on another, Linkerd can export multicluster services in pod-to-pod mode where cross-cluster traffic does not go through the gateway, but instead goes directly to the target pods.

This guide will walk you through exporting multicluster services in pod-to-pod mode, setting up authorization policies, and monitoring the traffic.

Prerequisites

  • Two clusters. We will refer to them as east and west in this guide.
  • The clusters must be on a flat network. In other words, pods from one cluster must be able to address and connect to pods in the other cluster.
  • Each of these clusters should be configured as kubectl contexts. We’d recommend you use the names east and west so that you can follow along with this guide. It is easy to rename contexts with kubectl, so don’t feel like you need to keep it all named this way forever.

Step 1: Installing Linkerd and Linkerd-Viz

First, install Linkerd and Linkerd-Viz into both clusters, as described in the multicluster guide. Take care that both clusters share a common trust anchor.

Step 2: Installing Linkerd-Multicluster

We will install the multicluster extension into both clusters. Because we will be using direct pod-to-pod communication, we can install it without the gateway.

  > linkerd --context east multicluster install --gateway=false | kubectl --context east apply -f -
  > linkerd --context east check
  > linkerd --context west multicluster install --gateway=false | kubectl --context west apply -f -
  > linkerd --context west check

Step 3: Linking the Clusters

We use the linkerd multicluster link command to link our two clusters together. This is exactly the same as in the regular multicluster guide, except that we pass the --gateway=false flag to create a Link which doesn’t require a gateway.

  > linkerd --context east multicluster link --cluster-name=target --gateway=false | kubectl --context west apply -f -

Step 4: Deploying and Exporting a Service

For our guide, we’ll deploy the bb service, which is a simple server that just returns a static response. We deploy it into the target cluster:

  > cat <<EOF | linkerd --context east inject - | kubectl --context east apply -f -
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: mc-demo
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: bb
    namespace: mc-demo
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: bb
    template:
      metadata:
        labels:
          app: bb
      spec:
        containers:
          - name: terminus
            image: buoyantio/bb:v0.0.6
            args:
              - terminus
              - "--h1-server-port=8080"
              - "--response-text=hello\n"
            ports:
              - containerPort: 8080
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: bb
    namespace: mc-demo
  spec:
    ports:
      - name: http
        port: 8080
        targetPort: 8080
    selector:
      app: bb
  EOF

We then create the corresponding namespace on the source cluster:

  > kubectl --context west create ns mc-demo

Next, we set a label on the target service to export it. Notice that instead of the usual mirror.linkerd.io/exported=true label, we set mirror.linkerd.io/exported=remote-discovery. This exports the service in remote discovery mode, which skips the gateway and lets pods in different clusters talk to each other directly.

  > kubectl --context east -n mc-demo label svc/bb mirror.linkerd.io/exported=remote-discovery

You should immediately see a mirror service created in the source cluster:

  > kubectl --context west -n mc-demo get svc
  NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
  bb-target   ClusterIP   10.43.56.245   <none>        8080/TCP   114s
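
The mirror service’s name is the exported service’s name with the link’s cluster name appended. A minimal sketch of that naming rule (assuming the --cluster-name=target link from Step 3):

```shell
# Sketch of how Linkerd names mirror services in the source cluster:
# the exported service name, a "-", then the cluster name passed to
# `linkerd multicluster link`.
svc=bb
cluster_name=target   # from --cluster-name=target in Step 3
echo "${svc}-${cluster_name}"
# → bb-target
```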

Step 5: Send some traffic!

We’ll use slow-cooker as a load generator in the source cluster to send traffic to the bb service in the target cluster. Notice that we configure slow-cooker to send to the bb-target mirror service.

  > cat <<EOF | linkerd --context west inject - | kubectl --context west apply -f -
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: slow-cooker
    namespace: mc-demo
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: slow-cooker
    namespace: mc-demo
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: slow-cooker
    template:
      metadata:
        labels:
          app: slow-cooker
      spec:
        serviceAccountName: slow-cooker
        containers:
          - args:
              - -c
              - |
                sleep 5 # wait for pods to start
                /slow_cooker/slow_cooker --qps 10 http://bb-target:8080
            command:
              - /bin/sh
            image: buoyantio/slow_cooker:1.3.0
            name: slow-cooker
  EOF

We should now be able to see that bb is receiving about 10 requests per second successfully in the target cluster:

  > linkerd --context east viz stat -n mc-demo deploy
  NAME   MESHED   SUCCESS       RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99   TCP_CONN
  bb     1/1      100.00%   10.3rps           1ms           1ms           1ms          3

Step 6: Authorization Policy

One advantage of direct pod-to-pod communication is that the server can use authorization policies to allow only certain clients to connect. This is not possible when using the gateway, because client identity is lost when traffic passes through it. For more background on how authorization policies work, see: Restricting Access To Services.

Let’s demonstrate that by creating an authorization policy which only allows the slow-cooker service account to connect to bb:

  > kubectl --context east apply -f - <<EOF
  ---
  apiVersion: policy.linkerd.io/v1beta1
  kind: Server
  metadata:
    namespace: mc-demo
    name: bb
  spec:
    podSelector:
      matchLabels:
        app: bb
    port: 8080
  ---
  apiVersion: policy.linkerd.io/v1alpha1
  kind: AuthorizationPolicy
  metadata:
    namespace: mc-demo
    name: bb-authz
  spec:
    targetRef:
      group: policy.linkerd.io
      kind: Server
      name: bb
    requiredAuthenticationRefs:
      - group: policy.linkerd.io
        kind: MeshTLSAuthentication
        name: bb-good
  ---
  apiVersion: policy.linkerd.io/v1alpha1
  kind: MeshTLSAuthentication
  metadata:
    namespace: mc-demo
    name: bb-good
  spec:
    identities:
      - 'slow-cooker.mc-demo.serviceaccount.identity.linkerd.cluster.local'
  EOF
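
The identity string in the MeshTLSAuthentication above follows Linkerd’s service-account identity format. A small sketch of how it is composed (assuming the default linkerd control-plane namespace and cluster.local trust domain):

```shell
# Compose a Linkerd mesh identity for a service account. The format is
# <serviceaccount>.<namespace>.serviceaccount.identity.<control-plane-ns>.<trust-domain>;
# "linkerd" and "cluster.local" below assume the default control-plane
# namespace and trust domain.
sa=slow-cooker
ns=mc-demo
control_plane_ns=linkerd
trust_domain=cluster.local
echo "${sa}.${ns}.serviceaccount.identity.${control_plane_ns}.${trust_domain}"
# → slow-cooker.mc-demo.serviceaccount.identity.linkerd.cluster.local
```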

With that policy in place, we can see that bb is admitting all of the traffic from slow-cooker:

  > linkerd --context east viz authz -n mc-demo deploy
  ROUTE    SERVER                       AUTHORIZATION                  UNAUTHORIZED   SUCCESS       RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99
  default  bb                           authorizationpolicy/bb-authz         0.0rps   100.00%   10.0rps           1ms           1ms           1ms
  default  default:all-unauthenticated  default/all-unauthenticated          0.0rps   100.00%    0.1rps           1ms           1ms           1ms
  probe    default:all-unauthenticated  default/probe                        0.0rps   100.00%    0.2rps           1ms           1ms           1ms

To demonstrate that slow-cooker is the only service account allowed to send to bb, we’ll create a second load generator called slow-cooker-evil, which uses a different service account and should therefore be denied.

  > cat <<EOF | linkerd --context west inject - | kubectl --context west apply -f -
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: slow-cooker-evil
    namespace: mc-demo
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: slow-cooker-evil
    namespace: mc-demo
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: slow-cooker-evil
    template:
      metadata:
        labels:
          app: slow-cooker-evil
      spec:
        serviceAccountName: slow-cooker-evil
        containers:
          - args:
              - -c
              - |
                sleep 5 # wait for pods to start
                /slow_cooker/slow_cooker --qps 10 http://bb-target:8080
            command:
              - /bin/sh
            image: buoyantio/slow_cooker:1.3.0
            name: slow-cooker
  EOF

Once the evil version of slow-cooker has been running for a bit, we can see that bb is accepting 10rps (from slow-cooker) and rejecting 10rps (from slow-cooker-evil):

  > linkerd --context east viz authz -n mc-demo deploy
  ROUTE    SERVER                       AUTHORIZATION                  UNAUTHORIZED   SUCCESS       RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99
  default  bb                                                               10.0rps     0.00%    0.0rps           0ms           0ms           0ms
  default  bb                           authorizationpolicy/bb-authz         0.0rps   100.00%   10.0rps           1ms           1ms           1ms
  default  default:all-unauthenticated  default/all-unauthenticated          0.0rps   100.00%    0.1rps           1ms           1ms           1ms
  probe    default:all-unauthenticated  default/probe                        0.0rps   100.00%    0.2rps           1ms           1ms           1ms