Install Multiple Istio Control Planes in a Single Cluster

The following information describes an experimental feature, which is intended for evaluation purposes only.

This guide walks you through installing multiple Istio control planes in a single cluster and scoping workloads to a specific control plane. This deployment model uses a single Kubernetes control plane with multiple Istio control planes and meshes. Separation between the meshes is provided by Kubernetes namespaces and RBAC.

Multiple meshes in a single cluster

Using discoverySelectors, you can scope Kubernetes resources in a cluster to specific namespaces managed by an Istio control plane. This includes the Istio custom resources (e.g., Gateway, VirtualService, DestinationRule, etc.) used to configure the mesh. Furthermore, discoverySelectors can be used to configure which namespaces should include the istio-ca-root-cert config map for a particular Istio control plane. Together, these functions allow mesh operators to specify the namespaces for a given control plane, enabling soft multi-tenancy for multiple meshes based on the boundary of one or more namespaces. This guide uses discoverySelectors, along with the revisions capability of Istio, to demonstrate how two meshes can be deployed on a single cluster, each working with a properly scoped subset of the cluster’s resources.
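Because discoverySelectors accepts standard Kubernetes label selectors, namespaces can be selected with matchExpressions as well as matchLabels. The following fragment is an illustrative sketch (the label values mirror this guide; entries in the list are OR'ed together, per standard label selector semantics):

```yaml
meshConfig:
  discoverySelectors:
    # Select namespaces carrying this usergroup's label, as used in this guide.
    - matchLabels:
        usergroup: usergroup-1
    # An equivalent selection expressed with matchExpressions.
    - matchExpressions:
        - key: usergroup
          operator: In
          values:
            - usergroup-1
```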

Before you begin

This guide requires a Kubernetes cluster running one of the supported Kubernetes versions: 1.28, 1.29, 1.30, or 1.31.

This cluster will host two control planes installed in two different system namespaces. The mesh application workloads will run in multiple application-specific namespaces, each namespace associated with one or the other control plane based on revision and discovery selector configurations.

Cluster configuration

Deploying multiple control planes

Deploying multiple Istio control planes on a single cluster can be achieved by using different system namespaces for each control plane. Istio revisions and discoverySelectors are then used to scope the resources and workloads that are managed by each control plane.

  1. Create the first system namespace, usergroup-1, and deploy istiod in it:

    $ kubectl create ns usergroup-1
    $ kubectl label ns usergroup-1 usergroup=usergroup-1
    $ istioctl install -y -f - <<EOF
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      namespace: usergroup-1
    spec:
      profile: minimal
      revision: usergroup-1
      meshConfig:
        discoverySelectors:
          - matchLabels:
              usergroup: usergroup-1
      values:
        global:
          istioNamespace: usergroup-1
    EOF
  2. Create the second system namespace, usergroup-2, and deploy istiod in it:

    $ kubectl create ns usergroup-2
    $ kubectl label ns usergroup-2 usergroup=usergroup-2
    $ istioctl install -y -f - <<EOF
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      namespace: usergroup-2
    spec:
      profile: minimal
      revision: usergroup-2
      meshConfig:
        discoverySelectors:
          - matchLabels:
              usergroup: usergroup-2
      values:
        global:
          istioNamespace: usergroup-2
    EOF
  3. Deploy a policy for workloads in the usergroup-1 namespace to only accept mutual TLS traffic:

    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1
    kind: PeerAuthentication
    metadata:
      name: "usergroup-1-peerauth"
      namespace: "usergroup-1"
    spec:
      mtls:
        mode: STRICT
    EOF
  4. Deploy a policy for workloads in the usergroup-2 namespace to only accept mutual TLS traffic:

    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1
    kind: PeerAuthentication
    metadata:
      name: "usergroup-2-peerauth"
      namespace: "usergroup-2"
    spec:
      mtls:
        mode: STRICT
    EOF

Verify the multiple control plane creation

  1. Check the labels on the system namespaces for each control plane:

    $ kubectl get ns usergroup-1 usergroup-2 --show-labels
    NAME          STATUS   AGE   LABELS
    usergroup-1   Active   13m   kubernetes.io/metadata.name=usergroup-1,usergroup=usergroup-1
    usergroup-2   Active   12m   kubernetes.io/metadata.name=usergroup-2,usergroup=usergroup-2
  2. Verify the control planes are deployed and running:

    $ kubectl get pods -n usergroup-1
    NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
    usergroup-1   istiod-usergroup-1-5ccc849b5f-wnqd6   1/1     Running   0          12m

    $ kubectl get pods -n usergroup-2
    NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
    usergroup-2   istiod-usergroup-2-658d6458f7-slpd9   1/1     Running   0          12m

    Note that one istiod deployment is created per usergroup, each in its own system namespace.

  3. Run the following commands to list the installed webhooks:

    $ kubectl get validatingwebhookconfiguration
    NAME                                      WEBHOOKS   AGE
    istio-validator-usergroup-1-usergroup-1   1          18m
    istio-validator-usergroup-2-usergroup-2   1          18m
    istiod-default-validator                  1          18m

    $ kubectl get mutatingwebhookconfiguration
    NAME                                             WEBHOOKS   AGE
    istio-revision-tag-default-usergroup-1           4          18m
    istio-sidecar-injector-usergroup-1-usergroup-1   2          19m
    istio-sidecar-injector-usergroup-2-usergroup-2   2          18m

    Note that the output includes istiod-default-validator and istio-revision-tag-default-usergroup-1, the default webhook configurations used to handle requests from resources that are not associated with any revision. In a fully scoped environment, where every control plane is associated with its resources through proper namespace labeling, these default webhook configurations are not needed and should never be invoked.
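    Revision scoping works because each revision's sidecar injector webhook only matches namespaces labeled with that revision. As an abridged, illustrative sketch, the namespaceSelector of a revisioned injector webhook resembles the following (exact fields vary by Istio version, so inspect your cluster's webhook configuration for the authoritative content):

    ```yaml
    namespaceSelector:
      matchExpressions:
        # Only namespaces explicitly labeled for this revision are injected.
        - key: istio.io/rev
          operator: In
          values:
            - usergroup-1
        # Namespaces using the legacy istio-injection label are excluded.
        - key: istio-injection
          operator: DoesNotExist
    ```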

Deploy application workloads per usergroup

  1. Create three application namespaces:

    $ kubectl create ns app-ns-1
    $ kubectl create ns app-ns-2
    $ kubectl create ns app-ns-3
  2. Label each namespace to associate it with its respective control plane:

    $ kubectl label ns app-ns-1 usergroup=usergroup-1 istio.io/rev=usergroup-1
    $ kubectl label ns app-ns-2 usergroup=usergroup-2 istio.io/rev=usergroup-2
    $ kubectl label ns app-ns-3 usergroup=usergroup-2 istio.io/rev=usergroup-2
  3. Deploy one curl and httpbin application per namespace:

    $ kubectl -n app-ns-1 apply -f samples/curl/curl.yaml
    $ kubectl -n app-ns-1 apply -f samples/httpbin/httpbin.yaml
    $ kubectl -n app-ns-2 apply -f samples/curl/curl.yaml
    $ kubectl -n app-ns-2 apply -f samples/httpbin/httpbin.yaml
    $ kubectl -n app-ns-3 apply -f samples/curl/curl.yaml
    $ kubectl -n app-ns-3 apply -f samples/httpbin/httpbin.yaml
  4. Wait a few seconds for the httpbin and curl pods to be running with sidecars injected:

    $ kubectl get pods -n app-ns-1
    NAME                      READY   STATUS    RESTARTS   AGE
    httpbin-9dbd644c7-zc2v4   2/2     Running   0          115m
    curl-78ff5975c6-fml7c     2/2     Running   0          115m

    $ kubectl get pods -n app-ns-2
    NAME                      READY   STATUS    RESTARTS   AGE
    httpbin-9dbd644c7-sd9ln   2/2     Running   0          115m
    curl-78ff5975c6-sz728     2/2     Running   0          115m

    $ kubectl get pods -n app-ns-3
    NAME                      READY   STATUS    RESTARTS   AGE
    httpbin-9dbd644c7-8ll27   2/2     Running   0          115m
    curl-78ff5975c6-sg4tq     2/2     Running   0          115m

Verify the application to control plane mapping

Now that the applications are deployed, you can use the istioctl ps command to confirm that the application workloads are managed by their respective control planes: app-ns-1 is managed by usergroup-1, while app-ns-2 and app-ns-3 are managed by usergroup-2:

  $ istioctl ps -i usergroup-1
  NAME                               CLUSTER      CDS      LDS      EDS      RDS      ECDS       ISTIOD                                VERSION
  httpbin-9dbd644c7-hccpf.app-ns-1   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-usergroup-1-5ccc849b5f-wnqd6   1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117
  curl-78ff5975c6-9zb77.app-ns-1     Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-usergroup-1-5ccc849b5f-wnqd6   1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117

  $ istioctl ps -i usergroup-2
  NAME                               CLUSTER      CDS      LDS      EDS      RDS      ECDS       ISTIOD                                VERSION
  httpbin-9dbd644c7-vvcqj.app-ns-3   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-usergroup-2-658d6458f7-slpd9   1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117
  httpbin-9dbd644c7-xzgfm.app-ns-2   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-usergroup-2-658d6458f7-slpd9   1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117
  curl-78ff5975c6-fthmt.app-ns-2     Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-usergroup-2-658d6458f7-slpd9   1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117
  curl-78ff5975c6-nxtth.app-ns-3     Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-usergroup-2-658d6458f7-slpd9   1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117

Verify the application connectivity is ONLY within the respective usergroup

  1. Send a request from the curl pod in app-ns-1 in usergroup-1 to the httpbin service in app-ns-2 in usergroup-2. The communication should fail:

    $ kubectl -n app-ns-1 exec "$(kubectl -n app-ns-1 get pod -l app=curl -o jsonpath={.items..metadata.name})" -c curl -- curl -sIL http://httpbin.app-ns-2.svc.cluster.local:8000
    HTTP/1.1 503 Service Unavailable
    content-length: 95
    content-type: text/plain
    date: Sat, 24 Dec 2022 06:54:54 GMT
    server: envoy
  2. Send a request from the curl pod in app-ns-2 in usergroup-2 to the httpbin service in app-ns-3 in usergroup-2. The communication should work:

    $ kubectl -n app-ns-2 exec "$(kubectl -n app-ns-2 get pod -l app=curl -o jsonpath={.items..metadata.name})" -c curl -- curl -sIL http://httpbin.app-ns-3.svc.cluster.local:8000
    HTTP/1.1 200 OK
    server: envoy
    date: Thu, 22 Dec 2022 15:01:36 GMT
    content-type: text/html; charset=utf-8
    content-length: 9593
    access-control-allow-origin: *
    access-control-allow-credentials: true
    x-envoy-upstream-service-time: 3

Cleanup

  1. Clean up the first usergroup:

    $ istioctl uninstall --revision usergroup-1 --set values.global.istioNamespace=usergroup-1
    $ kubectl delete ns app-ns-1 usergroup-1
  2. Clean up the second usergroup:

    $ istioctl uninstall --revision usergroup-2 --set values.global.istioNamespace=usergroup-2
    $ kubectl delete ns app-ns-2 app-ns-3 usergroup-2

A Cluster Administrator must make sure that Mesh Administrators DO NOT have permission to invoke the global istioctl uninstall --purge command, because that would uninstall all control planes in the cluster.
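One way to enforce this separation is with standard Kubernetes RBAC: grant each Mesh Administrator namespace-scoped roles in their usergroup's namespaces rather than any cluster-wide role that could reach other control planes. The following is a minimal sketch, not a complete policy; the role, binding, and subject names are hypothetical:

```yaml
# Hypothetical Role granting full control over Istio configuration,
# scoped to the usergroup-1 namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mesh-admin            # hypothetical name
  namespace: usergroup-1
rules:
  - apiGroups: ["networking.istio.io", "security.istio.io"]
    resources: ["*"]
    verbs: ["*"]
---
# Bind the role to the usergroup's mesh administrator. Because this is a
# RoleBinding (not a ClusterRoleBinding), the grant cannot reach other
# usergroups' namespaces or cluster-wide operations such as a purge.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mesh-admin-binding    # hypothetical name
  namespace: usergroup-1
subjects:
  - kind: User
    name: usergroup-1-admin   # hypothetical subject
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: mesh-admin
  apiGroup: rbac.authorization.k8s.io
```

A corresponding Role and RoleBinding would be created in each namespace belonging to the usergroup.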