- Install Primary-Remote on different networks
- Set the default network for cluster1
- Configure cluster1 as a primary
- Install the east-west gateway in cluster1
- Expose the control plane in cluster1
- Set the control plane cluster for cluster2
- Set the default network for cluster2
- Configure cluster2 as a remote
- Attach cluster2 as a remote cluster of cluster1
- Install the east-west gateway in cluster2
- Expose services in cluster1 and cluster2
- Next Steps
- Cleanup
Install Primary-Remote on different networks
Follow this guide to install the Istio control plane on cluster1 (the primary cluster) and configure cluster2 (the remote cluster) to use the control plane in cluster1. Cluster cluster1 is on the network1 network, while cluster2 is on the network2 network. This means there is no direct connectivity between pods across cluster boundaries.
Before proceeding, be sure to complete the steps under Before you begin.
If you are testing the multicluster setup on kind, you can use MetalLB so that LoadBalancer services are assigned an EXTERNAL-IP.
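As a hedged sketch of such a MetalLB setup (the manifest version and the address range are assumptions; both must match your MetalLB release and your kind docker network's subnet):

```shell
# Install MetalLB (pick the release that matches your environment).
kubectl --context="${CTX_CLUSTER1}" apply -f \
  https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml

# Assign a pool of addresses from the kind docker network (adjust the range).
kubectl --context="${CTX_CLUSTER1}" apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.200-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kind-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - kind-pool
EOF
```

Repeat for cluster2 with a non-overlapping address range so the two east-west gateways receive distinct addresses.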
In this configuration, cluster cluster1 will observe the API Servers in both clusters for endpoints. In this way, the control plane will be able to provide service discovery for workloads in both clusters.
Service workloads across cluster boundaries communicate indirectly, via dedicated gateways for east-west traffic. The gateway in each cluster must be reachable from the other cluster.
Services in cluster2 will reach the control plane in cluster1 via the same east-west gateway.
Set the default network for cluster1
If the istio-system namespace is already created, we need to set the cluster’s network there:
$ kubectl --context="${CTX_CLUSTER1}" get namespace istio-system && \
kubectl --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1
Configure cluster1 as a primary
Create the Istio configuration for cluster1:
$ cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
values:
global:
meshID: mesh1
multiCluster:
clusterName: cluster1
network: network1
externalIstiod: true
EOF
Apply the configuration to cluster1:
$ istioctl install --context="${CTX_CLUSTER1}" -f cluster1.yaml
Notice that values.global.externalIstiod is set to true. This enables the control plane installed on cluster1 to also serve as an external control plane for other remote clusters. When this feature is enabled, istiod will attempt to acquire the leadership lock and, consequently, manage appropriately annotated remote clusters that are attached to it (cluster2 in this case).
Install the east-west gateway in cluster1
Install a gateway in cluster1 that is dedicated to east-west traffic. By default, this gateway will be public on the Internet. Production systems may require additional access restrictions (e.g. via firewall rules) to prevent external attacks. Check with your cloud vendor to see what options are available.
$ @samples/multicluster/gen-eastwest-gateway.sh@ \
--network network1 | \
istioctl --context="${CTX_CLUSTER1}" install -y -f -
If the control plane was installed with a revision, add the --revision rev flag to the gen-eastwest-gateway.sh command.
Wait for the east-west gateway to be assigned an external IP address:
$ kubectl --context="${CTX_CLUSTER1}" get svc istio-eastwestgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-eastwestgateway LoadBalancer 10.80.6.124 34.75.71.237 ... 51s
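Rather than re-running `kubectl get` by hand until the address appears, you can poll for it. A hedged convenience helper (the function name is our own, not part of Istio's tooling):

```shell
# Poll until the east-west gateway service has an external IP, then print it.
wait_for_eastwest_ip() {
  local ctx="$1" ip=""
  while [ -z "${ip}" ]; do
    ip=$(kubectl --context="${ctx}" -n istio-system \
      get svc istio-eastwestgateway \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    [ -z "${ip}" ] && sleep 5
  done
  echo "${ip}"
}

# Usage: wait_for_eastwest_ip "${CTX_CLUSTER1}"
```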
Expose the control plane in cluster1
Before we can install on cluster2, we first need to expose the control plane in cluster1 so that services in cluster2 will be able to access service discovery:
$ kubectl apply --context="${CTX_CLUSTER1}" -n istio-system -f \
@samples/multicluster/expose-istiod.yaml@
If the control plane was installed with a revision rev, use the following command instead:
$ sed 's/{{.Revision}}/rev/g' @samples/multicluster/expose-istiod-rev.yaml.tmpl@ | kubectl apply --context="${CTX_CLUSTER1}" -n istio-system -f -
Set the control plane cluster for cluster2
We need to identify the external control plane cluster that should manage cluster2 by annotating the istio-system namespace:
$ kubectl --context="${CTX_CLUSTER2}" create namespace istio-system
$ kubectl --context="${CTX_CLUSTER2}" annotate namespace istio-system topology.istio.io/controlPlaneClusters=cluster1
Setting the topology.istio.io/controlPlaneClusters namespace annotation to cluster1 instructs the istiod running in the same namespace (istio-system in this case) on cluster1 to manage cluster2 when it is attached as a remote cluster.
Set the default network for cluster2
Set the network for cluster2 by adding a label to the istio-system namespace:
$ kubectl --context="${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2
Configure cluster2 as a remote
Save the address of cluster1's east-west gateway.
$ export DISCOVERY_ADDRESS=$(kubectl \
--context="${CTX_CLUSTER1}" \
-n istio-system get svc istio-eastwestgateway \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
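Note that some load balancers (for example on AWS) publish a hostname rather than an IP, in which case the jsonpath above returns an empty string. A hedged helper that falls back to .hostname (the function name is our own):

```shell
# Return the east-west gateway's external IP, falling back to its hostname
# when the load balancer does not publish an IP.
discovery_address() {
  local ctx="$1" addr
  addr=$(kubectl --context="${ctx}" -n istio-system \
    get svc istio-eastwestgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  if [ -z "${addr}" ]; then
    addr=$(kubectl --context="${ctx}" -n istio-system \
      get svc istio-eastwestgateway \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  fi
  echo "${addr}"
}

# Usage: export DISCOVERY_ADDRESS=$(discovery_address "${CTX_CLUSTER1}")
```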
Now create a remote configuration on cluster2.
$ cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
profile: remote
values:
istiodRemote:
injectionPath: /inject/cluster/cluster2/net/network2
global:
remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF
Here we're configuring the location of the control plane using the injectionPath and remotePilotAddress parameters. Although convenient for demonstration, in a production environment it is recommended to instead configure the injectionURL parameter using properly signed DNS certs, similar to the configuration shown in the external control plane instructions.
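As a hedged sketch of that production-style alternative (istiod.example.com is a placeholder for your control plane's DNS name, and the exact values should be taken from the external control plane instructions):

```shell
$ cat <<EOF > cluster2-prod.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: remote
  values:
    istiodRemote:
      # injectionURL replaces injectionPath when using a DNS name with a
      # properly signed certificate; 15017 is istiod's webhook port.
      injectionURL: https://istiod.example.com:15017/inject/cluster/cluster2/net/network2
    global:
      remotePilotAddress: istiod.example.com
EOF
```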
Apply the configuration to cluster2:
$ istioctl install --context="${CTX_CLUSTER2}" -f cluster2.yaml
Attach cluster2 as a remote cluster of cluster1
To attach the remote cluster to its control plane, we give the control plane in cluster1 access to the API Server in cluster2. This will do the following:
- Enables the control plane to authenticate connection requests from workloads running in cluster2. Without API Server access, the control plane will reject the requests.
- Enables discovery of service endpoints running in cluster2.
Because it has been included in the topology.istio.io/controlPlaneClusters namespace annotation, the control plane on cluster1 will also:
- Patch certs in the webhooks in cluster2.
- Start the namespace controller, which writes configmaps in namespaces in cluster2.
To provide API Server access to cluster2, we generate a remote secret and apply it to cluster1:
$ istioctl create-remote-secret \
--context="${CTX_CLUSTER2}" \
--name=cluster2 | \
kubectl apply -f - --context="${CTX_CLUSTER1}"
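To sanity-check the attachment, a hedged helper (istioctl's remote-clusters command is available in recent releases; check your istioctl version, and note the function name is our own):

```shell
# List the remote clusters the control plane knows about, and confirm the
# generated multicluster secret landed in the primary cluster.
verify_remote_attachment() {
  local ctx="$1"
  istioctl --context="${ctx}" remote-clusters
  # The secret created by `istioctl create-remote-secret` carries this label.
  kubectl --context="${ctx}" -n istio-system \
    get secret -l istio/multiCluster=true
}

# Usage: verify_remote_attachment "${CTX_CLUSTER1}"
```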
Install the east-west gateway in cluster2
As we did with cluster1 above, install a gateway in cluster2 that is dedicated to east-west traffic and expose user services.
$ @samples/multicluster/gen-eastwest-gateway.sh@ \
--network network2 | \
istioctl --context="${CTX_CLUSTER2}" install -y -f -
Wait for the east-west gateway to be assigned an external IP address:
$ kubectl --context="${CTX_CLUSTER2}" get svc istio-eastwestgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-eastwestgateway LoadBalancer 10.0.12.121 34.122.91.98 ... 51s
Expose services in cluster1 and cluster2
Since the clusters are on separate networks, we also need to expose all user services (*.local) on the east-west gateway in both clusters. While these gateways are public on the Internet, services behind them can only be accessed by services with a trusted mTLS certificate and workload ID, just as if they were on the same network.
$ kubectl --context="${CTX_CLUSTER1}" apply -n istio-system -f \
@samples/multicluster/expose-services.yaml@
Since cluster2 is installed with a remote profile, exposing services on the primary cluster will expose them on the east-west gateways of both clusters.
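A hedged sanity check that the exposure took effect (the Gateway name cross-network-gateway matches the current expose-services.yaml sample, but verify it against your Istio release; the function name is our own):

```shell
# Confirm the cross-network Gateway resource created by expose-services.yaml
# exists in the given cluster.
check_cross_network_gateway() {
  local ctx="$1"
  kubectl --context="${ctx}" -n istio-system get gateway cross-network-gateway
}

# Usage: check_cross_network_gateway "${CTX_CLUSTER1}"
```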
Congratulations! You successfully installed an Istio mesh across primary and remote clusters on different networks!
Next Steps
You can now verify the installation.
Cleanup
Uninstall Istio in cluster1:
$ istioctl uninstall --context="${CTX_CLUSTER1}" -y --purge
$ kubectl delete ns istio-system --context="${CTX_CLUSTER1}"
Uninstall Istio in cluster2:
$ istioctl uninstall --context="${CTX_CLUSTER2}" -y --purge
$ kubectl delete ns istio-system --context="${CTX_CLUSTER2}"