- Google Kubernetes Engine
- Before you begin
- Create the GKE Clusters
- Create a Google Cloud firewall rule
- Install the Istio control plane
- Generate remote cluster manifest
- Install remote cluster manifest
- Create remote cluster’s kubeconfig for Istio Pilot
- Configure Istio control plane to discover the remote cluster
- Deploy Bookinfo Example Across Clusters
- Uninstalling
- See also
Google Kubernetes Engine
This example shows how to configure a multicluster mesh with a single-network deployment over 2 Google Kubernetes Engine clusters.
Before you begin
In addition to the prerequisites for installing Istio, the following setup is required for this example:
This sample requires a valid Google Cloud Platform project with billing enabled. If you are not an existing GCP user, you may be able to enroll for a $300 US Free Trial credit.
- Create a Google Cloud Project to host your GKE clusters.
- Install and initialize the Google Cloud SDK.
Create the GKE Clusters
- Set the default project for gcloud to perform actions on:
$ gcloud config set project myProject
$ proj=$(gcloud config list --format='value(core.project)')
- Create 2 GKE clusters for use with the multicluster feature. Note: --enable-ip-alias is required to allow inter-cluster direct pod-to-pod communication. The zone value must be one of the GCP zones.
$ zone="us-east1-b"
$ cluster="cluster-1"
$ gcloud container clusters create $cluster --zone $zone --username "admin" \
--machine-type "n1-standard-2" --image-type "COS" --disk-size "100" \
--scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only",\
"https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring",\
"https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly",\
"https://www.googleapis.com/auth/trace.append" \
--num-nodes "4" --network "default" --enable-cloud-logging --enable-cloud-monitoring --enable-ip-alias --async
$ cluster="cluster-2"
$ gcloud container clusters create $cluster --zone $zone --username "admin" \
--machine-type "n1-standard-2" --image-type "COS" --disk-size "100" \
--scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only",\
"https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring",\
"https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly",\
"https://www.googleapis.com/auth/trace.append" \
--num-nodes "4" --network "default" --enable-cloud-logging --enable-cloud-monitoring --enable-ip-alias --async
- Wait for clusters to transition to the RUNNING state by polling their statuses via the following command:
$ gcloud container clusters list
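If you prefer not to poll by hand, a small shell loop like the following sketch waits until every cluster reports RUNNING (note that it loops indefinitely if a cluster ends up in an error state):
$ while gcloud container clusters list --format='value(status)' | grep -qv RUNNING; do \
    echo "waiting for clusters to reach RUNNING..."; sleep 30; \
  done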
- Get the clusters’ credentials (command details):
$ gcloud container clusters get-credentials cluster-1 --zone $zone
$ gcloud container clusters get-credentials cluster-2 --zone $zone
- Validate kubectl access to each cluster and create a cluster-admin cluster role binding tied to the Kubernetes credentials associated with your GCP user.
- For cluster-1:
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl get pods --all-namespaces
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user="$(gcloud config get-value core/account)"
- For cluster-2:
$ kubectl config use-context "gke_${proj}_${zone}_cluster-2"
$ kubectl get pods --all-namespaces
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user="$(gcloud config get-value core/account)"
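As an optional sanity check, not part of the original steps, kubectl auth can-i should now answer yes for arbitrary actions in the current context, confirming the cluster-admin binding took effect:
$ kubectl auth can-i '*' '*' --all-namespaces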
Create a Google Cloud firewall rule
To allow the pods on each cluster to directly communicate, create the following rule:
$ function join_by { local IFS="$1"; shift; echo "$*"; }
$ ALL_CLUSTER_CIDRS=$(gcloud container clusters list --format='value(clusterIpv4Cidr)' | sort | uniq)
$ ALL_CLUSTER_CIDRS=$(join_by , $(echo "${ALL_CLUSTER_CIDRS}"))
$ ALL_CLUSTER_NETTAGS=$(gcloud compute instances list --format='value(tags.items.[0])' | sort | uniq)
$ ALL_CLUSTER_NETTAGS=$(join_by , $(echo "${ALL_CLUSTER_NETTAGS}"))
$ gcloud compute firewall-rules create istio-multicluster-test-pods \
--allow=tcp,udp,icmp,esp,ah,sctp \
--direction=INGRESS \
--priority=900 \
--source-ranges="${ALL_CLUSTER_CIDRS}" \
--target-tags="${ALL_CLUSTER_NETTAGS}" --quiet
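You can optionally confirm that the rule picked up the expected pod CIDR ranges and instance network tags:
$ gcloud compute firewall-rules describe istio-multicluster-test-pods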
Install the Istio control plane
The following generates an Istio installation manifest, installs it, and enables automatic sidecar injection in the default namespace:
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system > $HOME/istio_master.yaml
$ kubectl create ns istio-system
$ helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
$ kubectl apply -f $HOME/istio_master.yaml
$ kubectl label namespace default istio-injection=enabled
Wait for pods to come up by polling their statuses via the following command:
$ kubectl get pods -n istio-system
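If you prefer a blocking command over polling, the following sketch waits for the control plane Deployments to report available (it assumes every long-running component is a Deployment; the completed istio-init job pods are not covered by this check):
$ kubectl wait --for=condition=available deployment --all -n istio-system --timeout=600s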
Generate remote cluster manifest
- Get the IPs of the control plane pods:
$ export PILOT_POD_IP=$(kubectl -n istio-system get pod -l istio=pilot -o jsonpath='{.items[0].status.podIP}')
$ export POLICY_POD_IP=$(kubectl -n istio-system get pod -l istio=mixer -o jsonpath='{.items[0].status.podIP}')
$ export TELEMETRY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=telemetry -o jsonpath='{.items[0].status.podIP}')
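Before generating the manifest, it is worth confirming that all three variables are non-empty; an empty value would produce a remote manifest pointing at nothing:
$ echo "Pilot: ${PILOT_POD_IP}  Policy: ${POLICY_POD_IP}  Telemetry: ${TELEMETRY_POD_IP}"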
- Generate remote cluster manifest:
$ helm template install/kubernetes/helm/istio \
--namespace istio-system --name istio-remote \
--values @install/kubernetes/helm/istio/values-istio-remote.yaml@ \
--set global.remotePilotAddress=${PILOT_POD_IP} \
--set global.remotePolicyAddress=${POLICY_POD_IP} \
--set global.remoteTelemetryAddress=${TELEMETRY_POD_IP} > $HOME/istio-remote.yaml
Install remote cluster manifest
The following installs the minimal Istio components and enables automatic sidecar injection on the default namespace in the remote cluster:
$ kubectl config use-context "gke_${proj}_${zone}_cluster-2"
$ kubectl create ns istio-system
$ kubectl apply -f $HOME/istio-remote.yaml
$ kubectl label namespace default istio-injection=enabled
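Before moving on, you can check that the remote components are running and that the istio-multi service account used in the next step was created:
$ kubectl get pods -n istio-system
$ kubectl get sa istio-multi -n istio-system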
Create remote cluster’s kubeconfig for Istio Pilot
The istio-remote Helm chart creates a service account with minimal access for use by Istio Pilot discovery.
- Prepare environment variables for building the kubeconfig file for the istio-multi service account:
$ export WORK_DIR=$(pwd)
$ CLUSTER_NAME=$(kubectl config view --minify=true -o jsonpath='{.clusters[].name}')
$ CLUSTER_NAME="${CLUSTER_NAME##*_}"
$ export KUBECFG_FILE=${WORK_DIR}/${CLUSTER_NAME}
$ SERVER=$(kubectl config view --minify=true -o jsonpath='{.clusters[].cluster.server}')
$ NAMESPACE=istio-system
$ SERVICE_ACCOUNT=istio-multi
$ SECRET_NAME=$(kubectl get sa ${SERVICE_ACCOUNT} -n ${NAMESPACE} -o jsonpath='{.secrets[].name}')
$ CA_DATA=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o jsonpath="{.data['ca\.crt']}")
$ TOKEN=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o jsonpath="{.data['token']}" | base64 --decode)
An alternative to base64 --decode is openssl enc -d -base64 -A on many systems.
- Create a kubeconfig file in the working directory for the istio-multi service account:
$ cat <<EOF > ${KUBECFG_FILE}
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: ${CA_DATA}
server: ${SERVER}
name: ${CLUSTER_NAME}
contexts:
- context:
cluster: ${CLUSTER_NAME}
user: ${CLUSTER_NAME}
name: ${CLUSTER_NAME}
current-context: ${CLUSTER_NAME}
kind: Config
preferences: {}
users:
- name: ${CLUSTER_NAME}
user:
token: ${TOKEN}
EOF
At this point, the remote clusters' kubeconfig files have been created in the ${WORK_DIR} directory. The filename for a cluster is the same as the original kubeconfig cluster name.
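As an optional check, the generated file can be used directly with kubectl. A read-only query such as the following, which assumes the istio-multi service account's role permits listing services (something Pilot itself relies on), should succeed against the remote cluster:
$ kubectl get services --all-namespaces --kubeconfig=${KUBECFG_FILE}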
Configure Istio control plane to discover the remote cluster
Create a secret and label it properly for each remote cluster:
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl create secret generic ${CLUSTER_NAME} --from-file ${KUBECFG_FILE} -n ${NAMESPACE}
$ kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true -n ${NAMESPACE}
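To verify that the secret is in place for Pilot to discover, list the secrets carrying the multicluster label:
$ kubectl get secrets -n istio-system -l istio/multiCluster=true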
Deploy Bookinfo Example Across Clusters
- Install Bookinfo on the first cluster. Remove the reviews-v3 deployment so it can be deployed on the remote cluster instead:
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@
$ kubectl apply -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
$ kubectl delete deployment reviews-v3
- Install the reviews-v3 deployment on the remote cluster:
$ kubectl config use-context "gke_${proj}_${zone}_cluster-2"
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ -l service=ratings
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ -l service=reviews
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ -l account=reviews
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ -l app=reviews,version=v3
Note: The ratings service definition is added to the remote cluster because reviews-v3 is a client of ratings, and creating the service object creates a DNS entry. The Istio sidecar in the reviews-v3 pod will determine the proper ratings endpoint after the DNS lookup resolves to a service address. This would not be necessary if a multicluster DNS solution were additionally set up, e.g. as in a federated Kubernetes environment.
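You can also confirm that the remote reviews-v3 pod is running and that the sidecar was injected (the READY column should show 2/2):
$ kubectl get pods -l app=reviews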
- Get the istio-ingressgateway service's external IP to access the bookinfo page and validate that Istio is including the remote cluster's reviews-v3 instance in the load balancing of reviews versions:
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl get svc istio-ingressgateway -n istio-system
Access http://<GATEWAY_IP>/productpage repeatedly; each version of reviews should be equally load balanced, including reviews-v3 in the remote cluster (red stars). It may take several accesses (dozens) to demonstrate the equal load balancing between reviews versions.
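Rather than refreshing the page by hand, a loop like this sketch tallies how often the red-starred reviews-v3 is served; it assumes that only reviews-v3 responses render color="red" in the productpage HTML:
$ export GATEWAY_IP=$(kubectl get svc istio-ingressgateway -n istio-system \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ for i in $(seq 1 60); do \
    curl -s "http://${GATEWAY_IP}/productpage" | grep -q 'color="red"' && echo reviews-v3 || echo reviews-v1-or-v2; \
  done | sort | uniq -c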
Uninstalling
The following should be done in addition to the uninstall of Istio as described in the VPN-based multicluster uninstall section:
- Delete the Google Cloud firewall rule:
$ gcloud compute firewall-rules delete istio-multicluster-test-pods --quiet
- Delete the cluster-admin cluster role binding from each cluster no longer being used for Istio:
$ kubectl delete clusterrolebinding cluster-admin-binding
- Delete any GKE clusters no longer in use. The following is an example delete command for the remote cluster, cluster-2:
$ gcloud container clusters delete cluster-2 --zone $zone
See also
Example multicluster mesh over two IBM Cloud Private clusters.
Shared control plane (multi-network)
Install an Istio mesh across multiple Kubernetes clusters using a shared control plane for disconnected cluster networks.
Shared control plane (single-network)
Install an Istio mesh across multiple Kubernetes clusters with a shared control plane and VPN connectivity between clusters.
Install an Istio mesh across multiple Kubernetes clusters with replicated control plane instances.
Multi-mesh deployments for isolation and boundary protection
Deploy environments that require isolation into separate meshes and enable inter-mesh communication by mesh federation.
Version Routing in a Multicluster Service Mesh
Configuring Istio route rules in a multicluster service mesh.