Virtual Machine Installation
Follow this guide to deploy Istio and connect a virtual machine to it.
Prerequisites
- Download the Istio release
- Perform any necessary platform-specific setup
- Check the requirements for Pods and Services
- Virtual machines must have IP connectivity to the ingress gateway in the connecting mesh, and optionally to every pod in the mesh via L3 networking if enhanced performance is desired.
- Learn about Virtual Machine Architecture to understand the high-level architecture of Istio's virtual machine integration.
Prepare the guide environment
- Create a virtual machine
Set the environment variables VM_APP, WORK_DIR, VM_NAMESPACE, and SERVICE_ACCOUNT on the machine you're using to set up the cluster (e.g., WORK_DIR="${HOME}/vmintegration"):

Single-network:
$ VM_APP="<the name of the application this VM will run>"
$ VM_NAMESPACE="<the name of your service namespace>"
$ WORK_DIR="<a certificate working directory>"
$ SERVICE_ACCOUNT="<name of the Kubernetes service account you want to use for your VM>"
$ CLUSTER_NETWORK=""
$ VM_NETWORK=""
$ CLUSTER="Kubernetes"

Multi-network:
$ VM_APP="<the name of the application this VM will run>"
$ VM_NAMESPACE="<the name of your service namespace>"
$ WORK_DIR="<a certificate working directory>"
$ SERVICE_ACCOUNT="<name of the Kubernetes service account you want to use for your VM>"
$ # Customize values for multi-cluster/multi-network as needed
$ CLUSTER_NETWORK="kube-network"
$ VM_NETWORK="vm-network"
$ CLUSTER="cluster1"
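For concreteness, here is what the variables might look like for a hypothetical single-network setup; every value below is an illustrative assumption, not from the guide:

```shell
# Hypothetical example values for a VM that will run a MySQL workload
# on a single-network cluster (all names are assumptions):
VM_APP="mysql"
VM_NAMESPACE="vm-workloads"
WORK_DIR="${HOME}/vmintegration"
SERVICE_ACCOUNT="mysql-sa"
CLUSTER_NETWORK=""
VM_NETWORK=""
CLUSTER="Kubernetes"
echo "Generated files will be written to: ${WORK_DIR}"
```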
Create the working directory on the machine you're using to set up the cluster:
$ mkdir -p "${WORK_DIR}"
Install the Istio control plane
If your cluster already has an Istio control plane, you can skip the installation steps, but will still need to expose the control plane for virtual machine access.
Install Istio and expose the control plane on the cluster so that your virtual machine can access it.
Create the IstioOperator spec for installation:
$ cat <<EOF > ./vm-cluster.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: "${CLUSTER}"
      network: "${CLUSTER_NETWORK}"
EOF
Install Istio:
$ istioctl install -f vm-cluster.yaml

Alternatively, to enable automated WorkloadEntry creation and health checks (this feature is actively in development and is considered experimental):
$ istioctl install -f vm-cluster.yaml --set values.pilot.env.PILOT_ENABLE_WORKLOAD_ENTRY_AUTOREGISTRATION=true --set values.pilot.env.PILOT_ENABLE_WORKLOAD_ENTRY_HEALTHCHECKS=true
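If you prefer keeping configuration in the spec file, the same experimental flags can be expressed as IstioOperator values instead of --set flags; a sketch, to be merged into the vm-cluster.yaml created above:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    pilot:
      env:
        # Experimental: automated WorkloadEntry creation and health checks
        PILOT_ENABLE_WORKLOAD_ENTRY_AUTOREGISTRATION: "true"
        PILOT_ENABLE_WORKLOAD_ENTRY_HEALTHCHECKS: "true"
```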
Deploy the east-west gateway:
If the control plane was installed with a revision, add the --revision rev flag to the gen-eastwest-gateway.sh command.

Single-network:
$ @samples/multicluster/gen-eastwest-gateway.sh@ --single-cluster | istioctl install -y -f -

Multi-network:
$ @samples/multicluster/gen-eastwest-gateway.sh@ \
    --mesh mesh1 --cluster "${CLUSTER}" --network "${CLUSTER_NETWORK}" | \
    istioctl install -y -f -
Expose services inside the cluster via the east-west gateway:
Expose the control plane:
$ kubectl apply -n istio-system -f @samples/multicluster/expose-istiod.yaml@

For a multi-network deployment, also expose cluster services:
$ kubectl apply -n istio-system -f @samples/multicluster/expose-services.yaml@
Configure the VM namespace
Create the namespace that will host the virtual machine:
$ kubectl create namespace "${VM_NAMESPACE}"
Create a service account for the virtual machine:
$ kubectl create serviceaccount "${SERVICE_ACCOUNT}" -n "${VM_NAMESPACE}"
Create files to transfer to the virtual machine
First, create a template WorkloadGroup for the VM(s):
$ cat <<EOF > workloadgroup.yaml
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadGroup
metadata:
  name: "${VM_APP}"
  namespace: "${VM_NAMESPACE}"
spec:
  metadata:
    labels:
      app: "${VM_APP}"
  template:
    serviceAccount: "${SERVICE_ACCOUNT}"
    network: "${VM_NETWORK}"
EOF
Then, to allow automated WorkloadEntry creation (this feature is actively in development and is considered experimental), push the WorkloadGroup to the cluster:
$ kubectl --namespace "${VM_NAMESPACE}" apply -f workloadgroup.yaml
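Once a VM connects and registers through this flow, istiod creates a WorkloadEntry on its behalf. The result looks roughly like the following sketch; the name suffix, address, and concrete values are illustrative assumptions:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadEntry
metadata:
  name: "${VM_APP}-10.0.0.5"   # hypothetical auto-generated name
  namespace: "${VM_NAMESPACE}"
spec:
  address: 10.0.0.5            # the VM's IP address (illustrative)
  labels:
    app: "${VM_APP}"
  network: "${VM_NETWORK}"
  serviceAccount: "${SERVICE_ACCOUNT}"
```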
Using the Automated WorkloadEntry
Creation feature, application health checks are also available. These share the same API and behavior as Kubernetes Readiness Probes.
For example, to configure a probe on the /ready
endpoint of your application:
$ cat <<EOF > workloadgroup.yaml
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadGroup
metadata:
  name: "${VM_APP}"
  namespace: "${VM_NAMESPACE}"
spec:
  metadata:
    labels:
      app: "${VM_APP}"
  template:
    serviceAccount: "${SERVICE_ACCOUNT}"
    network: "${VM_NETWORK}"
  probe:
    periodSeconds: 5
    initialDelaySeconds: 1
    httpGet:
      port: 8080
      path: /ready
EOF
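Because these probes share the API and behavior of Kubernetes readiness probes, the probe block above corresponds to a pod-side configuration like the following; shown for comparison only:

```yaml
readinessProbe:
  periodSeconds: 5
  initialDelaySeconds: 1
  httpGet:
    port: 8080
    path: /ready
```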
With this configuration, the automatically generated WorkloadEntry
will not be marked “Ready” until the probe succeeds.
Before proceeding to generate the istio-token as part of istioctl x workload entry, you should verify that third party tokens are enabled in your cluster by following the steps described here. If third party tokens are not enabled, you should add the option --set values.global.jwtPolicy=first-party-jwt to the Istio install commands.
Next, use the istioctl x workload entry command to generate:

- cluster.env: Contains metadata that identifies what namespace, service account, network CIDR and (optionally) what inbound ports to capture.
- istio-token: A Kubernetes token used to get certs from the CA.
- mesh.yaml: Provides ProxyConfig to configure discoveryAddress, health-checking probes, and some authentication options.
- root-cert.pem: The root certificate used to authenticate.
- hosts: An addendum to /etc/hosts that the proxy will use to reach istiod for xDS.*

* A sophisticated option involves configuring DNS within the virtual machine to reference an external DNS server. This option is beyond the scope of this guide.
Default:
$ istioctl x workload entry configure -f workloadgroup.yaml -o "${WORK_DIR}" --clusterID "${CLUSTER}"

Automated WorkloadEntry creation (this feature is actively in development and is considered experimental):
$ istioctl x workload entry configure -f workloadgroup.yaml -o "${WORK_DIR}" --clusterID "${CLUSTER}" --autoregister
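Before transferring anything, it can help to confirm that all five expected files were generated; a small sketch, assuming WORK_DIR is still set from earlier (the fallback path is a hypothetical default):

```shell
# Check that the configure step produced every expected file in WORK_DIR.
WORK_DIR="${WORK_DIR:-$HOME/vmintegration}"   # hypothetical default path
missing=0
for f in cluster.env hosts istio-token mesh.yaml root-cert.pem; do
  if [ -f "${WORK_DIR}/${f}" ]; then
    echo "found: ${f}"
  else
    echo "missing: ${f}"
    missing=$((missing + 1))
  fi
done
echo "${missing} file(s) missing"
```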
Configure the virtual machine
Run the following commands on the virtual machine you want to add to the Istio mesh:
Securely transfer the files from "${WORK_DIR}" to the virtual machine. How you transfer those files should be decided with consideration for your information security policies. For convenience in this guide, transfer all of the required files to "${HOME}" in the virtual machine.

Install the root certificate at /etc/certs:
$ sudo mkdir -p /etc/certs
$ sudo cp "${HOME}"/root-cert.pem /etc/certs/root-cert.pem

Install the token at /var/run/secrets/tokens:
$ sudo mkdir -p /var/run/secrets/tokens
$ sudo cp "${HOME}"/istio-token /var/run/secrets/tokens/istio-token
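The secure-transfer step above can be sketched with scp. VM_USER and VM_IP are placeholders (assumptions) for your VM's SSH user and reachable address; the command is printed rather than executed here, since the VM is not reachable from this sketch:

```shell
# Hypothetical transfer sketch: substitute your own SSH user and VM address.
VM_USER="ubuntu"      # assumption: login user on the VM
VM_IP="192.0.2.10"    # assumption: placeholder address (TEST-NET-1 range)
WORK_DIR="${WORK_DIR:-$HOME/vmintegration}"
# Printed, not run; copy-paste after replacing the placeholders:
echo scp "${WORK_DIR}/cluster.env" "${WORK_DIR}/hosts" "${WORK_DIR}/istio-token" \
  "${WORK_DIR}/mesh.yaml" "${WORK_DIR}/root-cert.pem" "${VM_USER}@${VM_IP}:~"
```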
Install the package containing the Istio virtual machine integration runtime:
Debian:
$ curl -LO https://storage.googleapis.com/istio-release/releases/1.14.1/deb/istio-sidecar.deb
$ sudo dpkg -i istio-sidecar.deb

CentOS (note: only CentOS 8 is currently supported):
$ curl -LO https://storage.googleapis.com/istio-release/releases/1.14.1/rpm/istio-sidecar.rpm
$ sudo rpm -i istio-sidecar.rpm
Install cluster.env within the directory /var/lib/istio/envoy/:
$ sudo cp "${HOME}"/cluster.env /var/lib/istio/envoy/cluster.env

Install the mesh config to /etc/istio/config/mesh:
$ sudo cp "${HOME}"/mesh.yaml /etc/istio/config/mesh

Add the istiod host to /etc/hosts:
$ sudo sh -c 'cat $(eval echo ~$SUDO_USER)/hosts >> /etc/hosts'

Transfer ownership of the files in /etc/certs/ and /var/lib/istio/envoy/ to the Istio proxy:
$ sudo mkdir -p /etc/istio/proxy
$ sudo chown -R istio-proxy /var/lib/istio /etc/certs /etc/istio/proxy /etc/istio/config /var/run/secrets /etc/certs/root-cert.pem
Start Istio within the virtual machine
Start the Istio agent:
$ sudo systemctl start istio
Verify Istio Works Successfully
Check the log in /var/log/istio/istio.log. You should see entries similar to the following:
2020-08-21T01:32:17.748413Z info sds resource:default pushed key/cert pair to proxy
2020-08-21T01:32:20.270073Z info sds resource:ROOTCA new connection
2020-08-21T01:32:20.270142Z info sds Skipping waiting for gateway secret
2020-08-21T01:32:20.270279Z info cache adding watcher for file ./etc/certs/root-cert.pem
2020-08-21T01:32:20.270347Z info cache GenerateSecret from file ROOTCA
2020-08-21T01:32:20.270494Z info sds resource:ROOTCA pushed root cert to proxy
2020-08-21T01:32:20.270734Z info sds resource:default new connection
2020-08-21T01:32:20.270763Z info sds Skipping waiting for gateway secret
2020-08-21T01:32:20.695478Z info cache GenerateSecret default
2020-08-21T01:32:20.695595Z info sds resource:default pushed key/cert pair to proxy
Create a Namespace to deploy a Pod-based Service:
$ kubectl create namespace sample
$ kubectl label namespace sample istio-injection=enabled
Deploy the HelloWorld Service:
$ kubectl apply -n sample -f @samples/helloworld/helloworld.yaml@
Send requests from your Virtual Machine to the Service:
$ curl helloworld.sample.svc:5000/hello
Hello version: v1, instance: helloworld-v1-578dd69f69-fxwwk
Next Steps
For more information about virtual machines:
- Debugging Virtual Machines to troubleshoot issues with virtual machines.
- Bookinfo with a Virtual Machine to set up an example deployment of virtual machines.
Uninstall
Stop Istio on the virtual machine:
$ sudo systemctl stop istio
Then, remove the Istio sidecar package:

Debian:
$ sudo dpkg -r istio-sidecar
$ dpkg -s istio-sidecar

CentOS:
$ sudo rpm -e istio-sidecar
To uninstall Istio, run the following command:
$ kubectl delete -n istio-system -f @samples/multicluster/expose-istiod.yaml@
$ istioctl manifest generate | kubectl delete -f -
The control plane namespace (e.g., istio-system) is not removed by default. If no longer needed, use the following command to remove it:
$ kubectl delete namespace istio-system