Canary Upgrades
Upgrading Istio can be done by first running a canary deployment of the new control plane, allowing you to monitor the effect of the upgrade on a small percentage of the workloads before migrating all of the traffic to the new version. This is much safer than doing an in-place upgrade and is the recommended upgrade method.
When installing Istio, the revision installation setting can be used to deploy multiple independent control planes at the same time. A canary version of an upgrade can be started by installing the new Istio version's control plane next to the old one, using a different revision setting. Each revision is a full Istio control plane implementation with its own Deployment, Service, etc.
Before you upgrade
Before upgrading Istio, it is recommended to run the istioctl x precheck command to make sure the upgrade is compatible with your environment.
$ istioctl x precheck
✔ No issues found when checking the cluster. Istio is safe to install or upgrade!
To get started, check out https://istio.io/latest/docs/setup/getting-started/
Control plane
To install a new revision called canary, you would set the revision field as follows:
In a production environment, a better revision name would correspond to the Istio version. However, you must replace . characters in the revision name, for example, revision=1-6-8 for Istio 1.6.8, because . is not a valid revision name character.
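Since a revision name is typically derived from a version string, a quick way to produce a valid name is to replace the dots with dashes. A minimal shell sketch (the version string here is just an example):

```shell
# Derive a valid revision name from an Istio version string by replacing
# '.' with '-', since '.' is not allowed in revision names.
ISTIO_VERSION="1.6.8"
REVISION=$(echo "$ISTIO_VERSION" | tr '.' '-')
echo "$REVISION"
```

This prints 1-6-8, which can then be passed as --set revision=1-6-8.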
$ istioctl install --set revision=canary
After running the command, you will have two control plane deployments and services running side-by-side:
$ kubectl get pods -n istio-system -l app=istiod
NAME READY STATUS RESTARTS AGE
istiod-786779888b-p9s5n 1/1 Running 0 114m
istiod-canary-6956db645c-vwhsk 1/1 Running 0 1m
$ kubectl get svc -n istio-system -l app=istiod
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istiod ClusterIP 10.32.5.247 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 33d
istiod-canary ClusterIP 10.32.6.58 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP,53/UDP,853/TCP 12m
You will also see that there are two sidecar injector configurations, including one for the new revision.
$ kubectl get mutatingwebhookconfigurations
NAME WEBHOOKS AGE
istio-sidecar-injector 1 7m56s
istio-sidecar-injector-canary 1 3m18s
Due to a bug in the creation of the ValidatingWebhookConfiguration during install, initial installations of Istio must not specify a revision. As a temporary workaround, for Istio resource validation to continue working after removing the non-revisioned Istio installation, the istiod service must be manually pointed to the revision that should handle validation.
One way to accomplish this is to manually create a service called istiod pointing to the target revision, using this service as a template. Another option is to run the command below, where <REVISION> is the name of the revision that should handle validation. This command creates an istiod service pointed to the target revision.
$ kubectl get service -n istio-system -o json istiod-<REVISION> | jq '.metadata.name = "istiod" | del(.spec.clusterIP) | del(.spec.clusterIPs)' | kubectl apply -f -
Data plane
Unlike istiod, Istio gateways do not run revision-specific instances; instead, they are upgraded in place to use the new control plane revision. You can verify that the istio-ingress gateway is using the canary revision by running the following command:
$ istioctl proxy-status | grep $(kubectl -n istio-system get pod -l app=istio-ingressgateway -o jsonpath='{.items..metadata.name}') | awk '{print $7}'
istiod-canary-6956db645c-vwhsk
However, simply installing the new revision has no impact on the existing sidecar proxies. To upgrade these, you must configure them to point to the new istiod-canary control plane. This is controlled during sidecar injection based on the namespace label istio.io/rev.
To upgrade the namespace test-ns, remove the istio-injection label, and add the istio.io/rev label to point to the canary revision. The istio-injection label must be removed because it takes precedence over the istio.io/rev label for backward compatibility.
$ kubectl label namespace test-ns istio-injection- istio.io/rev=canary
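To confirm the relabel took effect, you can inspect the namespace labels; the output should now include istio.io/rev=canary and no istio-injection label (a verification sketch; requires a running cluster):

```shell
# Show the labels on test-ns; after the relabel you should see
# istio.io/rev=canary and no istio-injection label.
kubectl get namespace test-ns --show-labels
```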
After updating the namespace, you need to restart the pods to trigger re-injection. One way to do this is:
$ kubectl rollout restart deployment -n test-ns
When the pods are re-injected, they will be configured to point to the istiod-canary control plane. You can verify this by looking at the pod labels.
For example, the following command will show all the pods using the canary revision:
$ kubectl get pods -n test-ns -l istio.io/rev=canary
To verify that the new pods in the test-ns namespace are using the istiod-canary service corresponding to the canary revision, select one newly created pod and use the pod_name in the following command:
$ istioctl proxy-status | grep ${pod_name} | awk '{print $7}'
istiod-canary-6956db645c-vwhsk
The output confirms that the pod is using the istiod-canary revision of the control plane.
Stable revision labels (experimental)
Manually relabeling namespaces when moving them to a new revision can be tedious and error-prone. Revision tags are a solution to this. Revision tags are stable identifiers that point to revisions and can be used to avoid relabeling namespaces. Rather than relabeling the namespace, a mesh operator can simply change the tag to point to a new revision. All namespaces with that tag will be updated at the same time.
Consider a cluster with two revisions installed, 1-7-6 and 1-8-0. The cluster operator creates a revision tag prod, pointed at the older, stable 1-7-6 version, and a revision tag canary, pointed at the newer 1-8-0 revision. That state could be reached via these commands:
$ istioctl x revision tag set prod --revision 1-7-6
$ istioctl x revision tag set canary --revision 1-8-0
Diagram: namespaces A and B pointed to 1-7-6, namespace C pointed to 1-8-0
After the operator is satisfied with the stability of the canary-tagged control planes, namespaces labeled istio.io/rev=prod can be updated with one action by modifying the prod revision tag to point to the newer 1-8-0 revision.
$ istioctl x revision tag set prod --revision 1-8-0
Now, the situation is as shown in the diagram below:
Diagram: namespaces A, B, and C pointed to 1-8-0
Restarting the injected workloads in namespaces A and B will result in those workloads using the 1-8-0 control plane.
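With tags, the per-namespace restart step can also be scripted across every namespace carrying a given tag. A minimal sketch, with the namespace list stubbed so the loop is self-contained; in a real cluster the list would come from the kubectl query shown in the comment, and the echo would be dropped to actually run the restarts:

```shell
# In a real cluster, obtain the namespaces carrying the tag with:
#   NAMESPACES=$(kubectl get ns -l istio.io/rev=prod -o jsonpath='{.items[*].metadata.name}')
NAMESPACES="ns-a ns-b"   # stubbed example namespaces

for ns in $NAMESPACES; do
  # Restarting deployments triggers re-injection against the newly tagged revision.
  echo "kubectl rollout restart deployment -n $ns"
done
```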
Uninstall old control plane
After upgrading both the control plane and data plane, you can uninstall the old control plane. For example, the following command uninstalls a control plane of revision 1-6-5:
$ istioctl x uninstall --revision 1-6-5
If the old control plane does not have a revision label, uninstall it using its original installation options, for example:
$ istioctl x uninstall -f manifests/profiles/default.yaml
Confirm that the old control plane has been removed and only the new one still exists in the cluster:
$ kubectl get pods -n istio-system -l app=istiod
NAME READY STATUS RESTARTS AGE
istiod-canary-55887f699c-t8bh8 1/1 Running 0 27m
Note that the above instructions only removed the resources for the specified control plane revision, but not cluster-scoped resources shared with other control planes. To uninstall Istio completely, refer to the uninstall guide.
Uninstall canary control plane
If you decide to roll back to the old control plane instead of completing the canary upgrade, you can uninstall the canary revision using istioctl x uninstall --revision=canary.
However, in this case you must first reinstall the gateway(s) for the previous revision manually, because the uninstall command will not automatically revert gateways that were upgraded in place.
Make sure to use the istioctl version corresponding to the old control plane to reinstall the old gateways and, to avoid downtime, make sure the old gateways are up and running before proceeding with the canary uninstall.