- Upgrade Calico on Kubernetes
- About upgrading Calico
- Host Endpoints
- Upgrading an installation that was installed using helm
- Upgrading an installation that uses the operator
- Upgrading an installation that uses manifests and the Kubernetes API datastore
- Upgrading an installation that uses an etcd datastore
- Upgrading if you have Application Layer Policy enabled
- Migrating to auto host endpoints
Upgrade Calico on Kubernetes
About upgrading Calico
This page describes how to upgrade to v3.26 from Calico v3.0 or later. The procedure varies by datastore type and install method.
If you are using Calico in etcd mode on a Kubernetes cluster, we recommend upgrading to the Kubernetes API datastore as discussed here.
If you have installed Calico using the calico.yaml manifest, we recommend upgrading to the Calico operator, as discussed here.
Upgrading an installation that uses manifests and the Kubernetes API datastore
Upgrading an installation that connects directly to an etcd datastore
note
Do not use older versions of calicoctl after the upgrade. This may result in unexpected behavior and data loss.
Host Endpoints
caution
If your cluster has host endpoints with interfaceName: *, you must prepare your cluster before upgrading. Failure to do so will result in an outage.
In versions of Calico prior to v3.14, all-interfaces host endpoints (host endpoints with interfaceName: *) only supported pre-DNAT policy. The default behavior of all-interfaces host endpoints, in the absence of any policy, was to allow all traffic.
Beginning from v3.14, all-interfaces host endpoints support normal policy in addition to pre-DNAT policy. The support for normal policy includes a change in default behavior for all-interfaces host endpoints: in the absence of policy the default behavior is to drop traffic. This default behavior is consistent with “named” host endpoints (which specify a named interface such as “eth0”); named host endpoints drop traffic in the absence of policy.
Before upgrading to v3.26, you must ensure that global network policies are in place that select existing all-interfaces host endpoints and explicitly allow existing traffic flows. As a starting point, you can create an allow-all policy that selects existing all-interfaces host endpoints. First, we’ll add a label to the existing host endpoints. Get a list of the nodes that have an all-interfaces host endpoint:
calicoctl get hep -owide | grep '*' | awk '{print $1}'
With the names of the all-interfaces host endpoints, we can label each host endpoint with a new label (for example, host-endpoint-upgrade: “”):
calicoctl get hep -owide | grep '*' | awk '{print $1}' | xargs -I {} kubectl exec -i -n kube-system calicoctl -- /calicoctl label hostendpoint {} host-endpoint-upgrade=
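To see what the grep/awk stage of that pipeline does, here is a minimal sketch run against sample `calicoctl get hep -owide` output. The header and endpoint names below are illustrative, not from a real cluster.

```shell
# Sample output in the shape produced by `calicoctl get hep -owide`
# (illustrative rows; node1-all is an all-interfaces host endpoint).
sample_output='NAME         NODE    INTERFACE   IPS   PROFILES
node1-all    node1   *
node2-eth0   node2   eth0'

# Keep only rows containing "*" (all-interfaces host endpoints)
# and print the first column, the host endpoint name.
all_iface_heps=$(printf '%s\n' "$sample_output" | grep '\*' | awk '{print $1}')
echo "$all_iface_heps"
```

The resulting names are what the `xargs` stage feeds to `calicoctl label hostendpoint`.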
Now that the nodes with an all-interfaces host endpoint are labeled with host-endpoint-upgrade, we can create a policy to log and allow all traffic going into or out of the host endpoints temporarily:
cat > allow-all-upgrade.yaml <<EOF
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-all-upgrade
spec:
  selector: has(host-endpoint-upgrade)
  types:
    - Ingress
    - Egress
  ingress:
    - action: Log
    - action: Allow
  egress:
    - action: Log
    - action: Allow
EOF
Apply the policy:
calicoctl apply -f allow-all-upgrade.yaml
After applying this policy, all-interfaces host endpoints will log and allow all traffic through them. This policy will allow all traffic not accounted for by other policies. After upgrading, please review syslog logs for traffic going through the host endpoints and update the policy as needed to secure traffic to the host endpoints.
Upgrading an installation that was installed using helm
Prior to release v3.23, the Calico helm chart itself deployed the tigera-operator
namespace and required that the helm release was installed in the default
namespace. Newer releases properly defer creation of the tigera-operator
namespace to the user and allow installation of the chart into the tigera-operator
namespace.
When upgrading from a version of Calico v3.22 or lower to a version of Calico v3.23 or greater, you must complete the following steps to migrate ownership of the helm resources to the new chart location.
Upgrade from Calico versions prior to v3.23.0
Patch existing resources so that the new chart can assume ownership.
kubectl patch installation default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch apiserver default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch podsecuritypolicy tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch -n tigera-operator deployment tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch -n tigera-operator serviceaccount tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch clusterrole tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch clusterrolebinding tigera-operator tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
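The seven patch commands above differ only in the resource kind, name, and namespace. As a sketch, the loop below generates the same commands from a list (it echoes them rather than running them, so you can review before applying; the resource names assume the default tigera-operator install):

```shell
# Annotation that hands ownership of each resource to the new chart location.
ANNOTATION='{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'

# Cluster-scoped (or default-namespace) resources to patch.
resources='installation default
apiserver default
podsecuritypolicy tigera-operator
clusterrole tigera-operator
clusterrolebinding tigera-operator'

cmds=$(
  while read -r kind name; do
    printf 'kubectl patch %s %s --type=merge -p %s\n' "$kind" "$name" "$ANNOTATION"
  done <<EOF
$resources
EOF
  # Namespaced resources need -n tigera-operator.
  printf 'kubectl patch -n tigera-operator deployment tigera-operator --type=merge -p %s\n' "$ANNOTATION"
  printf 'kubectl patch -n tigera-operator serviceaccount tigera-operator --type=merge -p %s\n' "$ANNOTATION"
)
echo "$cmds"
```

Pipe the output to `sh` (or drop the `printf` wrappers) once you have confirmed the generated commands match the list above.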
Apply the v3.26 CRDs:
kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/operator-crds.yaml
Install the helm chart in the tigera-operator namespace.
helm install calico projectcalico/tigera-operator --version v3.26.4 --namespace tigera-operator
Once the install has succeeded, you can delete any old releases in the default namespace.
kubectl delete secret -n default -l name=calico,owner=helm --dry-run
note
The above command uses --dry-run to avoid making changes to your cluster. We recommend reviewing the output and then re-running the command without --dry-run to commit the changes.
All other upgrades
Apply the v3.26 CRDs:
kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/operator-crds.yaml
Run the helm upgrade:
helm upgrade calico projectcalico/tigera-operator
Upgrading an installation that uses the operator
Download the v3.26 operator manifest.
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml -O
Use the following command to initiate an upgrade.
kubectl replace -f tigera-operator.yaml
Upgrading an installation that uses manifests and the Kubernetes API datastore
Download the v3.26 manifest that corresponds to your original installation method.
Calico for policy and networking
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/calico.yaml -o upgrade.yaml
Calico for policy and flannel for networking
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/canal.yaml -o upgrade.yaml
Calico for policy (advanced)
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/calico-policy-only.yaml -o upgrade.yaml
note
If you manually modified the manifest, you must manually apply the same changes to the downloaded manifest.
Use the following command to initiate a rolling update.
kubectl apply --server-side --force-conflicts -f upgrade.yaml
Watch the status of the upgrade as follows.
watch kubectl get pods -n kube-system
Verify that the status of all Calico pods indicates Running.
calico-node-hvvg8   2/2   Running   0   3m
calico-node-vm8kh   2/2   Running   0   3m
calico-node-w92wk   2/2   Running   0   3m
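If you prefer to script the check rather than watch the output, here is a minimal sketch that flags any pod not yet in the Running state, applied to sample `kubectl get pods` rows (illustrative, not from a real cluster):

```shell
# Sample rows in the shape of `kubectl get pods -n kube-system --no-headers`.
pods='calico-node-hvvg8   2/2   Running   0   3m
calico-node-vm8kh   2/2   Running   0   3m
calico-node-w92wk   2/2   Running   0   3m'

# Column 3 is STATUS; collect any pods that are not Running.
not_running=$(printf '%s\n' "$pods" | awk '$3 != "Running" {print $1}')
if [ -z "$not_running" ]; then
  status=ok
else
  status=pending
fi
echo "$status"
```

Against a live cluster you would substitute `kubectl get pods -n kube-system --no-headers` for the sample data.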
Remove any existing calicoctl instances, install the new calicoctl, and configure it to connect to your datastore.
Use the following command to check the Calico version number.
calicoctl version
It should return a Cluster Version of v3.26.x.
If you have enabled application layer policy, follow the instructions below to complete your upgrade. Skip this if you are not using Istio with Calico.
If you were upgrading from a version of Calico prior to v3.14 and followed the pre-upgrade steps for host endpoints above, review traffic logs from the temporary policy, add any global network policies needed to allow traffic, and delete the temporary network policy allow-all-upgrade.
Congratulations! You have upgraded to Calico v3.26.
Upgrading an installation that uses an etcd datastore
Download the v3.26 manifest that corresponds to your original installation method.
Calico for policy and networking
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/calico-etcd.yaml -o upgrade.yaml
Calico for policy and flannel for networking
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/canal-etcd.yaml -o upgrade.yaml
note
You must manually apply the changes you made to the manifest during installation to the downloaded v3.26 manifest. At a minimum, you must set the etcd_endpoints value.
Use the following command to initiate a rolling update.
kubectl apply --server-side --force-conflicts -f upgrade.yaml
Watch the status of the upgrade as follows.
watch kubectl get pods -n kube-system
Verify that the status of all Calico pods indicates Running.
calico-kube-controllers-6d4b9d6b5b-wlkfj   1/1   Running   0   3m
calico-node-hvvg8                          1/2   Running   0   3m
calico-node-vm8kh                          1/2   Running   0   3m
calico-node-w92wk                          1/2   Running   0   3m
tip
The calico-node pods will report 1/2 in the READY column, as shown.
Remove any existing calicoctl instances, install the new calicoctl, and configure it to connect to your datastore.
Use the following command to check the Calico version number.
calicoctl version
It should return a Cluster Version of v3.26.x.
If you have enabled application layer policy, follow the instructions below to complete your upgrade. Skip this if you are not using Istio with Calico.
If you were upgrading from a version of Calico prior to v3.14 and followed the pre-upgrade steps for host endpoints above, review traffic logs from the temporary policy, add any global network policies needed to allow traffic, and delete the temporary network policy allow-all-upgrade.
Congratulations! You have upgraded to Calico v3.26.
Upgrading if you have Application Layer Policy enabled
Dikastes is versioned the same as the rest of Calico, but an upgraded calico-node
will still be able to work with a downlevel Dikastes so that you will not lose data plane connectivity during the upgrade. Once calico-node
is upgraded, you can begin redeploying your service pods with the updated version of Dikastes.
If you have enabled application layer policy, take the following steps to upgrade the Dikastes sidecars running in your application pods. Skip these steps if you are not using Istio with Calico.
Update the Istio sidecar injector template to use the new version of Dikastes. Replace <your Istio version> below with the full version string of your Istio install, for example 1.4.2.
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/alp/istio-inject-configmap-<your Istio version>.yaml
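The placeholder substitution can be scripted as follows. ISTIO_VERSION is a variable you must set yourself to your cluster's Istio version; 1.4.2 here is only the example value from the text.

```shell
# Set this to the full version string of your Istio install
# (1.4.2 is an illustrative example, not a recommendation).
ISTIO_VERSION="1.4.2"

# Build the manifest URL with the version substituted in.
url="https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/alp/istio-inject-configmap-${ISTIO_VERSION}.yaml"
echo "$url"
# Then apply it:
#   kubectl apply -f "$url"
```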
Once the new template is in place, newly created pods use the upgraded version of Dikastes. Perform a rolling update of each of your service deployments to get them on the new version of Dikastes.
Migrating to auto host endpoints
caution
Auto host endpoints have an allow-all profile attached, which allows all traffic in the absence of network policy. This may result in unexpected traffic being allowed until you apply network policies to restrict it.
In order to migrate existing all-interfaces host endpoints to Calico-managed auto host endpoints:
Add any labels on existing all-interfaces host endpoints to their corresponding Kubernetes nodes. Calico manages labels on automatic host endpoints by syncing labels from their nodes. Any labels on existing all-interfaces host endpoints should be added to their respective nodes. For example, if your existing all-interface host endpoint for node node1 has the label environment: dev, then you must add that same label to its node:
kubectl label node node1 environment=dev
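When a host endpoint carries several labels, they can be converted to `kubectl label` arguments in one pass. This is a sketch using sample label data in the key: value form shown by `calicoctl get hep -o yaml`; the labels and node name are illustrative.

```shell
# Illustrative labels from an existing all-interfaces host endpoint.
hep_labels='environment: dev
team: platform'

# Convert "key: value" lines into "key=value" arguments for kubectl.
label_args=$(printf '%s\n' "$hep_labels" | awk -F': ' '{printf "%s=%s ", $1, $2}')
echo "kubectl label node node1 $label_args"
```

Review the echoed command, then run it (dropping the `echo`) against each node that has a corresponding all-interfaces host endpoint.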
Enable auto host endpoints by following the enable automatic host endpoints how-to guide. Note that automatic host endpoints are created with a profile attached that allows all traffic in the absence of network policy.
calicoctl patch kubecontrollersconfiguration default --patch '{"spec": {"controllers": {"node": {"hostEndpoint": {"autoCreate": "Enabled"}}}}}'
Delete old all-interfaces host endpoints. You can distinguish host endpoints managed by Calico from others in several ways. First, automatic host endpoints have the label projectcalico.org/created-by: calico-kube-controllers. Second, automatic host endpoints' names have the suffix -auto-hep.
calicoctl delete hostendpoint <old_hostendpoint_name>
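Using the -auto-hep suffix mentioned above, the old (non-Calico-managed) host endpoints can be filtered out before deletion. A minimal sketch on sample endpoint names (illustrative, not from a real cluster):

```shell
# Sample host endpoint names; the -auto-hep entries are Calico-managed.
heps='node1-auto-hep
node1-manual
node2-auto-hep'

# Keep only endpoints WITHOUT the -auto-hep suffix: these are the old
# manually created host endpoints that are safe to delete.
old_heps=$(printf '%s\n' "$heps" | grep -v -- '-auto-hep$')
echo "$old_heps"
# Each remaining name can then be passed to:
#   calicoctl delete hostendpoint <old_hostendpoint_name>
```

Against a live cluster you would generate the name list from `calicoctl get hep` instead of the sample data.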