DaemonSet Upgrade Model

Background

In edge scenarios, the native DaemonSet upgrade model does not fully satisfy existing requirements. When the cloud-edge network is disconnected, the DaemonSet upgrade process can be blocked. In addition, the native upgrade model provides no upgrade operation API, so users cannot control application upgrades on their own.

To address these problems, we extend the native DaemonSet upgrade model with a custom controller, daemonPodUpdater-controller, which provides two upgrade models: AdvancedRollingUpdate and OTA.

  • AdvancedRollingUpdate: Solves the problem of the DaemonSet upgrade process being blocked by Not-Ready nodes when the cloud-edge network is disconnected. During an AdvancedRollingUpdate upgrade, Not-Ready nodes are ignored, and when a Not-Ready node turns Ready, its upgrade is completed automatically.
  • OTA: Adds the pod status condition PodNeedUpgrade, which indicates whether an upgrade is available. The YurtHub OTA component can use this condition to determine whether a new version of the DaemonSet application exists.

Configuration

```yaml
# example configuration for AdvancedRollingUpdate or OTA upgrade
apiVersion: apps/v1
kind: DaemonSet
metadata:
  # ···
  annotations:
    # This annotation is the first prerequisite for using AdvancedRollingUpdate or OTA upgrade;
    # the only valid values are "AdvancedRollingUpdate" and "OTA".
    apps.openyurt.io/update-strategy: OTA
    # This annotation controls the rolling update and only works in AdvancedRollingUpdate mode.
    # It accepts the same values as the native DaemonSet maxUnavailable and defaults to 10%.
    apps.openyurt.io/max-unavailable: 30%
  # ···
spec:
  # ···
  # Setting updateStrategy to "OnDelete" is the other prerequisite for using AdvancedRollingUpdate or OTA upgrade.
  updateStrategy:
    type: OnDelete
  # ···
```

In short, if you wish to use AdvancedRollingUpdate or OTA upgrade, you need to set the annotation apps.openyurt.io/update-strategy to "AdvancedRollingUpdate" or "OTA" and set .spec.updateStrategy.type to "OnDelete".
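For an existing DaemonSet, both prerequisites can be applied in place; a minimal sketch using kubectl patch (the DaemonSet name my-daemonset is a placeholder, not from this document):

```bash
# Add the upgrade-strategy annotation and switch updateStrategy to OnDelete.
# "my-daemonset" is hypothetical; substitute your DaemonSet's name.
kubectl patch daemonset my-daemonset --type merge -p \
  '{"metadata":{"annotations":{"apps.openyurt.io/update-strategy":"OTA"}},"spec":{"updateStrategy":{"type":"OnDelete"}}}'
```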

Usage

1) Install Yurt-Manager Component

The daemonPodUpdater controller is integrated into the Yurt-Manager component, which must be installed before using the AdvancedRollingUpdate or OTA upgrade model. You can refer to Deploy OpenYurt for detailed instructions.
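Before proceeding, you can verify that the component is running; a quick sketch, assuming Yurt-Manager is deployed as a deployment named yurt-manager in the kube-system namespace (deployment name and namespace are assumptions based on the default installation):

```bash
# Confirm the yurt-manager deployment is up; name and namespace are assumptions.
kubectl get deployment yurt-manager -n kube-system
```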

2) AdvancedRollingUpdate Upgrade Model

  • Create a DaemonSet instance

```bash
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  annotations:
    apps.openyurt.io/update-strategy: AdvancedRollingUpdate
spec:
  selector:
    matchLabels:
      app: nginx
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.4
EOF
```
  • Get nginx-daemonset pods

```bash
$ kubectl get pods -o wide | grep nginx-daemonset
nginx-daemonset-bv5jg   1/1   Running   0   21m   10.244.2.2   openyurt-e2e-test-worker3   <none>   <none>
nginx-daemonset-fhsr6   1/1   Running   0   21m   10.244.1.2   openyurt-e2e-test-worker    <none>   <none>
nginx-daemonset-lmmtd   1/1   Running   0   21m   10.244.3.2   openyurt-e2e-test-worker2   <none>   <none>
```
  • Simulate cloud-edge network disconnection: assume that nodes openyurt-e2e-test-worker2 and openyurt-e2e-test-worker3 are disconnected from the cloud node. This example uses Kind to create the cluster, so the disconnection can be simulated by removing the node containers from the virtual bridge.

```bash
$ docker network disconnect kind openyurt-e2e-test-worker2
$ docker network disconnect kind openyurt-e2e-test-worker3
$ kubectl get nodes -o wide
NAME                              STATUS     ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION     CONTAINER-RUNTIME
openyurt-e2e-test-control-plane   Ready      control-plane,master   36m   v1.22.7   172.18.0.4    <none>        Ubuntu 21.10   5.10.76-linuxkit   containerd://1.5.10
openyurt-e2e-test-worker          Ready      <none>                 35m   v1.22.7   172.18.0.2    <none>        Ubuntu 21.10   5.10.76-linuxkit   containerd://1.5.10
openyurt-e2e-test-worker2         NotReady   <none>                 35m   v1.22.7   172.18.0.3    <none>        Ubuntu 21.10   5.10.76-linuxkit   containerd://1.5.10
openyurt-e2e-test-worker3         NotReady   <none>                 35m   v1.22.7   172.18.0.5    <none>        Ubuntu 21.10   5.10.76-linuxkit   containerd://1.5.10
```
  • Update the DaemonSet: change the container image from nginx:1.19.4 to nginx:1.19.5 (one way to apply the change is sketched after the snippet)

```yaml
***
containers:
- name: nginx
  image: nginx:1.19.5
***
```
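A sketch of applying this image change with kubectl set image (kubectl edit or kubectl apply works just as well):

```bash
# Update the pod template image; with OnDelete, the native controller does not
# restart running pods -- the daemonPodUpdater controller deletes pods on Ready nodes.
kubectl set image daemonset/nginx-daemonset nginx=nginx:1.19.5
```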
  • Get pods: the old pod default/nginx-daemonset-fhsr6 on node openyurt-e2e-test-worker has been deleted and the new pod default/nginx-daemonset-slp5t has been created; the pods on the two disconnected nodes are not upgraded for now

```bash
nginx-daemonset-bv5jg   1/1   Running   0   33m     10.244.2.2   openyurt-e2e-test-worker3   <none>   <none>
nginx-daemonset-lmmtd   1/1   Running   0   33m     10.244.3.2   openyurt-e2e-test-worker2   <none>   <none>
nginx-daemonset-slp5t   1/1   Running   0   5m54s   10.244.1.3   openyurt-e2e-test-worker    <none>   <none>
```
  • Restore network connectivity of the nodes

```bash
$ docker network connect kind openyurt-e2e-test-worker2
$ docker network connect kind openyurt-e2e-test-worker3
$ kubectl get nodes -o wide
NAME                              STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION     CONTAINER-RUNTIME
openyurt-e2e-test-control-plane   Ready    control-plane,master   49m   v1.22.7   172.18.0.4    <none>        Ubuntu 21.10   5.10.76-linuxkit   containerd://1.5.10
openyurt-e2e-test-worker          Ready    <none>                 48m   v1.22.7   172.18.0.2    <none>        Ubuntu 21.10   5.10.76-linuxkit   containerd://1.5.10
openyurt-e2e-test-worker2         Ready    <none>                 48m   v1.22.7   172.18.0.3    <none>        Ubuntu 21.10   5.10.76-linuxkit   containerd://1.5.10
openyurt-e2e-test-worker3         Ready    <none>                 48m   v1.22.7   172.18.0.5    <none>        Ubuntu 21.10   5.10.76-linuxkit   containerd://1.5.10
```
  • Get pods: the DaemonSet pods on all nodes have been upgraded

```bash
nginx-daemonset-kbkf6   1/1   Running   0   88s   10.244.3.3   openyurt-e2e-test-worker2   <none>   <none>
nginx-daemonset-scgtv   1/1   Running   0   51s   10.244.2.3   openyurt-e2e-test-worker3   <none>   <none>
nginx-daemonset-slp5t   1/1   Running   0   11m   10.244.1.3   openyurt-e2e-test-worker    <none>   <none>
```
  • Check the pod image version: all pods have been upgraded to nginx:1.19.5

```bash
***
Containers:
  nginx:
    Container ID:   containerd://f7d4b3f1257a0d1d8da862671c11cb094f9fba1ba0041b7a5f783d9c9e4d8449
    Image:          nginx:1.19.5
    Image ID:       docker.io/library/nginx@sha256:31de7d2fd0e751685e57339d2b4a4aa175aea922e592d36a7078d72db0a45639
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 14 Oct 2022 14:21:25 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wrhj8 (ro)
***
```
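Instead of describing pods one by one, the image of every DaemonSet pod can be listed at once; a sketch using the app=nginx label from the example above:

```bash
# Print the name and container image of each pod in the DaemonSet.
kubectl get pods -l app=nginx \
  -o custom-columns='NAME:.metadata.name,IMAGE:.spec.containers[0].image'
```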

3) OTA Upgrade Model

OTA Upgrade API

YurtHub provides two REST APIs for OTA upgrades.

  1. GET /pods

    This API allows you to get information about the pods on the node.

  2. POST /openyurt.io/v1/namespaces/{ns}/pods/{podname}/upgrade

    This API allows you to trigger the upgrade of a specified DaemonSet pod. The path parameters ns and podname are the namespace and name of the pod, respectively.
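A sketch of calling the list endpoint from an edge node (YurtHub's default address 127.0.0.1:10267 is assumed, as used in the example below; jq is used only for readability and is not required):

```bash
# List the names of the pods YurtHub reports on this node.
curl -s http://127.0.0.1:10267/pods | jq -r '.items[].metadata.name'
```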

OTA Upgrade Example

  • Create a DaemonSet instance

```bash
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  annotations:
    apps.openyurt.io/update-strategy: OTA
spec:
  selector:
    matchLabels:
      app: nginx
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.4
EOF

# get nginx-daemonset pods
$ kubectl get pods -o wide | grep nginx-daemonset
nginx-daemonset-bwzss   1/1   Running   0   92s   10.244.3.4   openyurt-e2e-test-worker2   <none>   <none>
nginx-daemonset-ppf9p   1/1   Running   0   92s   10.244.1.4   openyurt-e2e-test-worker    <none>   <none>
nginx-daemonset-rgp9h   1/1   Running   0   92s   10.244.2.4   openyurt-e2e-test-worker3   <none>   <none>
```
  • Check the pod status condition PodNeedUpgrade: take pod nginx-daemonset-bwzss on node openyurt-e2e-test-worker2 as an example (a scriptable variant is sketched after the output)

```bash
$ kubectl describe pods nginx-daemonset-bwzss
***
Conditions:
  Type              Status
  PodNeedUpgrade    False
***
```
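A more scriptable check of the same condition, using a standard kubectl JSONPath filter:

```bash
# Print only the status of the PodNeedUpgrade condition.
kubectl get pod nginx-daemonset-bwzss \
  -o jsonpath='{.status.conditions[?(@.type=="PodNeedUpgrade")].status}'
```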
  • Update the DaemonSet: change the container image from nginx:1.19.4 to nginx:1.19.5, as in the AdvancedRollingUpdate example above

```yaml
***
containers:
- name: nginx
  image: nginx:1.19.5
***
```
  • Check the pod status condition PodNeedUpgrade again

```bash
$ kubectl describe pods nginx-daemonset-bwzss
***
Conditions:
  Type              Status
  PodNeedUpgrade    True
***
```
  • Execute the OTA upgrade

```bash
# enter the edge node container of the Kind cluster
$ docker exec -it openyurt-e2e-test-worker2 /bin/bash
# call the upgrade API; it is only available on edge nodes
$ curl -X POST 127.0.0.1:10267/openyurt.io/v1/namespaces/default/pods/nginx-daemonset-bwzss/upgrade
Start updating pod default/nginx-daemonset-bwzss
```
  • Check the upgrade result: pod nginx-daemonset-bwzss on node openyurt-e2e-test-worker2 has been deleted and the new pod nginx-daemonset-vrvhn has been created

```bash
# check the result
$ kubectl get pods -o wide | grep nginx-daemonset
nginx-daemonset-ppf9p   1/1   Running   0   15m   10.244.1.4   openyurt-e2e-test-worker    <none>   <none>
nginx-daemonset-rgp9h   1/1   Running   0   15m   10.244.2.4   openyurt-e2e-test-worker3   <none>   <none>
nginx-daemonset-vrvhn   1/1   Running   0   63s   10.244.3.5   openyurt-e2e-test-worker2   <none>   <none>

# check the pod container image
$ kubectl describe pods nginx-daemonset-vrvhn
***
Containers:
  nginx:
    Container ID:   containerd://18df6aa88076639353ea0b3d87f340cd4c86ab27a7f154bce06345e9764c997a
    Image:          nginx:1.19.5
    Image ID:       docker.io/library/nginx@sha256:31de7d2fd0e751685e57339d2b4a4aa175aea922e592d36a7078d72db0a45639
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 14 Oct 2022 16:25:20 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p6kjh (ro)
***
```
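The two APIs can also be combined to upgrade every pod on a node that reports PodNeedUpgrade=True; a rough sketch to run on the edge node, assuming jq is available, YurtHub listens on its default address 127.0.0.1:10267, and GET /pods returns a standard PodList:

```bash
# For each pod on this node whose PodNeedUpgrade condition is True,
# call the YurtHub upgrade endpoint.
for p in $(curl -s http://127.0.0.1:10267/pods | jq -r \
  '.items[]
   | select(.status.conditions[]? | (.type=="PodNeedUpgrade" and .status=="True"))
   | .metadata.namespace + "/" + .metadata.name'); do
  ns=${p%%/*}; name=${p##*/}
  curl -X POST "http://127.0.0.1:10267/openyurt.io/v1/namespaces/${ns}/pods/${name}/upgrade"
done
```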