Liveness and Readiness Probes

Liveness and Readiness Probes can be configured on a VirtualMachineInstance in much the same way as they are configured on Containers.

Liveness Probes will effectively stop the VirtualMachineInstance if they fail, allowing higher-level controllers, like VirtualMachine or VirtualMachineInstanceReplicaSet, to spawn new instances, which will hopefully be responsive again.

Readiness Probes indicate to Services and Endpoints whether the VirtualMachineInstance is ready to receive traffic. If a Readiness Probe fails, the VirtualMachineInstance is removed from the Endpoints backing the Services until the probe recovers.
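
For illustration, here is a minimal sketch of a Service that would select the vmi-fedora instance used in the examples below via its special: vmi-fedora label; the Service name and port mapping are assumptions and not part of the original examples. Only instances whose Readiness Probe passes remain in the Service's Endpoints.

    # Hypothetical Service selecting the vmi-fedora example by its label.
    # The virt-launcher pod inherits the VMI labels, so the selector matches it;
    # the pod only backs the Service's Endpoints while the Readiness Probe passes.
    apiVersion: v1
    kind: Service
    metadata:
      name: vmi-fedora-http        # assumed name
    spec:
      selector:
        special: vmi-fedora
      ports:
      - protocol: TCP
        port: 80                   # assumed Service port
        targetPort: 1500           # HTTP server port used in the examples below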

Watchdogs focus on ensuring that the Operating System is still responsive, complementing the probes, which are more workload-centric. Watchdogs require kernel support from the guest and additional tooling such as the commonly used watchdog binary.

Define an HTTP Liveness Probe

The following VirtualMachineInstance configures an HTTP Liveness Probe via spec.livenessProbe.httpGet, which queries port 1500 of the VirtualMachineInstance after an initial delay of 120 seconds. The VirtualMachineInstance itself installs and runs a minimal HTTP server on port 1500 via cloud-init.

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachineInstance
    metadata:
      labels:
        special: vmi-fedora
      name: vmi-fedora
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
        resources:
          requests:
            memory: 1024M
      livenessProbe:
        initialDelaySeconds: 120
        periodSeconds: 20
        httpGet:
          port: 1500
        timeoutSeconds: 10
      terminationGracePeriodSeconds: 0
      volumes:
      - name: containerdisk
        registryDisk:
          image: registry:5000/kubevirt/fedora-cloud-registry-disk-demo:devel
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
            bootcmd:
              - setenforce 0
              - dnf install -y nmap-ncat
              - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\nContent-Length: 12\\n\\nHello World!'
        name: cloudinitdisk
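
To try the example, the manifest can be applied with kubectl as usual; the file name below is only illustrative:

    # Create the VMI and watch its status. Once the Liveness Probe starts
    # failing, the VMI is stopped; a VirtualMachine or
    # VirtualMachineInstanceReplicaSet owning it would spawn a replacement.
    kubectl create -f vmi-fedora.yaml
    kubectl get vmi vmi-fedora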

Define a TCP Liveness Probe

The following VirtualMachineInstance configures a TCP Liveness Probe via spec.livenessProbe.tcpSocket, which attempts to open a connection to port 1500 of the VirtualMachineInstance after an initial delay of 120 seconds. The VirtualMachineInstance itself installs and runs a minimal HTTP server on port 1500 via cloud-init.

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachineInstance
    metadata:
      labels:
        special: vmi-fedora
      name: vmi-fedora
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
        resources:
          requests:
            memory: 1024M
      livenessProbe:
        initialDelaySeconds: 120
        periodSeconds: 20
        tcpSocket:
          port: 1500
        timeoutSeconds: 10
      terminationGracePeriodSeconds: 0
      volumes:
      - name: containerdisk
        registryDisk:
          image: registry:5000/kubevirt/fedora-cloud-registry-disk-demo:devel
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
            bootcmd:
              - setenforce 0
              - dnf install -y nmap-ncat
              - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\nContent-Length: 12\\n\\nHello World!'
        name: cloudinitdisk

Define Readiness Probes

Readiness Probes are configured in a similar way to Liveness Probes. Instead of spec.livenessProbe, spec.readinessProbe needs to be filled in:

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachineInstance
    metadata:
      labels:
        special: vmi-fedora
      name: vmi-fedora
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
        resources:
          requests:
            memory: 1024M
      readinessProbe:
        httpGet:
          port: 1500
        initialDelaySeconds: 120
        periodSeconds: 20
        timeoutSeconds: 10
        failureThreshold: 3
        successThreshold: 3
      terminationGracePeriodSeconds: 0
      volumes:
      - name: containerdisk
        registryDisk:
          image: registry:5000/kubevirt/fedora-cloud-registry-disk-demo:devel
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
            bootcmd:
              - setenforce 0
              - dnf install -y nmap-ncat
              - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\n\\nHello World!'
        name: cloudinitdisk

Note that in the case of Readiness Probes it is also possible to set a failureThreshold and a successThreshold, so that the VirtualMachineInstance only flips between the ready and non-ready state after the probe has failed or succeeded several times in a row.
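
The probe result is reflected in the VirtualMachineInstance's Ready condition, which can be checked, for example, like this:

    # Print the Ready condition driven by the Readiness Probe; with the
    # thresholds above it only flips after three consecutive results.
    kubectl get vmi vmi-fedora -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'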

Dual-stack considerations

Some context is needed to understand the limitations imposed by a dual-stack network configuration on readiness or liveness probes. Users must be fully aware that a dual-stack configuration is currently only available when using the masquerade binding type. Furthermore, it must be recalled that accessing a VM using the masquerade binding type is performed via the pod IP address; in dual-stack mode, both IPv4 and IPv6 addresses can be used to reach the VM.

Dual-stack networking configurations have a limitation when using HTTP / TCP probes: the VMI cannot be probed by its IPv6 address. The reason is that the host field of both the HTTP and TCP probe actions defaults to the pod's IP address, which is currently always the IPv4 address.

Since the pod’s IP address is not known before creating the VMI, it is not possible to pre-provision the probe’s host field.
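
For reference only, this is where the address would have to go; the snippet below is purely illustrative, since the required IPv6 address cannot be known when the manifest is written:

    # Illustrative only - the host field of an httpGet (or tcpSocket) action
    # would need the pod's IPv6 address, which is not known in advance.
    readinessProbe:
      httpGet:
        port: 1500
        host: "<pod IPv6 address, not known before the VMI exists>"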

Defining a Watchdog

A watchdog is a more VM-centric approach that focuses on the responsiveness of the Operating System itself. The i6300esb watchdog device can be configured as follows:

    ---
    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstance
    metadata:
      labels:
        special: vmi-with-watchdog
      name: vmi-with-watchdog
    spec:
      domain:
        devices:
          watchdog:
            name: mywatchdog
            i6300esb:
              action: "poweroff"
          disks:
          - disk:
              bus: virtio
            name: containerdisk
        machine:
          type: ""
        resources:
          requests:
            memory: 512M
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: quay.io/kubevirt/alpine-container-disk-demo
        name: containerdisk

The example above configures the watchdog with the poweroff action, which defines what happens if the OS stops responding. Other possible actions are reset and shutdown. The Alpine VM in this example will have the device exposed as /dev/watchdog, which can then be used by the watchdog binary. For example, if root executes this command inside the VM:

    watchdog -t 2000ms -T 4000ms /dev/watchdog

the watchdog will send a heartbeat every two seconds to /dev/watchdog, and after four seconds without a heartbeat the defined action will be executed, in this case a hard poweroff.
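
A quick way to verify the setup, assuming the example VirtualMachineInstance above is running:

    # Inside the guest: confirm the watchdog device was exposed.
    ls -l /dev/watchdog

    # On the cluster: once the watchdog fires with the "poweroff" action,
    # the VMI should no longer report the Running phase.
    kubectl get vmi vmi-with-watchdog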