VM

The following sections contain information useful in troubleshooting issues related to Harvester VM management.

VM Start Button is Not Visible

Issue Description

On rare occasions, the Start button is unavailable on the Harvester UI for VMs that are Off. Without that button, users are unable to start the VMs.

VM - Figure 1

VM General Operations

On the Harvester UI, the Stop button is visible after a VM is created and started.

VM - Figure 2

The Start button is visible after the VM is stopped.

VM - Figure 3

When the VM is powered off from inside the VM, both the Start and Restart buttons are visible.

VM - Figure 4

A Running VM

The objects vm, vmi, and pod, which are all related to the VM, exist. The status of all three objects is Running.

  # kubectl get vm
  NAME   AGE     STATUS    READY
  vm8    7m25s   Running   True
  # kubectl get vmi
  NAME   AGE   PHASE     IP            NODENAME   READY
  vm8    78s   Running   10.52.0.199   harv41     True
  # kubectl get pod
  NAME                      READY   STATUS    RESTARTS   AGE
  virt-launcher-vm8-tl46h   1/1     Running   0          80s

A VM Stopped Using the Harvester UI

Only the vm object exists, and its status is Stopped. The vmi and pod objects no longer exist.

  # kubectl get vm
  NAME   AGE    STATUS    READY
  vm8    123m   Stopped   False
  # kubectl get vmi
  No resources found in default namespace.
  # kubectl get pod
  No resources found in default namespace.
  #

A VM Stopped Using the VM’s Poweroff Command

The objects vm, vmi, and pod, which are all related to the VM, exist. The status of vm is Stopped, while the status of pod is Completed.

  # kubectl get vm
  NAME   AGE    STATUS    READY
  vm8    134m   Stopped   False
  # kubectl get vmi
  NAME   AGE     PHASE       IP            NODENAME   READY
  vm8    2m49s   Succeeded   10.52.0.199   harv41     False
  # kubectl get pod
  NAME                      READY   STATUS      RESTARTS   AGE
  virt-launcher-vm8-tl46h   0/1     Completed   0          2m54s

Issue Analysis

When the issue occurs, the objects vm, vmi, and pod exist. The status of the objects is similar to that of A VM Stopped Using the VM’s Poweroff Command.
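
When diagnosing, you can list all three objects with a single command; a minimal check, assuming the VM runs in the default namespace (add -n <namespace> otherwise):

  # vm and vmi are the KubeVirt short names for VirtualMachine and VirtualMachineInstance.
  kubectl get vm,vmi,pod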

Example:

The VM ocffm031v000 is not ready (status: "False") because the virt-launcher pod is terminating (reason: "PodTerminating").

  - apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    ...
    status:
      conditions:
      - lastProbeTime: "2023-07-20T08:37:37Z"
        lastTransitionTime: "2023-07-20T08:37:37Z"
        message: virt-launcher pod is terminating
        reason: PodTerminating
        status: "False"
        type: Ready
Similarly, the VMI (virtual machine instance) ocffm031v000 is not ready (status: "False") because the virt-launcher pod is terminating (reason: "PodTerminating").

  - apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstance
    ...
    name: ocffm031v000
    ...
    status:
      activePods:
        ec36a1eb-84a5-4421-b57b-2c14c1975018: aibfredg02
      conditions:
      - lastProbeTime: "2023-07-20T08:37:37Z"
        lastTransitionTime: "2023-07-20T08:37:37Z"
        message: virt-launcher pod is terminating
        reason: PodTerminating
        status: "False"
        type: Ready

On the other hand, the pod virt-launcher-ocffm031v000-rrkss is not ready (status: "False") because the pod has run to completion (reason: "PodCompleted").

The underlying container 0d7a0f64f91438cb78f026853e6bebf502df1bdeb64878d351fa5756edc98deb is terminated, and the exitCode is 0.

  - apiVersion: v1
    kind: Pod
    ...
    name: virt-launcher-ocffm031v000-rrkss
    ...
    ownerReferences:
    - apiVersion: kubevirt.io/v1
      ...
      kind: VirtualMachineInstance
      name: ocffm031v000
      uid: 8d2cf524-7e73-4713-86f7-89e7399f25db
    uid: ec36a1eb-84a5-4421-b57b-2c14c1975018
    ...
    status:
      conditions:
      - lastProbeTime: "2023-07-18T13:48:56Z"
        lastTransitionTime: "2023-07-18T13:48:56Z"
        message: the virtual machine is not paused
        reason: NotPaused
        status: "True"
        type: kubevirt.io/virtual-machine-unpaused
      - lastProbeTime: null
        lastTransitionTime: "2023-07-18T13:48:55Z"
        reason: PodCompleted
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2023-07-20T08:38:56Z"
        reason: PodCompleted
        status: "False"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2023-07-20T08:38:56Z"
        reason: PodCompleted
        status: "False"
        type: ContainersReady
      ...
      containerStatuses:
      - containerID: containerd://0d7a0f64f91438cb78f026853e6bebf502df1bdeb64878d351fa5756edc98deb
        image: registry.suse.com/suse/sles/15.4/virt-launcher:0.54.0-150400.3.3.2
        imageID: sha256:43bb08efdabb90913534b70ec7868a2126fc128887fb5c3c1b505ee6644453a2
        lastState: {}
        name: compute
        ready: false
        restartCount: 0
        started: false
        state:
          terminated:
            containerID: containerd://0d7a0f64f91438cb78f026853e6bebf502df1bdeb64878d351fa5756edc98deb
            exitCode: 0
            finishedAt: "2023-07-20T08:38:55Z"
            reason: Completed
            startedAt: "2023-07-18T13:50:17Z"
A critical difference is that both the Stop and Start actions appear in the stateChangeRequests field of the vm object.

  status:
    conditions:
    ...
    printableStatus: Stopped
    stateChangeRequests:
    - action: Stop
      uid: 8d2cf524-7e73-4713-86f7-89e7399f25db
    - action: Start
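
To check whether a stopped VM carries a stale Start request, you can read the field directly; a minimal check, with placeholder VM name and namespace:

  # Print the pending state change requests of the VM.
  kubectl get vm <vm-name> -n <namespace> -o jsonpath='{.status.stateChangeRequests}'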

Root Cause

The root cause of this issue is under investigation.

Notably, the source code checks the stateChangeRequests of the vm object and assumes that the VM is starting whenever a Start request is present. As a result, the Start and Restart actions are not offered for the object.

  func (vf *vmformatter) canStart(vm *kubevirtv1.VirtualMachine, vmi *kubevirtv1.VirtualMachineInstance) bool {
      if vf.isVMStarting(vm) {
          return false
      }
      ...
  }

  func (vf *vmformatter) canRestart(vm *kubevirtv1.VirtualMachine, vmi *kubevirtv1.VirtualMachineInstance) bool {
      if vf.isVMStarting(vm) {
          return false
      }
      ...
  }

  func (vf *vmformatter) isVMStarting(vm *kubevirtv1.VirtualMachine) bool {
      for _, req := range vm.Status.StateChangeRequests {
          if req.Action == kubevirtv1.StartRequest {
              return true
          }
      }
      return false
  }

Workaround

To address the issue, you can force delete the pod using the command kubectl delete pod virt-launcher-ocffm031v000-rrkss -n <namespace> --force (substitute your pod name and namespace).

After the pod is successfully deleted, the Start button becomes visible again on the Harvester UI.
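
As a sketch of the full sequence, using the pod name from the example above and a placeholder namespace:

  # Force delete the stuck virt-launcher pod, then confirm that the vmi and pod are gone.
  kubectl delete pod virt-launcher-ocffm031v000-rrkss -n <namespace> --force
  kubectl get vmi,pod -n <namespace>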

https://github.com/harvester/harvester/issues/4659

VM Stuck in Starting State with Error Message not a device node

Impacted versions: v1.3.0

Issue Description

Some VMs may fail to start and then become unresponsive after the cluster or some nodes are restarted. On the Dashboard screen of the Harvester UI, the status of the affected VMs is stuck at Starting.

VM - Figure 5

Issue Analysis

The status of the pod related to the affected VM is CreateContainerError.

  $ kubectl get pods
  NAME                      READY   STATUS                 RESTARTS   AGE
  virt-launcher-vm1-w9bqs   0/2     CreateContainerError   0          9m39s

The phrase failed to generate spec: not a device node can be found in the following:

  $ kubectl get pods -oyaml
  apiVersion: v1
  items:
  - apiVersion: v1
    kind: Pod
    metadata:
    ...
    containerStatuses:
    - image: registry.suse.com/suse/sles/15.5/virt-launcher:1.1.0-150500.8.6.1
      imageID: ""
      lastState: {}
      name: compute
      ready: false
      restartCount: 0
      started: false
      state:
        waiting:
          message: 'failed to generate container "50f0ec402f6e266870eafb06611850a5a03b2a0a86fdd6e562959719ccc003b5"
            spec: failed to generate spec: not a device node'
          reason: CreateContainerError

kubelet.log file:

  file path: /var/lib/rancher/rke2/agent/logs/kubelet.log
  E0205 20:44:31.683371 2837 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"compute\" with CreateContainerError: \"failed to generate container \\\"255d42ec2e01d45b4e2480d538ecc21865cf461dc7056bc159a80ee68c411349\\\" spec: failed to generate spec: not a device node\"" pod="default/virt-launcher-caddytest-9tjzj" podUID=d512bf3e-f215-4128-960a-0658f7e63c7c

containerd.log file:

  file path: /var/lib/rancher/rke2/agent/containerd/containerd.log
  time="2024-02-21T11:24:00.140298800Z" level=error msg="CreateContainer within sandbox \"850958f388e63f14a683380b3c52e57db35f21c059c0d93666f4fdaafe337e56\" for &ContainerMetadata{Name:compute,Attempt:0,} failed" error="failed to generate container \"5ddad240be2731d5ea5210565729cca20e20694e364e72ba14b58127e231bc79\" spec: failed to generate spec: not a device node"
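
To locate these messages on an affected node, you can grep the same log files (the RKE2 default paths shown above):

  grep "not a device node" /var/lib/rancher/rke2/agent/logs/kubelet.log
  grep "not a device node" /var/lib/rancher/rke2/agent/containerd/containerd.log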

After debug information was added to containerd, the logs show that the not a device node error refers to the file pvc-3c1b28fb-*.

  time="2024-02-22T15:15:08.557487376Z" level=error msg="CreateContainer within sandbox \"d23af3219cb27228623cf8168ec27e64e836ed44f2b2f9cf784f0529a7f92e1e\" for &ContainerMetadata{Name:compute,Attempt:0,} failed" error="failed to generate container \"e4ed94fb5e9145e8716bcb87aae448300799f345197d52a617918d634d9ca3e1\" spec: failed to generate spec: get device path: /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-3c1b28fb-683e-4bf5-9869-c9107a0f1732/20291c6b-62c3-4456-be8a-fbeac118ec19 containerPath: /dev/disk-0 error: not a device node"

This is a CSI-related file, but it is an empty regular file instead of the expected block device file. As a result, containerd denied the CreateContainer request.

  $ ls /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-3c1b28fb-683e-4bf5-9869-c9107a0f1732/ -alth
  total 8.0K
  drwxr-x--- 2 root root 4.0K Feb 22 15:10 .
  -rw-r--r-- 1 root root    0 Feb 22 14:28 aa851da3-cee1-45be-a585-26ae766c16ca
  -rw-r--r-- 1 root root    0 Feb 22 14:07 20291c6b-62c3-4456-be8a-fbeac118ec19
  drwxr-x--- 4 root root 4.0K Feb 22 14:06 ..
  -rw-r--r-- 1 root root    0 Feb 21 15:48 4333c9fd-c2c8-4da2-9b5a-1a310f80d9fd
  -rw-r--r-- 1 root root    0 Feb 21 09:18 becc0687-b6f5-433e-bfb7-756b00deb61b
  $ file /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-3c1b28fb-683e-4bf5-9869-c9107a0f1732/20291c6b-62c3-4456-be8a-fbeac118ec19
  /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-3c1b28fb-683e-4bf5-9869-c9107a0f1732/20291c6b-62c3-4456-be8a-fbeac118ec19: empty

The output listed above directly contrasts with the following example, which shows the expected device file of a running VM.

  $ ls /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-732f8496-103b-4a08-83af-8325e1c314b7/ -alth
  total 8.0K
  drwxr-x--- 2 root root 4.0K Feb 21 10:53 .
  drwxr-x--- 4 root root 4.0K Feb 21 10:53 ..
  brw-rw---- 1 root root 8, 16 Feb 21 10:53 4883af80-c202-4529-a2c6-4e7f15fe5a9b
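
To scan a node for affected entries, one option is to look for zero-length regular files under the CSI block-volume publish directory, where block device nodes are expected; this is a heuristic sketch using the path from the examples above:

  # Zero-length regular files here are suspects; healthy entries are block device nodes.
  find /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/ -type f -size 0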

Root Cause

After the cluster or specific nodes are rebooted, the kubelet calls NodePublishVolume for the new pod without first calling NodeStageVolume. The Longhorn CSI plugin then bind mounts the regular file left at the staging target path (previously used by the deleted pod) to the target path, and the operation is reported as successful.
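
In other words, the expected and observed CSI call sequences differ roughly as follows (a simplified sketch based on the description above, not the exact Longhorn code path):

  # Expected for a block volume after a node reboot:
  #   NodeStageVolume   -> stage the block device at the staging target path
  #   NodePublishVolume -> bind mount the staged device into the pod's publish path
  #
  # Observed in this issue:
  #   NodeStageVolume is skipped; a leftover zero-length regular file remains at the
  #   staging target path, and NodePublishVolume bind mounts that file to the publish
  #   path while still reporting success.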

Workaround

Cluster-level operations:

  1. Find the backing pods of the affected VMs and the related Longhorn volumes.

    $ kubectl get pods
    NAME                      READY   STATUS                 RESTARTS   AGE
    virt-launcher-vm1-nxfm4   0/2     CreateContainerError   0          7m11s
    $ kubectl get pvc -A
    NAMESPACE   NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
    default     vm1-disk-0-9gc6h   Bound    pvc-f1798969-5b72-4d76-9f0e-64854af7b59c   1Gi        RWX            longhorn-image-fxsqr   7d22h
  2. Stop the affected VMs from the Harvester UI.

    The VM may become stuck in the Stopping state; if so, continue with the next step.

  3. Force delete the backing pods.

    $ kubectl delete pod virt-launcher-vm1-nxfm4 --force
    Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
    pod "virt-launcher-vm1-nxfm4" force deleted

    The VM is off now.

    VM - Figure 6

Node-level operations, performed node by node:

  1. Cordon a node.
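
    For example (the node name is a placeholder):

    $ kubectl cordon <node-name>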

  2. Unmount all the affected Longhorn volumes on this node.

    SSH to the node and run the umount command as root (for example, with sudo -i).

    $ umount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-f1798969-5b72-4d76-9f0e-64854af7b59c/dev/*
    umount: /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-f1798969-5b72-4d76-9f0e-64854af7b59c/dev/4b2ab666-27bd-4e3c-a218-fb3d48a72e69: not mounted.
    umount: /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-f1798969-5b72-4d76-9f0e-64854af7b59c/dev/6aaf2bbe-f688-4dcd-855a-f9e2afa18862: not mounted.
    umount: /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-f1798969-5b72-4d76-9f0e-64854af7b59c/dev/91488f09-ff22-45f4-afc0-ca97f67555e7: not mounted.
    umount: /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-f1798969-5b72-4d76-9f0e-64854af7b59c/dev/bb4d0a15-737d-41c0-946c-85f4a56f072f: not mounted.
    umount: /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-f1798969-5b72-4d76-9f0e-64854af7b59c/dev/d2a54e32-4edc-4ad8-a748-f7ef7a2cacab: not mounted.
  3. Uncordon this node.
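
    For example:

    $ kubectl uncordon <node-name>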

  4. Start the affected VMs from the Harvester UI.

    After some time, the VM runs successfully.

    VM - Figure 7

    The newly generated CSI file is the expected block device file.

    $ ls /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-f1798969-5b72-4d76-9f0e-64854af7b59c/ -alth
    ...
    brw-rw---- 1 root root 8, 64 Mar  6 11:47 7beb531d-a781-4775-ba5e-8773773d77f1

https://github.com/harvester/harvester/issues/5109

https://github.com/longhorn/longhorn/issues/8009