- OKD Virtualization runbooks
- CDIDataImportCronOutdated
- CDIDataVolumeUnusualRestartCount
- CDINotReady
- CDIOperatorDown
- CDIStorageProfilesIncomplete
- CnaoDown
- HPPNotReady
- HPPOperatorDown
- HPPSharingPoolPathWithOS
- KubeMacPoolDown
- KubeMacPoolDuplicateMacsFound
- KubeVirtComponentExceedsRequestedCPU
- KubeVirtComponentExceedsRequestedMemory
- KubevirtHyperconvergedClusterOperatorCRModification
- KubevirtHyperconvergedClusterOperatorInstallationNotCompletedAlert
- KubevirtHyperconvergedClusterOperatorUSModification
- KubevirtVmHighMemoryUsage
- KubeVirtVMIExcessiveMigrations
- KubeVirtVMStuckInErrorState
- KubeVirtVMStuckInMigratingState
- KubeVirtVMStuckInStartingState
- LowKVMNodesCount
- LowReadyVirtControllersCount
- LowReadyVirtOperatorsCount
- LowVirtAPICount
- LowVirtControllersCount
- LowVirtOperatorCount
- NetworkAddonsConfigNotReady
- NoLeadingVirtOperator
- NoReadyVirtController
- NoReadyVirtOperator
- OrphanedVirtualMachineInstances
- OutdatedVirtualMachineInstanceWorkloads
- SSPCommonTemplatesModificationReverted
- SSPFailingToReconcile
- SSPHighRateRejectedVms
- SSPOperatorDown
- SSPTemplateValidatorDown
- VirtAPIDown
- VirtApiRESTErrorsBurst
- VirtApiRESTErrorsHigh
- VirtControllerDown
- VirtControllerRESTErrorsBurst
- VirtControllerRESTErrorsHigh
- VirtHandlerDaemonSetRolloutFailing
- VirtHandlerRESTErrorsBurst
- VirtHandlerRESTErrorsHigh
- VirtOperatorDown
- VirtOperatorRESTErrorsBurst
- VirtOperatorRESTErrorsHigh
- VMCannotBeEvicted
OKD Virtualization runbooks
You can use the procedures in these runbooks to diagnose and resolve issues that trigger OKD Virtualization alerts.
OKD Virtualization alerts are displayed on the Virtualization > Overview page.
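If you prefer to work from the command line, you can inspect the rule that defines a given alert, including its expression and duration. This is a minimal sketch; it assumes the alert rules are deployed as PrometheusRule objects in the OKD Virtualization namespace, the CNV_NAMESPACE variable name is arbitrary, and CDIDataImportCronOutdated is only an example alert name:
$ export CNV_NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
$ oc -n $CNV_NAMESPACE get prometheusrules -o yaml | grep -A 10 "alert: CDIDataImportCronOutdated"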
CDIDataImportCronOutdated
Meaning
This alert fires when DataImportCron cannot poll or import the latest disk image versions.
DataImportCron polls disk images, checking for the latest versions, and imports the images as persistent volume claims (PVCs). This process ensures that PVCs are updated to the latest version so that they can be used as reliable clone sources or golden images for virtual machines (VMs).
For golden images, latest refers to the latest operating system of the distribution. For other disk images, latest refers to the latest hash of the image that is available.
Impact
VMs might be created from outdated disk images.
VMs might fail to start because no source PVC is available for cloning.
Diagnosis
Check the cluster for a default storage class:
$ oc get sc
The output displays the storage classes with (default) beside the name of the default storage class. You must set a default storage class, either on the cluster or in the DataImportCron specification, in order for the DataImportCron to poll and import golden images. If no storage class is defined, the DataVolume controller fails to create PVCs and the following event is displayed: DataVolume.storage spec is missing accessMode and no storageClass to choose profile.
Obtain the DataImportCron namespace and name:
$ oc get dataimportcron -A -o json | \
jq -r '.items[] | select(.status.conditions[] | select(.type == "UpToDate" and .status == "False")) | .metadata.namespace + "/" + .metadata.name'
If a default storage class is not defined on the cluster, check the DataImportCron specification for a default storage class:
$ oc get dataimportcron <dataimportcron> -o yaml | \
grep -B 5 storageClassName
Example output
url: docker://.../cdi-func-test-tinycore
storage:
  resources:
    requests:
      storage: 5Gi
  storageClassName: rook-ceph-block
Obtain the name of the DataVolume associated with the DataImportCron object:
$ oc -n <namespace> get dataimportcron <dataimportcron> -o json | \
jq .status.lastImportedPVC.name
Check the DataVolume for error messages:
$ oc -n <namespace> get dv <datavolume> -o yaml
Set the CDI_NAMESPACE environment variable:
$ export CDI_NAMESPACE="$(oc get deployment -A | \
grep cdi-operator | awk '{print $1}')"
Check the cdi-deployment log for error messages:
$ oc logs -n $CDI_NAMESPACE deployment/cdi-deployment
Mitigation
Set a default storage class, either on the cluster or in the DataImportCron specification, to poll and import golden images (an example follows this list). The updated Containerized Data Importer (CDI) will resolve the issue within a few seconds.
If the issue does not resolve itself, delete the data volumes associated with the affected DataImportCron objects. The CDI will recreate the data volumes with the default storage class.
If your cluster is installed in a restricted network environment, disable the enableCommonBootImageImport feature gate in order to opt out of automatic updates:
$ oc patch hco kubevirt-hyperconverged -n $CDI_NAMESPACE --type json \
-p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": false}]'
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
CDIDataVolumeUnusualRestartCount
Meaning
This alert fires when a DataVolume object restarts more than three times.
Impact
Data volumes are responsible for importing and creating a virtual machine disk on a persistent volume claim. If a data volume restarts more than three times, these operations are unlikely to succeed. You must diagnose and resolve the issue.
Diagnosis
Obtain the name and namespace of the data volume:
$ oc get dv -A -o json | \
jq -r '.items[] | select(.status.restartCount>3)' | jq '.metadata.name, .metadata.namespace'
Check the status of the pods associated with the data volume:
$ oc get pods -n <namespace> -o json | \
jq -r '.items[] | select(.metadata.ownerReferences[] | select(.name=="<dv_name>")).metadata.name'
Obtain the details of the pods:
$ oc -n <namespace> describe pods <pod>
Check the pod logs for error messages:
$ oc -n <namespace> logs <pod>
Mitigation
Delete the data volume, resolve the issue, and create a new data volume.
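For example, you might delete the affected data volume before recreating it; <namespace> and <datavolume> are the values obtained during the diagnosis procedure:
$ oc -n <namespace> delete dv <datavolume>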
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
CDINotReady
Meaning
This alert fires when the Containerized Data Importer (CDI) is in a degraded state:
Not progressing
Not available to use
Impact
CDI is not usable, so users cannot build virtual machine disks on persistent volume claims (PVCs) using CDI’s data volumes. CDI components are not ready and they stopped progressing towards a ready state.
Diagnosis
Set the CDI_NAMESPACE environment variable:
$ export CDI_NAMESPACE="$(oc get deployment -A | \
grep cdi-operator | awk '{print $1}')"
Check the CDI deployment for components that are not ready:
$ oc -n $CDI_NAMESPACE get deploy -l cdi.kubevirt.io
Check the details of the failing pod:
$ oc -n $CDI_NAMESPACE describe pods <pod>
Check the logs of the failing pod:
$ oc -n $CDI_NAMESPACE logs <pod>
Mitigation
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
CDIOperatorDown
Meaning
This alert fires when the Containerized Data Importer (CDI) Operator is down. The CDI Operator deploys and manages the CDI infrastructure components, such as data volume and persistent volume claim (PVC) controllers. These controllers help users build virtual machine disks on PVCs.
Impact
The CDI components might fail to deploy or to stay in a required state. The CDI installation might not function correctly.
Diagnosis
Set the CDI_NAMESPACE environment variable:
$ export CDI_NAMESPACE="$(oc get deployment -A | grep cdi-operator | \
awk '{print $1}')"
Check whether the cdi-operator pod is currently running:
$ oc -n $CDI_NAMESPACE get pods -l name=cdi-operator
Obtain the details of the cdi-operator pod:
$ oc -n $CDI_NAMESPACE describe pods -l name=cdi-operator
Check the log of the cdi-operator pod for errors:
$ oc -n $CDI_NAMESPACE logs -l name=cdi-operator
Mitigation
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
CDIStorageProfilesIncomplete
Meaning
This alert fires when a Containerized Data Importer (CDI) storage profile is incomplete.
If a storage profile is incomplete, the CDI cannot infer persistent volume claim (PVC) fields, such as volumeMode and accessModes, which are required to create a virtual machine (VM) disk.
Impact
The CDI cannot create a VM disk on the PVC.
Diagnosis
Identify the incomplete storage profile:
$ oc get storageprofile <storage_class>
Mitigation
Add the missing storage profile information as in the following example:
$ oc patch storageprofile local --type=merge \
-p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
CnaoDown
Meaning
This alert fires when the Cluster Network Addons Operator (CNAO) is down. The CNAO deploys additional networking components on top of the cluster.
Impact
If the CNAO is not running, the cluster cannot reconcile changes to virtual machine components. As a result, the changes might fail to take effect.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get deployment -A | \
grep cluster-network-addons-operator | awk '{print $1}')"
Check the status of the cluster-network-addons-operator pod:
$ oc -n $NAMESPACE get pods -l name=cluster-network-addons-operator
Check the cluster-network-addons-operator logs for error messages:
$ oc -n $NAMESPACE logs -l name=cluster-network-addons-operator
Obtain the details of the cluster-network-addons-operator pods:
$ oc -n $NAMESPACE describe pods -l name=cluster-network-addons-operator
Mitigation
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
HPPNotReady
Meaning
This alert fires when a hostpath provisioner (HPP) installation is in a degraded state.
The HPP dynamically provisions hostpath volumes to provide storage for persistent volume claims (PVCs).
Impact
HPP is not usable. Its components are not ready and they are not progressing towards a ready state.
Diagnosis
Set the HPP_NAMESPACE environment variable:
$ export HPP_NAMESPACE="$(oc get deployment -A | \
grep hostpath-provisioner-operator | awk '{print $1}')"
Check for HPP components that are currently not ready:
$ oc -n $HPP_NAMESPACE get all -l k8s-app=hostpath-provisioner
Obtain the details of the failing pod:
$ oc -n $HPP_NAMESPACE describe pods <pod>
Check the logs of the failing pod:
$ oc -n $HPP_NAMESPACE logs <pod>
Mitigation
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
HPPOperatorDown
Meaning
This alert fires when the hostpath provisioner (HPP) Operator is down.
The HPP Operator deploys and manages the HPP infrastructure components, such as the daemon set that provisions hostpath volumes.
Impact
The HPP components might fail to deploy or to remain in the required state. As a result, the HPP installation might not work correctly in the cluster.
Diagnosis
Configure the HPP_NAMESPACE environment variable:
$ export HPP_NAMESPACE="$(oc get deployment -A | grep \
hostpath-provisioner-operator | awk '{print $1}')"
Check whether the hostpath-provisioner-operator pod is currently running:
$ oc -n $HPP_NAMESPACE get pods -l name=hostpath-provisioner-operator
Obtain the details of the hostpath-provisioner-operator pod:
$ oc -n $HPP_NAMESPACE describe pods -l name=hostpath-provisioner-operator
Check the log of the hostpath-provisioner-operator pod for errors:
$ oc -n $HPP_NAMESPACE logs -l name=hostpath-provisioner-operator
Mitigation
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
HPPSharingPoolPathWithOS
Meaning
This alert fires when the hostpath provisioner (HPP) shares a file system with other critical components, such as kubelet or the operating system (OS).
HPP dynamically provisions hostpath volumes to provide storage for persistent volume claims (PVCs).
Impact
A shared hostpath pool puts pressure on the node’s disks. The node might have degraded performance and stability.
Diagnosis
Configure the HPP_NAMESPACE environment variable:
$ export HPP_NAMESPACE="$(oc get deployment -A | \
grep hostpath-provisioner-operator | awk '{print $1}')"
Obtain the status of the hostpath-provisioner-csi daemon set pods:
$ oc -n $HPP_NAMESPACE get pods | grep hostpath-provisioner-csi
Check the hostpath-provisioner-csi logs to identify the shared pool and path:
$ oc -n $HPP_NAMESPACE logs <csi_daemonset> -c hostpath-provisioner
Example output
I0208 15:21:03.769731 1 utils.go:221] pool (<legacy, csi-data-dir>/csi),
shares path with OS which can lead to node disk pressure
Mitigation
Using the data obtained in the Diagnosis section, try to prevent the pool path from being shared with the OS. The specific steps vary based on the node and other circumstances.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
KubeMacPoolDown
Meaning
KubeMacPool is down. KubeMacPool is responsible for allocating MAC addresses and preventing MAC address conflicts.
Impact
If KubeMacPool is down, VirtualMachine objects cannot be created.
Diagnosis
Set the KMP_NAMESPACE environment variable:
$ export KMP_NAMESPACE="$(oc get pod -A --no-headers -l \
control-plane=mac-controller-manager | awk '{print $1}')"
Set the KMP_NAME environment variable:
$ export KMP_NAME="$(oc get pod -A --no-headers -l \
control-plane=mac-controller-manager | awk '{print $2}')"
Obtain the KubeMacPool-manager pod details:
$ oc describe pod -n $KMP_NAMESPACE $KMP_NAME
Check the KubeMacPool-manager logs for error messages:
$ oc logs -n $KMP_NAMESPACE $KMP_NAME
Mitigation
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
KubeMacPoolDuplicateMacsFound
Meaning
This alert fires when KubeMacPool detects duplicate MAC addresses.
KubeMacPool is responsible for allocating MAC addresses and preventing MAC address conflicts. When KubeMacPool starts, it scans the cluster for the MAC addresses of virtual machines (VMs) in managed namespaces.
Impact
Duplicate MAC addresses on the same LAN might cause network issues.
Diagnosis
Obtain the namespace and the name of the kubemacpool-mac-controller pod:
$ oc get pod -A -l control-plane=mac-controller-manager --no-headers \
-o custom-columns=":metadata.namespace,:metadata.name"
Obtain the duplicate MAC addresses from the kubemacpool-mac-controller logs:
$ oc logs -n <namespace> <kubemacpool_mac_controller> | \
grep "already allocated"
Example output
mac address 02:00:ff:ff:ff:ff already allocated to
vm/kubemacpool-test/testvm, br1,
conflict with: vm/kubemacpool-test/testvm2, br1
Mitigation
Update the VMs to remove the duplicate MAC addresses.
Restart the kubemacpool-mac-controller pod:
$ oc delete pod -n <namespace> <kubemacpool_mac_controller>
KubeVirtComponentExceedsRequestedCPU
Meaning
This alert fires when a component’s CPU usage exceeds the requested limit.
Impact
Usage of CPU resources is not optimal and the node might be overloaded.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the component’s CPU request limit:
$ oc -n $NAMESPACE get deployment <component> -o yaml | grep requests: -A 2
Check the actual CPU usage by using a PromQL query:
node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate
{namespace="$NAMESPACE",container="<component>"}
See the Prometheus documentation for more information.
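If you want to run the PromQL query from the command line instead of the console, the following is a minimal sketch. It assumes a default OKD monitoring stack with a thanos-querier route in the openshift-monitoring namespace and a logged-in user whose token is authorized to query metrics:
$ THANOS_URL="https://$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')"
$ TOKEN="$(oc whoami -t)"
$ curl -sk -H "Authorization: Bearer $TOKEN" "$THANOS_URL/api/v1/query" \
--data-urlencode "query=node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate{namespace=\"$NAMESPACE\",container=\"<component>\"}"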
Mitigation
Update the CPU request in the HCO custom resource.
KubeVirtComponentExceedsRequestedMemory
Meaning
This alert fires when a component’s memory usage exceeds the requested limit.
Impact
Usage of memory resources is not optimal and the node might be overloaded.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the component’s memory request limit:
$ oc -n $NAMESPACE get deployment <component> -o yaml | \
grep requests: -A 2
Check the actual memory usage by using a PromQL query:
container_memory_usage_bytes{namespace="$NAMESPACE",container="<component>"}
See the Prometheus documentation for more information.
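For a quick point-in-time view of per-container memory usage without writing a PromQL query, you can also use the metrics API, assuming cluster metrics are available on your cluster:
$ oc adm top pod -n $NAMESPACE --containers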
Mitigation
Update the memory request in the HCO custom resource.
KubevirtHyperconvergedClusterOperatorCRModification
Meaning
This alert fires when an operand of the HyperConverged Cluster Operator (HCO) is changed by someone or something other than HCO.
HCO configures OKD Virtualization and its supporting operators in an opinionated way and overwrites its operands when there is an unexpected change to them. Users must not modify the operands directly. The HyperConverged custom resource is the source of truth for the configuration.
Impact
Changing the operands manually causes the cluster configuration to fluctuate and might lead to instability.
Diagnosis
Check the component_name value in the alert details to determine the operand kind (kubevirt) and the operand name (kubevirt-kubevirt-hyperconverged) that are being changed:
Labels
alertname=KubevirtHyperconvergedClusterOperatorCRModification
component_name=kubevirt/kubevirt-kubevirt-hyperconverged
severity=warning
Mitigation
Do not change the HCO operands directly. Use HyperConverged objects to configure the cluster.
The alert resolves itself after 10 minutes if the operands are not changed manually.
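For example, instead of editing an operand such as the KubeVirt custom resource directly, open the HyperConverged custom resource and make the change there so that HCO can propagate it. This sketch reuses the hco short name and resource name that appear elsewhere in these runbooks; <namespace> is the namespace in which OKD Virtualization is installed:
$ oc edit hco kubevirt-hyperconverged -n <namespace>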
KubevirtHyperconvergedClusterOperatorInstallationNotCompletedAlert
Meaning
This alert fires when the HyperConverged Cluster Operator (HCO) runs for more than an hour without a HyperConverged custom resource (CR).
This alert has the following causes:
During the installation process, you installed the HCO but you did not create the HyperConverged CR.
During the uninstall process, you removed the HyperConverged CR before uninstalling the HCO and the HCO is still running.
Mitigation
The mitigation depends on whether you are installing or uninstalling the HCO:
Complete the installation by creating a HyperConverged CR with its default values:
$ cat <<EOF | oc apply -f -
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec: {}
EOF
Uninstall the HCO. If the uninstall process continues to run, you must resolve that issue in order to cancel the alert.
KubevirtHyperconvergedClusterOperatorUSModification
Meaning
This alert fires when a JSON Patch annotation is used to change an operand of the HyperConverged Cluster Operator (HCO).
HCO configures OKD Virtualization and its supporting operators in an opinionated way and overwrites its operands when there is an unexpected change to them. Users must not modify the operands directly.
However, if a change is required and it is not supported by the HCO API, you can force HCO to set a change in an operator by using JSON Patch annotations. These changes are not reverted by HCO during its reconciliation process.
Impact
Incorrect use of JSON Patch annotations might lead to unexpected results or an unstable environment.
Upgrading a system with JSON Patch annotations is dangerous because the structure of the component custom resources might change.
Diagnosis
Check the annotation_name in the alert details to identify the JSON Patch annotation:
Labels
alertname=KubevirtHyperconvergedClusterOperatorUSModification
annotation_name=kubevirt.kubevirt.io/jsonpatch
severity=info
Mitigation
It is best to use the HCO API to change an operand. However, if the change can only be done with a JSON Patch annotation, proceed with caution.
Remove JSON Patch annotations before upgrade to avoid potential issues.
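For example, to remove a JSON Patch annotation from the HyperConverged resource before an upgrade, you can use the trailing-hyphen form of oc annotate. This is a sketch; the annotation key is the one reported in the alert labels above, and <namespace> is the OKD Virtualization namespace:
$ oc annotate hco kubevirt-hyperconverged -n <namespace> kubevirt.kubevirt.io/jsonpatch-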
KubevirtVmHighMemoryUsage
Meaning
This alert fires when a container hosting a virtual machine (VM) has less than 20 MB free memory.
Impact
The virtual machine running inside the container is terminated by the runtime if the container’s memory limit is exceeded.
Diagnosis
Obtain the virt-launcher pod details:
$ oc get pod <virt-launcher> -o yaml
Identify compute container processes with high memory usage in the virt-launcher pod:
$ oc exec -it <virt-launcher> -c compute -- top
Mitigation
Increase the memory limit in the VirtualMachine specification as in the following example:
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-name
    spec:
      domain:
        resources:
          limits:
            memory: 200Mi
          requests:
            memory: 128Mi
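One way to apply such a change is to edit the VirtualMachine object directly; <vm_name> and <namespace> are placeholders. Depending on the run strategy, the VM might need to be restarted for the new limit to take effect:
$ oc edit vm <vm_name> -n <namespace>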
KubeVirtVMIExcessiveMigrations
Meaning
This alert fires when a virtual machine instance (VMI) live migrates more than 12 times over a period of 24 hours.
This migration rate is abnormally high, even during an upgrade. This alert might indicate a problem in the cluster infrastructure, such as network disruptions or insufficient resources.
Impact
A virtual machine (VM) that migrates too frequently might experience degraded performance because memory page faults occur during the transition.
Diagnosis
Verify that the worker node has sufficient resources:
$ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
jq .items[].status.allocatable
Example output
{
"cpu": "3500m",
"devices.kubevirt.io/kvm": "1k",
"devices.kubevirt.io/sev": "0",
"devices.kubevirt.io/tun": "1k",
"devices.kubevirt.io/vhost-net": "1k",
"ephemeral-storage": "38161122446",
"hugepages-1Gi": "0",
"hugepages-2Mi": "0",
"memory": "7000128Ki",
"pods": "250"
}
Check the status of the worker node:
$ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
jq .items[].status.conditions
Example output
{
"lastHeartbeatTime": "2022-05-26T07:36:01Z",
"lastTransitionTime": "2022-05-23T08:12:02Z",
"message": "kubelet has sufficient memory available",
"reason": "KubeletHasSufficientMemory",
"status": "False",
"type": "MemoryPressure"
},
{
"lastHeartbeatTime": "2022-05-26T07:36:01Z",
"lastTransitionTime": "2022-05-23T08:12:02Z",
"message": "kubelet has no disk pressure",
"reason": "KubeletHasNoDiskPressure",
"status": "False",
"type": "DiskPressure"
},
{
"lastHeartbeatTime": "2022-05-26T07:36:01Z",
"lastTransitionTime": "2022-05-23T08:12:02Z",
"message": "kubelet has sufficient PID available",
"reason": "KubeletHasSufficientPID",
"status": "False",
"type": "PIDPressure"
},
{
"lastHeartbeatTime": "2022-05-26T07:36:01Z",
"lastTransitionTime": "2022-05-23T08:24:15Z",
"message": "kubelet is posting ready status",
"reason": "KubeletReady",
"status": "True",
"type": "Ready"
}
Log in to the worker node and verify that the kubelet service is running:
$ systemctl status kubelet
Check the kubelet journal log for error messages:
$ journalctl -r -u kubelet
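To confirm how often and between which nodes the VMI has been migrating, you can also list the migration objects in the cluster. This is a sketch; it assumes the VirtualMachineInstanceMigration objects created for past migrations have not been cleaned up:
$ oc get virtualmachineinstancemigrations -A --sort-by=.metadata.creationTimestamp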
Mitigation
Ensure that the worker nodes have sufficient resources (CPU, memory, disk) to run VM workloads without interruption.
If the problem persists, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
KubeVirtVMStuckInErrorState
Meaning
This alert fires when a virtual machine (VM) is in an error state for more than 5 minutes.
Error states:
CrashLoopBackOff
Unknown
Unschedulable
ErrImagePull
ImagePullBackOff
PvcNotFound
DataVolumeError
This alert might indicate an issue with the VM configuration, such as a missing persistent volume claim, or a problem in the cluster infrastructure, such as network disruptions or insufficient node resources.
Impact
There is no immediate impact. However, if this alert persists, you must investigate the root cause and resolve the issue.
Diagnosis
Check the virtual machine instance (VMI) details:
$ oc describe vmi <vmi> -n <namespace>
Example output
Name: testvmi-hxghp
Namespace: kubevirt-test-default1
Labels: name=testvmi-hxghp
Annotations: kubevirt.io/latest-observed-api-version: v1
kubevirt.io/storage-observed-api-version: v1alpha3
API Version: kubevirt.io/v1
Kind: VirtualMachineInstance
...
Spec:
Domain:
...
Resources:
Requests:
Cpu: 5000000Gi
Memory: 5130000240Mi
...
Status:
...
Conditions:
Last Probe Time: 2022-10-03T11:11:07Z
Last Transition Time: 2022-10-03T11:11:07Z
Message: Guest VM is not reported as running
Reason: GuestNotRunning
Status: False
Type: Ready
Last Probe Time: <nil>
Last Transition Time: 2022-10-03T11:11:07Z
Message: 0/2 nodes are available: 2 Insufficient cpu, 2
Insufficient memory.
Reason: Unschedulable
Status: False
Type: PodScheduled
Guest OS Info:
Phase: Scheduling
Phase Transition Timestamps:
Phase: Pending
Phase Transition Timestamp: 2022-10-03T11:11:07Z
Phase: Scheduling
Phase Transition Timestamp: 2022-10-03T11:11:07Z
Qos Class: Burstable
Runtime User: 0
Virtual Machine Revision Name: revision-start-vm-3503e2dc-27c0-46ef-9167-7ae2e7d93e6e-1
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 27s virtualmachine-controller Created virtual
machine pod virt-launcher-testvmi-hxghp-xh9qn
Check the node resources:
$ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
jq '.items | .[].status.allocatable'
Example output
{
"cpu": "5",
"devices.kubevirt.io/kvm": "1k",
"devices.kubevirt.io/sev": "0",
"devices.kubevirt.io/tun": "1k",
"devices.kubevirt.io/vhost-net": "1k",
"ephemeral-storage": "33812468066",
"hugepages-1Gi": "0",
"hugepages-2Mi": "128Mi",
"memory": "3783496Ki",
"pods": "110"
}
Check the node for error conditions:
$ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
jq '.items | .[].status.conditions'
Example output
[
{
"lastHeartbeatTime": "2022-10-03T11:13:34Z",
"lastTransitionTime": "2022-10-03T10:14:20Z",
"message": "kubelet has sufficient memory available",
"reason": "KubeletHasSufficientMemory",
"status": "False",
"type": "MemoryPressure"
},
{
"lastHeartbeatTime": "2022-10-03T11:13:34Z",
"lastTransitionTime": "2022-10-03T10:14:20Z",
"message": "kubelet has no disk pressure",
"reason": "KubeletHasNoDiskPressure",
"status": "False",
"type": "DiskPressure"
},
{
"lastHeartbeatTime": "2022-10-03T11:13:34Z",
"lastTransitionTime": "2022-10-03T10:14:20Z",
"message": "kubelet has sufficient PID available",
"reason": "KubeletHasSufficientPID",
"status": "False",
"type": "PIDPressure"
},
{
"lastHeartbeatTime": "2022-10-03T11:13:34Z",
"lastTransitionTime": "2022-10-03T10:14:30Z",
"message": "kubelet is posting ready status",
"reason": "KubeletReady",
"status": "True",
"type": "Ready"
}
]
Mitigation
Try to identify and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
KubeVirtVMStuckInMigratingState
Meaning
This alert fires when a virtual machine (VM) is in a migrating state for more than 5 minutes.
This alert might indicate a problem in the cluster infrastructure, such as network disruptions or insufficient node resources.
Impact
There is no immediate impact. However, if this alert persists, you must investigate the root cause and resolve the issue.
Diagnosis
Check the node resources:
$ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
jq '.items | .[].status.allocatable'
Example output
{
"cpu": "5",
"devices.kubevirt.io/kvm": "1k",
"devices.kubevirt.io/sev": "0",
"devices.kubevirt.io/tun": "1k",
"devices.kubevirt.io/vhost-net": "1k",
"ephemeral-storage": "33812468066",
"hugepages-1Gi": "0",
"hugepages-2Mi": "128Mi",
"memory": "3783496Ki",
"pods": "110"
}
Check the node status conditions:
$ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
jq '.items | .[].status.conditions'
Example output
[
{
"lastHeartbeatTime": "2022-10-03T11:13:34Z",
"lastTransitionTime": "2022-10-03T10:14:20Z",
"message": "kubelet has sufficient memory available",
"reason": "KubeletHasSufficientMemory",
"status": "False",
"type": "MemoryPressure"
},
{
"lastHeartbeatTime": "2022-10-03T11:13:34Z",
"lastTransitionTime": "2022-10-03T10:14:20Z",
"message": "kubelet has no disk pressure",
"reason": "KubeletHasNoDiskPressure",
"status": "False",
"type": "DiskPressure"
},
{
"lastHeartbeatTime": "2022-10-03T11:13:34Z",
"lastTransitionTime": "2022-10-03T10:14:20Z",
"message": "kubelet has sufficient PID available",
"reason": "KubeletHasSufficientPID",
"status": "False",
"type": "PIDPressure"
},
{
"lastHeartbeatTime": "2022-10-03T11:13:34Z",
"lastTransitionTime": "2022-10-03T10:14:30Z",
"message": "kubelet is posting ready status",
"reason": "KubeletReady",
"status": "True",
"type": "Ready"
}
]
Mitigation
Check the migration configuration of the virtual machine to ensure that it is appropriate for the workload.
You set a cluster-wide migration configuration by editing the MigrationConfiguration stanza of the KubeVirt custom resource.
You set a migration configuration for a specific scope by creating a migration policy.
You can determine whether a VM is bound to a migration policy by viewing its vm.Status.MigrationState.MigrationPolicyName parameter.
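As a sketch, you can display the cluster-wide migration settings, which live under spec.configuration.migrations in the KubeVirt custom resource managed by HCO. The resource name below is the one reported in the alert labels earlier in this document, and the exact fields available depend on your KubeVirt version:
$ oc get kubevirt kubevirt-kubevirt-hyperconverged -n <namespace> \
-o jsonpath='{.spec.configuration.migrations}'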
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
KubeVirtVMStuckInStartingState
Meaning
This alert fires when a virtual machine (VM) is in a starting state for more than 5 minutes.
This alert might indicate an issue in the VM configuration, such as a misconfigured priority class or a missing network device.
Impact
There is no immediate impact. However, if this alert persists, you must investigate the root cause and resolve the issue.
Diagnosis
Check the virtual machine instance (VMI) details for error conditions:
$ oc describe vmi <vmi> -n <namespace>
Example output
Name: testvmi-ldgrw
Namespace: kubevirt-test-default1
Labels: name=testvmi-ldgrw
Annotations: kubevirt.io/latest-observed-api-version: v1
kubevirt.io/storage-observed-api-version: v1alpha3
API Version: kubevirt.io/v1
Kind: VirtualMachineInstance
...
Spec:
...
Networks:
Name: default
Pod:
Priority Class Name: non-preemtible
Termination Grace Period Seconds: 0
Status:
Conditions:
Last Probe Time: 2022-10-03T11:08:30Z
Last Transition Time: 2022-10-03T11:08:30Z
Message: virt-launcher pod has not yet been scheduled
Reason: PodNotExists
Status: False
Type: Ready
Last Probe Time: <nil>
Last Transition Time: 2022-10-03T11:08:30Z
Message: failed to create virtual machine pod: pods
"virt-launcher-testvmi-ldgrw-" is forbidden: no PriorityClass with name
non-preemtible was found
Reason: FailedCreate
Status: False
Type: Synchronized
Guest OS Info:
Phase: Pending
Phase Transition Timestamps:
Phase: Pending
Phase Transition Timestamp: 2022-10-03T11:08:30Z
Runtime User: 0
Virtual Machine Revision Name:
revision-start-vm-6f01a94b-3260-4c5a-bbe5-dc98d13e6bea-1
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 8s (x13 over 28s) virtualmachine-controller Error
creating pod: pods "virt-launcher-testvmi-ldgrw-" is forbidden: no
PriorityClass with name non-preemtible was found
Mitigation
Ensure that the VM is configured correctly and has the required resources.
A Pending state indicates that the VM has not yet been scheduled. Check the following possible causes:
The virt-launcher pod is not scheduled.
Topology hints for the VMI are not up to date.
Data volume is not provisioned or ready.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
LowKVMNodesCount
Meaning
This alert fires when fewer than two nodes in the cluster have KVM resources.
Impact
The cluster must have at least two nodes with KVM resources for live migration.
Virtual machines cannot be scheduled or run if no nodes have KVM resources.
Diagnosis
Identify the nodes with KVM resources:
$ oc get nodes -o jsonpath='{.items[*].status.allocatable}' | \
grep devices.kubevirt.io/kvm
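To see the KVM device capacity per node rather than a single concatenated list, you can use jq, which is already used elsewhere in these runbooks:
$ oc get nodes -o json | jq '.items[] | {name: .metadata.name, kvm: .status.allocatable["devices.kubevirt.io/kvm"]}'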
Mitigation
Install KVM on the nodes without KVM resources.
LowReadyVirtControllersCount
Meaning
This alert fires when one or more virt-controller pods are running, but none of these pods has been in the Ready state for the past 5 minutes.
A virt-controller device monitors the custom resource definitions (CRDs) of a virtual machine instance (VMI) and manages the associated pods. The device creates pods for VMIs and manages their lifecycle. The device is critical for cluster-wide virtualization functionality.
Impact
This alert indicates that a cluster-level failure might occur. Actions related to VM lifecycle management, such as launching a new VMI or shutting down an existing VMI, will fail.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Verify a virt-controller device is available:
$ oc get deployment -n $NAMESPACE virt-controller \
-o jsonpath='{.status.readyReplicas}'
Check the status of the virt-controller deployment:
$ oc -n $NAMESPACE get deploy virt-controller -o yaml
Obtain the details of the virt-controller deployment to check for status conditions, such as crashing pods or failures to pull images:
$ oc -n $NAMESPACE describe deploy virt-controller
Check if any problems occurred with the nodes. For example, they might be in a NotReady state:
$ oc get nodes
Mitigation
This alert can have multiple causes, including the following:
The cluster has insufficient memory.
The nodes are down.
The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.
There are network issues.
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
LowReadyVirtOperatorsCount
Meaning
This alert fires when one or more virt-operator pods are running, but none of these pods has been in a Ready state for the last 10 minutes.
The virt-operator is the first Operator to start in a cluster. The virt-operator deployment has a default replica of two virt-operator pods.
Its primary responsibilities include the following:
Installing, live-updating, and live-upgrading a cluster
Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, and virt-launcher, and managing their reconciliation
Certain cluster-wide tasks, such as certificate rotation and infrastructure management
Impact
A cluster-level failure might occur. Critical cluster-wide management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might become unavailable. Such a state also triggers the NoReadyVirtOperator alert.
The virt-operator is not directly responsible for virtual machines (VMs) in the cluster. Therefore, its temporary unavailability does not significantly affect VM workloads.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Obtain the name of the virt-operator deployment:
$ oc -n $NAMESPACE get deploy virt-operator -o yaml
Obtain the details of the virt-operator deployment:
$ oc -n $NAMESPACE describe deploy virt-operator
Check for node issues, such as a NotReady state:
$ oc get nodes
Mitigation
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
LowVirtAPICount
Meaning
This alert fires when only one available virt-api pod is detected during a 60-minute period, although at least two nodes are available for scheduling.
Impact
An API call outage might occur during node eviction because the virt-api pod becomes a single point of failure.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the number of available virt-api pods:
$ oc get deployment -n $NAMESPACE virt-api \
-o jsonpath='{.status.readyReplicas}'
Check the status of the virt-api deployment for error conditions:
$ oc -n $NAMESPACE get deploy virt-api -o yaml
Check the nodes for issues such as nodes in a NotReady state:
$ oc get nodes
Mitigation
Try to identify the root cause and to resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
LowVirtControllersCount
Meaning
This alert fires when a low number of virt-controller pods is detected. At least one virt-controller pod must be available in order to ensure high availability. The default number of replicas is 2.
A virt-controller device monitors the custom resource definitions (CRDs) of a virtual machine instance (VMI) and manages the associated pods. The device creates pods for VMIs and manages the lifecycle of the pods. The device is critical for cluster-wide virtualization functionality.
Impact
The responsiveness of OKD Virtualization might become negatively affected. For example, certain requests might be missed.
In addition, if another virt-launcher instance terminates unexpectedly, OKD Virtualization might become completely unresponsive.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Verify that running virt-controller pods are available:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-controller
Check the virt-launcher logs for error messages:
$ oc -n $NAMESPACE logs <virt-launcher>
Obtain the details of the virt-launcher pod to check for status conditions such as unexpected termination or a NotReady state:
$ oc -n $NAMESPACE describe pod/<virt-launcher>
Mitigation
This alert can have a variety of causes, including:
Not enough memory on the cluster
Nodes are down
The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.
Networking issues
Identify the root cause and fix it, if possible.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
LowVirtOperatorCount
Meaning
This alert fires when only one virt-operator pod in a Ready state has been running for the last 60 minutes.
The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:
Installing, live-updating, and live-upgrading a cluster
Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, and virt-launcher, and managing their reconciliation
Certain cluster-wide tasks, such as certificate rotation and infrastructure management
Impact
The virt-operator cannot provide high availability (HA) for the deployment. HA requires two or more virt-operator pods in a Ready state. The default deployment is two pods.
The virt-operator is not directly responsible for virtual machines (VMs) in the cluster. Therefore, its decreased availability does not significantly affect VM workloads.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the states of the virt-operator pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
Review the logs of the affected virt-operator pods:
$ oc -n $NAMESPACE logs <virt-operator>
Obtain the details of the affected virt-operator pods:
$ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
NetworkAddonsConfigNotReady
Meaning
This alert fires when the NetworkAddonsConfig custom resource (CR) of the Cluster Network Addons Operator (CNAO) is not ready.
CNAO deploys additional networking components on the cluster. This alert indicates that one of the deployed components is not ready.
Impact
Network functionality is affected.
Diagnosis
Check the status conditions of the NetworkAddonsConfig CR to identify the deployment or daemon set that is not ready:
$ oc get networkaddonsconfig \
-o custom-columns="":.status.conditions[*].message
Example output
DaemonSet "cluster-network-addons/macvtap-cni" update is being processed...
Check the component’s daemon set for errors:
$ oc -n cluster-network-addons get daemonset <daemonset> -o yaml
Check the component’s logs:
$ oc -n cluster-network-addons logs <pod>
Check the component’s details for error conditions:
$ oc -n cluster-network-addons describe pod <pod>
Mitigation
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
NoLeadingVirtOperator
Meaning
This alert fires when no virt-operator pod with a leader lease has been detected for 10 minutes, although the virt-operator pods are in a Ready state. The alert indicates that no leader pod is available.
The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:
Installing, live updating, and live upgrading a cluster
Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, and virt-launcher, and managing their reconciliation
Certain cluster-wide tasks, such as certificate rotation and infrastructure management
The virt-operator deployment has a default replica of 2 pods, with one pod holding a leader lease.
Impact
This alert indicates a failure at the level of the cluster. As a result, critical cluster-wide management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might not be available.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A -o \
custom-columns="":.metadata.namespace)"
Obtain the status of the virt-operator pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
Check the virt-operator pod logs to determine the leader status:
$ oc -n $NAMESPACE logs <virt-operator> | grep lead
Leader pod example:
{"component":"virt-operator","level":"info","msg":"Attempting to acquire
leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:18.635387Z"}
I1130 12:15:18.635452 1 leaderelection.go:243] attempting to acquire
leader lease <namespace>/virt-operator...
I1130 12:15:19.216582 1 leaderelection.go:253] successfully acquired
lease <namespace>/virt-operator
{"component":"virt-operator","level":"info","msg":"Started leading",
"pos":"application.go:385","timestamp":"2021-11-30T12:15:19.216836Z"}
Non-leader pod example:
{"component":"virt-operator","level":"info","msg":"Attempting to acquire
leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:20.533696Z"}
I1130 12:15:20.533792 1 leaderelection.go:243] attempting to acquire
leader lease <namespace>/virt-operator...
Obtain the details of the affected virt-operator pods:
$ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation
Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
NoReadyVirtController
Meaning
This alert fires when no available virt-controller devices have been detected for 5 minutes.
The virt-controller devices monitor the custom resource definitions of virtual machine instances (VMIs) and manage the associated pods. The devices create pods for VMIs and manage the lifecycle of the pods.
Therefore, virt-controller devices are critical for all cluster-wide virtualization functionality.
Impact
Any actions related to VM lifecycle management fail. This notably includes launching a new VMI or shutting down an existing VMI.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Verify the number of virt-controller devices:
$ oc get deployment -n $NAMESPACE virt-controller \
-o jsonpath='{.status.readyReplicas}'
Check the status of the virt-controller deployment:
$ oc -n $NAMESPACE get deploy virt-controller -o yaml
Obtain the details of the virt-controller deployment to check for status conditions such as crashing pods or failure to pull images:
$ oc -n $NAMESPACE describe deploy virt-controller
Obtain the details of the virt-controller pods:
$ oc get pods -n $NAMESPACE | grep virt-controller
Check the logs of the virt-controller pods for error messages:
$ oc logs -n $NAMESPACE <virt-controller>
Check the nodes for problems, such as a NotReady state:
$ oc get nodes
Mitigation
Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
NoReadyVirtOperator
Meaning
This alert fires when no virt-operator pod in a Ready state has been detected for 10 minutes.
The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:
Installing, live-updating, and live-upgrading a cluster
Monitoring the life cycle of top-level controllers, such as virt-controller, virt-handler, and virt-launcher, and managing their reconciliation
Certain cluster-wide tasks, such as certificate rotation and infrastructure management
The default deployment is two virt-operator pods.
Impact
This alert indicates a cluster-level failure. Critical cluster management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might not be available.
The virt-operator is not directly responsible for virtual machines in the cluster. Therefore, its temporary unavailability does not significantly affect workloads.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Obtain the name of the virt-operator deployment:
$ oc -n $NAMESPACE get deploy virt-operator -o yaml
Generate the description of the virt-operator deployment:
$ oc -n $NAMESPACE describe deploy virt-operator
Check for node issues, such as a NotReady state:
$ oc get nodes
Mitigation
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
OrphanedVirtualMachineInstances
Meaning
This alert fires when a virtual machine instance (VMI), or virt-launcher pod, runs on a node that does not have a running virt-handler pod. Such a VMI is called orphaned.
Impact
Orphaned VMIs cannot be managed.
Diagnosis
Check the status of the virt-handler pods to view the nodes on which they are running:
$ oc get pods --all-namespaces -o wide -l kubevirt.io=virt-handler
Check the status of the VMIs to identify VMIs running on nodes that do not have a running virt-handler pod:
$ oc get vmis --all-namespaces
Check the status of the virt-handler daemon:
$ oc get daemonset virt-handler --all-namespaces
Example output
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE ...
virt-handler 2 2 2 2 2 ...
The daemon set is considered healthy if the Desired, Ready, and Available columns contain the same value.
If the virt-handler daemon set is not healthy, check the virt-handler daemon set for pod deployment issues:
$ oc get daemonset virt-handler --all-namespaces -o yaml | jq .status
Check the nodes for issues such as a NotReady status:
$ oc get nodes
Check the spec.workloads stanza of the KubeVirt custom resource (CR) for a workloads placement policy:
$ oc get kubevirt --all-namespaces -o yaml
Mitigation
If a workloads placement policy is configured, add the node with the VMI to the policy.
Possible causes for the removal of a virt-handler pod from a node include changes to the node’s taints and tolerations or to a pod’s scheduling rules.
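For example, you can list a node’s taints and compare them with the tolerations of the virt-handler daemon set; <node> is a placeholder for the node that runs the orphaned VMI:
$ oc get node <node> -o jsonpath='{.spec.taints}'
$ oc get daemonset virt-handler --all-namespaces -o yaml | jq '.items[].spec.template.spec.tolerations'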
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
OutdatedVirtualMachineInstanceWorkloads
Meaning
This alert fires when running virtual machine instances (VMIs) in outdated virt-launcher pods are detected 24 hours after the OKD Virtualization control plane has been updated.
Impact
Outdated VMIs might not have access to new OKD Virtualization features.
Outdated VMIs will not receive the security fixes associated with the virt-launcher pod update.
Diagnosis
Identify the outdated VMIs:
$ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces
Check the KubeVirt custom resource (CR) to determine whether workloadUpdateMethods is configured in the workloadUpdateStrategy stanza:
$ oc get kubevirt --all-namespaces -o yaml
Check each outdated VMI to determine whether it is live-migratable:
$ oc get vmi <vmi> -o yaml
Example output
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
...
status:
conditions:
- lastProbeTime: null
lastTransitionTime: null
message: cannot migrate VMI which does not use masquerade
to connect to the pod network
reason: InterfaceNotLiveMigratable
status: "False"
type: LiveMigratable
Mitigation
Configuring automated workload updates
Update the HyperConverged CR to enable automatic workload updates.
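The following is a minimal sketch of such an update. It assumes the HyperConverged resource name and namespace used elsewhere in these runbooks and an OKD Virtualization version that supports the workloadUpdateStrategy API with the LiveMigrate and Evict methods:
$ oc patch hco kubevirt-hyperconverged -n kubevirt-hyperconverged --type merge \
-p '{"spec": {"workloadUpdateStrategy": {"workloadUpdateMethods": ["LiveMigrate", "Evict"]}}}'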
Stopping a VM associated with a non-live-migratable VMI
If a VMI is not live-migratable and if runStrategy: Always is set in the corresponding VirtualMachine object, you can update the VMI by manually stopping the virtual machine (VM):
$ virtctl stop --namespace <namespace> <vm>
A new VMI spins up immediately in an updated virt-launcher pod to replace the stopped VMI. This is the equivalent of a restart action.
Manually stopping a live-migratable VM is destructive and not recommended because it interrupts the workload.
Migrating a live-migratable VMI
If a VMI is live-migratable, you can update it by creating a VirtualMachineInstanceMigration object that targets a specific running VMI. The VMI is migrated into an updated virt-launcher pod.
Create a VirtualMachineInstanceMigration manifest and save it as migration.yaml:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: <migration_name>
  namespace: <namespace>
spec:
  vmiName: <vmi_name>
Create a VirtualMachineInstanceMigration object to trigger the migration:
$ oc create -f migration.yaml
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
SSPCommonTemplatesModificationReverted
Meaning
This alert fires when the Scheduling, Scale, and Performance (SSP) Operator reverts changes to common templates as part of its reconciliation procedure.
The SSP Operator deploys and reconciles the common templates and the Template Validator. If a user or script changes a common template, the changes are reverted by the SSP Operator.
Impact
Changes to common templates are overwritten.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
awk '{print $1}')"
Check the ssp-operator logs for templates with reverted changes:
$ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator | \
grep 'common template' -C 3
Mitigation
Try to identify and resolve the cause of the changes.
Ensure that changes are made only to copies of templates, and not to the templates themselves.
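For example, rather than editing a common template in place, you can export it and create your own copy to modify. This sketch assumes the common templates are in the openshift namespace, which is their usual location; <template_name> is a placeholder:
$ oc get template <template_name> -n openshift -o yaml > my-template.yaml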
SSPFailingToReconcile
Meaning
This alert fires when the reconcile cycle of the Scheduling, Scale and Performance (SSP) Operator fails repeatedly, although the SSP Operator is running.
The SSP Operator is responsible for deploying and reconciling the common templates and the Template Validator.
Impact
Dependent components might not be deployed. Changes in the components might not be reconciled. As a result, the common templates or the Template Validator might not be updated or reset if they fail.
Diagnosis
Export the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
awk '{print $1}')"
Obtain the details of the ssp-operator pods:
$ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
Check the ssp-operator logs for errors:
$ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator
Obtain the status of the virt-template-validator pods:
$ oc -n $NAMESPACE get pods -l name=virt-template-validator
Obtain the details of the virt-template-validator pods:
$ oc -n $NAMESPACE describe pods -l name=virt-template-validator
Check the virt-template-validator logs for errors:
$ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator
Mitigation
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
SSPHighRateRejectedVms
Meaning
This alert fires when a user or script attempts to create or modify a large number of virtual machines (VMs), using an invalid configuration.
Impact
The VMs are not created or modified. As a result, the environment might not behave as expected.
Diagnosis
Export the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
awk '{print $1}')"
Check the virt-template-validator logs for errors that might indicate the cause:
$ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator
Example output
{"component":"kubevirt-template-validator","level":"info","msg":"evalution
summary for ubuntu-3166wmdbbfkroku0:\nminimal-required-memory applied: FAIL,
value 1073741824 is lower than minimum [2147483648]\n\nsucceeded=false",
"pos":"admission.go:25","timestamp":"2021-09-28T17:59:10.934470Z"}
Mitigation
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
SSPOperatorDown
Meaning
This alert fires when all the Scheduling, Scale and Performance (SSP) Operator pods are down.
The SSP Operator is responsible for deploying and reconciling the common templates and the Template Validator.
Impact
Dependent components might not be deployed. Changes in the components might not be reconciled. As a result, the common templates and/or the Template Validator might not be updated or reset if they fail.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
awk '{print $1}')"
Check the status of the ssp-operator pods:
$ oc -n $NAMESPACE get pods -l control-plane=ssp-operator
Obtain the details of the ssp-operator pods:
$ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
Check the ssp-operator logs for error messages:
$ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator
Mitigation
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
SSPTemplateValidatorDown
Meaning
This alert fires when all the Template Validator pods are down.
The Template Validator checks virtual machines (VMs) to ensure that they do not violate their templates.
Impact
VMs are not validated against their templates. As a result, VMs might be created with specifications that do not match their respective workloads.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
awk '{print $1}')"
Obtain the status of the virt-template-validator pods:
$ oc -n $NAMESPACE get pods -l name=virt-template-validator
Obtain the details of the virt-template-validator pods:
$ oc -n $NAMESPACE describe pods -l name=virt-template-validator
Check the virt-template-validator logs for error messages:
$ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator
Mitigation
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
VirtAPIDown
Meaning
This alert fires when all the API Server pods are down.
Impact
API calls that involve OKD Virtualization objects fail.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-api pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
Check the status of the virt-api deployment:
$ oc -n $NAMESPACE get deploy virt-api -o yaml
Check the virt-api deployment details for issues such as crashing pods or image pull failures:
$ oc -n $NAMESPACE describe deploy virt-api
Check for issues such as nodes in a NotReady state:
$ oc get nodes
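Because virt-api also serves an aggregated API, checking whether the corresponding APIService objects report as available can confirm the outage. A sketch; the APIService names vary by version:
$ oc get apiservices | grep kubevirt.io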
Mitigation
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
VirtApiRESTErrorsBurst
Meaning
More than 80% of REST calls have failed in the virt-api pods in the last 5 minutes.
Impact
A very high rate of failed REST calls to virt-api might lead to slow response and execution of API calls, and potentially to API calls being completely dismissed.
However, currently running virtual machine workloads are not likely to be affected.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Obtain the list of virt-api pods on your deployment:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
Check the virt-api logs for error messages:
$ oc logs -n $NAMESPACE <virt-api>
Obtain the details of the virt-api pods:
$ oc describe pod -n $NAMESPACE <virt-api>
Check if any problems occurred with the nodes. For example, they might be in a NotReady state:
$ oc get nodes
Check the status of the virt-api deployment:
$ oc -n $NAMESPACE get deploy virt-api -o yaml
Obtain the details of the virt-api deployment:
$ oc -n $NAMESPACE describe deploy virt-api
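When scanning the virt-api logs, filtering for error-level entries narrows the output quickly. A sketch, assuming the structured JSON log format shown elsewhere in these runbooks:
$ oc logs -n $NAMESPACE -l kubevirt.io=virt-api --tail=500 | grep '"level":"error"'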
Mitigation
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
VirtApiRESTErrorsHigh
Meaning
More than 5% of REST calls have failed in the virt-api pods in the last 60 minutes.
Impact
A high rate of failed REST calls to virt-api might lead to slow response and execution of API calls.
However, currently running virtual machine workloads are not likely to be affected.
Diagnosis
Set the NAMESPACE environment variable as follows:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-api pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
Check the virt-api logs:
$ oc logs -n $NAMESPACE <virt-api>
Obtain the details of the virt-api pods:
$ oc describe pod -n $NAMESPACE <virt-api>
Check if any problems occurred with the nodes. For example, they might be in a NotReady state:
$ oc get nodes
Check the status of the virt-api deployment:
$ oc -n $NAMESPACE get deploy virt-api -o yaml
Obtain the details of the virt-api deployment:
$ oc -n $NAMESPACE describe deploy virt-api
Mitigation
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
VirtControllerDown
Meaning
No running virt-controller pod has been detected for 5 minutes.
Impact
Any actions related to virtual machine (VM) lifecycle management fail. This notably includes launching a new virtual machine instance (VMI) or shutting down an existing VMI.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-controller deployment:
$ oc get deployment -n $NAMESPACE virt-controller -o yaml
Review the logs of the virt-controller pod:
$ oc logs -n $NAMESPACE <virt-controller>
Mitigation
This alert can have a variety of causes, including the following:
Node resource exhaustion
Not enough memory on the cluster
Nodes are down
The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.
Networking issues
Identify the root cause and fix it, if possible.
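For the resource-related causes above, a quick check of node pressure can help rule them in or out. A sketch, assuming cluster metrics are available and <node> is a placeholder:
$ oc adm top nodes
$ oc describe node <node> | grep -A 10 "Allocated resources"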
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
VirtControllerRESTErrorsBurst
Meaning
More than 80% of REST calls in virt-controller pods failed in the last 5 minutes.
The virt-controller has likely fully lost the connection to the API server.
This error is frequently caused by one of the following problems:
The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls (see the example query after this list).
The virt-controller pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
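One way to sample the API server's own request metrics from the command line; a minimal sketch, assuming permission to read the /metrics endpoint (metric names can differ across Kubernetes versions):
$ oc get --raw /metrics | grep -E '^apiserver_request_(total|duration_seconds_sum)' | head -n 20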
Impact
Status updates are not propagated and actions like migrations cannot take place. However, running workloads are not impacted.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
List the available virt-controller pods:
$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-controller
Check the virt-controller logs for error messages when connecting to the API server:
$ oc logs -n $NAMESPACE <virt-controller>
Mitigation
If the virt-controller pod cannot connect to the API server, delete the pod to force a restart:
$ oc delete pod -n $NAMESPACE <virt-controller>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
VirtControllerRESTErrorsHigh
Meaning
More than 5% of REST calls failed in virt-controller in the last 60 minutes.
This is most likely because virt-controller has partially lost connection to the API server.
This error is frequently caused by one of the following problems:
The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
The virt-controller pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact
Node-related actions, such as starting, migrating, and scheduling virtual machines, are delayed. Running workloads are not affected, but reporting their current status might be delayed.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
List the available virt-controller pods:
$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-controller
Check the virt-controller logs for error messages when connecting to the API server:
$ oc logs -n $NAMESPACE <virt-controller>
Mitigation
If the virt-controller pod cannot connect to the API server, delete the pod to force a restart:
$ oc delete pod -n $NAMESPACE <virt-controller>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
VirtHandlerDaemonSetRolloutFailing
Meaning
The virt-handler daemon set has failed to deploy on one or more worker nodes after 15 minutes.
Impact
This alert is a warning. It does not indicate that the virt-handler daemon set has failed to deploy on all nodes. Therefore, the normal lifecycle of virtual machines is not affected unless the cluster is overloaded.
Diagnosis
Identify worker nodes that do not have a running virt-handler pod:
Export the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-handler pods to identify pods that have not deployed:
$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
Obtain the name of the worker node of the virt-handler pod:
$ oc -n $NAMESPACE get pod <virt-handler> -o jsonpath='{.spec.nodeName}'
Mitigation
If the virt-handler pods failed to deploy because of insufficient resources, you can delete other pods on the affected worker node.
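Before deleting anything, listing the pods scheduled on that node helps identify what is consuming its resources. A sketch, where <node> is the worker node name obtained above:
$ oc get pods -A --field-selector spec.nodeName=<node> -o wide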
VirtHandlerRESTErrorsBurst
Meaning
More than 80% of REST calls failed in virt-handler in the last 5 minutes. This alert usually indicates that the virt-handler pods cannot connect to the API server.
This error is frequently caused by one of the following problems:
The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
The virt-handler pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact
Status updates are not propagated and node-related actions, such as migrations, fail. However, running workloads on the affected node are not impacted.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-handler pods:
$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
Check the virt-handler logs for error messages when connecting to the API server:
$ oc logs -n $NAMESPACE <virt-handler>
Mitigation
If the virt-handler pod cannot connect to the API server, delete the pod to force a restart:
$ oc delete pod -n $NAMESPACE <virt-handler>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
VirtHandlerRESTErrorsHigh
Meaning
More than 5% of REST calls failed in virt-handler in the last 60 minutes. This alert usually indicates that the virt-handler pods have partially lost connection to the API server.
This error is frequently caused by one of the following problems:
The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
The virt-handler pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact
Node-related actions, such as starting and migrating workloads, are delayed on the node that virt-handler is running on. Running workloads are not affected, but reporting their current status might be delayed.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-handler pods:
$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
Check the virt-handler logs for error messages when connecting to the API server:
$ oc logs -n $NAMESPACE <virt-handler>
Mitigation
If the virt-handler pod cannot connect to the API server, delete the pod to force a restart:
$ oc delete pod -n $NAMESPACE <virt-handler>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
VirtOperatorDown
Meaning
This alert fires when no virt-operator pod in the Running state has been detected for 10 minutes.
The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:
Installing, live-updating, and live-upgrading a cluster
Monitoring the life cycle of top-level controllers, such as virt-controller, virt-handler, and virt-launcher, and managing their reconciliation
Certain cluster-wide tasks, such as certificate rotation and infrastructure management
The virt-operator deployment has a default of two replicas.
Impact
This alert indicates a failure at the level of the cluster. Critical cluster-wide management functionality, such as certificate rotation, upgrades, and reconciliation of controllers, might not be available.
The virt-operator is not directly responsible for virtual machines (VMs) in the cluster. Therefore, its temporary unavailability does not significantly affect VM workloads.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-operator deployment:
$ oc -n $NAMESPACE get deploy virt-operator -o yaml
Obtain the details of the virt-operator deployment:
$ oc -n $NAMESPACE describe deploy virt-operator
Check the status of the virt-operator pods:
$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-operator
Check for node issues, such as a NotReady state:
$ oc get nodes
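The KubeVirt custom resource records operator conditions that can point to the failure. A sketch for printing them; note that the output might be stale while the operator is down:
$ oc -n $NAMESPACE get kubevirt -o jsonpath='{.items[0].status.conditions}'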
Mitigation
Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
VirtOperatorRESTErrorsBurst
Meaning
This alert fires when more than 80% of the REST calls in the virt-operator pods failed in the last 5 minutes. This usually indicates that the virt-operator pods cannot connect to the API server.
This error is frequently caused by one of the following problems:
The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
The virt-operator pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact
Cluster-level actions, such as upgrading and controller reconciliation, might not be available.
However, workloads such as virtual machines (VMs) and VM instances (VMIs) are not likely to be affected.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-operator pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
Check the virt-operator logs for error messages when connecting to the API server:
$ oc -n $NAMESPACE logs <virt-operator>
Obtain the details of the virt-operator pod:
$ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation
If the virt-operator pod cannot connect to the API server, delete the pod to force a restart:
$ oc delete pod -n $NAMESPACE <virt-operator>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
VirtOperatorRESTErrorsHigh
Meaning
This alert fires when more than 5% of the REST calls in virt-operator pods failed in the last 60 minutes. This usually indicates that the virt-operator pods cannot connect to the API server.
This error is frequently caused by one of the following problems:
The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
The virt-operator pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact
Cluster-level actions, such as upgrading and controller reconciliation, might be delayed.
However, workloads such as virtual machines (VMs) and VM instances (VMIs) are not likely to be affected.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-operator pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
Check the virt-operator logs for error messages when connecting to the API server:
$ oc -n $NAMESPACE logs <virt-operator>
Obtain the details of the virt-operator pod:
$ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation
If the virt-operator pod cannot connect to the API server, delete the pod to force a restart:
$ oc delete pod -n $NAMESPACE <virt-operator>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
VMCannotBeEvicted
Meaning
This alert fires when the eviction strategy of a virtual machine (VM) is set to LiveMigrate but the VM is not migratable.
Impact
Non-migratable VMs prevent node eviction. This condition affects operations such as node drain and updates.
Diagnosis
Check the VMI configuration to determine whether the value of evictionStrategy is LiveMigrate:
$ oc get vmis -o yaml
Check for a False status in the LIVE-MIGRATABLE column to identify VMIs that are not migratable:
$ oc get vmis -o wide
Obtain the details of the VMI and check status.conditions to identify the issue:
$ oc get vmi <vmi> -o yaml
Example output
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    message: cannot migrate VMI which does not use masquerade to connect
      to the pod network
    reason: InterfaceNotLiveMigratable
    status: "False"
    type: LiveMigratable
Mitigation
Set the evictionStrategy of the VM so that it is shut down on eviction instead of being live migrated, or resolve the issue that prevents the VMI from migrating.
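A sketch of changing the eviction strategy with a merge patch, assuming a hypothetical VM name and namespace; None is one accepted value in recent KubeVirt versions and means the VMI is shut down on eviction rather than live migrated:
$ oc patch vm <vm> -n <namespace> --type merge -p \
  '{"spec":{"template":{"spec":{"evictionStrategy":"None"}}}}'
Alternatively, resolve the migratability blocker reported in status.conditions, such as the non-masquerade pod network interface shown in the example output, so that live migration can proceed.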