OKD Virtualization cluster checkup framework
OKD Virtualization includes the following predefined checkups that can be used for cluster maintenance and troubleshooting:
Latency checkup: Verifies network connectivity and measures latency between two virtual machines (VMs) that are attached to a secondary network interface.
DPDK checkup: Verifies that a node can run a VM with a Data Plane Development Kit (DPDK) workload with zero packet loss.
The OKD Virtualization cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
About the OKD Virtualization cluster checkup framework
A checkup is an automated test workload that allows you to verify if a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup.
By using predefined checkups, cluster administrators and developers can improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. They can also review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly.
Running a predefined checkup in an existing namespace involves setting up a service account for the checkup, creating the Role and RoleBinding objects for the service account, enabling permissions for the checkup, and creating the input config map and the checkup job. You can run a checkup multiple times.
You must always:
Verify that the checkup image is from a trustworthy source before applying it.
Review the checkup permissions before creating the Role and RoleBinding objects.
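Both predefined checkups label their config map and job resources with the kiagnose/checkup-type label that is shown in the examples in this section. Assuming you keep that label, you can list any checkup resources that already exist in a namespace, for example:
$ oc get configmap,job -n <target_namespace> -l kiagnose/checkup-type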
Running a latency checkup
You use a predefined checkup to verify network connectivity and measure latency between two virtual machines (VMs) that are attached to a secondary network interface. The latency checkup uses the ping utility.
You run a latency checkup by performing the following steps:
Create a service account, roles, and rolebindings to provide cluster access permissions to the latency checkup.
Create a config map to provide the input to run the checkup and to store the results.
Create a job to run the checkup.
Review the results in the config map.
Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.
When you are finished, delete the latency checkup resources.
Prerequisites
You have installed the OpenShift CLI (oc).
The cluster has at least two worker nodes.
You configured a network attachment definition for a namespace.
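The input config map in this procedure references a NetworkAttachmentDefinition named blue-network. The following is a minimal sketch of such an object, assuming a Linux bridge named br1 already exists on each worker node and is connected to the same layer 2 segment; adjust the CNI configuration to match your environment:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: blue-network
  namespace: <target_namespace>
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "blue-network",
      "type": "bridge",
      "bridge": "br1",
      "ipam": {}
    }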
Procedure
Create a ServiceAccount, Role, and RoleBinding manifest for the latency checkup:
Example service account, role, and role binding manifest file
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: vm-latency-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: kubevirt-vm-latency-checker
rules:
- apiGroups: ["kubevirt.io"]
resources: ["virtualmachineinstances"]
verbs: ["get", "create", "delete"]
- apiGroups: ["subresources.kubevirt.io"]
resources: ["virtualmachineinstances/console"]
verbs: ["get"]
- apiGroups: ["k8s.cni.cncf.io"]
resources: ["network-attachment-definitions"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubevirt-vm-latency-checker
subjects:
- kind: ServiceAccount
name: vm-latency-checkup-sa
roleRef:
kind: Role
name: kubevirt-vm-latency-checker
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: kiagnose-configmap-access
rules:
- apiGroups: [ "" ]
resources: [ "configmaps" ]
verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kiagnose-configmap-access
subjects:
- kind: ServiceAccount
name: vm-latency-checkup-sa
roleRef:
kind: Role
name: kiagnose-configmap-access
apiGroup: rbac.authorization.k8s.io
Apply the ServiceAccount, Role, and RoleBinding manifest:
$ oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml (1)
1 <target_namespace> is the namespace where the checkup is to be run. This must be an existing namespace where the NetworkAttachmentDefinition object resides.
Create a ConfigMap manifest that contains the input parameters for the checkup:
Example input config map
apiVersion: v1
kind: ConfigMap
metadata:
name: kubevirt-vm-latency-checkup-config
labels:
kiagnose/checkup-type: kubevirt-vm-latency
data:
spec.timeout: 5m
spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
spec.param.networkAttachmentDefinitionName: "blue-network" (1)
spec.param.maxDesiredLatencyMilliseconds: "10" (2)
spec.param.sampleDurationSeconds: "5" (3)
spec.param.sourceNode: "worker1" (4)
spec.param.targetNode: "worker2" (5)
1 The name of the NetworkAttachmentDefinition object.
2 Optional: The maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the checkup fails.
3 Optional: The duration of the latency check, in seconds.
4 Optional: When specified, latency is measured from this node to the target node. If the source node is specified, the spec.param.targetNode field cannot be empty.
5 Optional: When specified, latency is measured from the source node to this node.
Apply the config map manifest in the target namespace:
$ oc apply -n <target_namespace> -f <latency_config_map>.yaml
Create a Job manifest to run the checkup:
Example job manifest
apiVersion: batch/v1
kind: Job
metadata:
name: kubevirt-vm-latency-checkup
labels:
kiagnose/checkup-type: kubevirt-vm-latency
spec:
backoffLimit: 0
template:
spec:
serviceAccountName: vm-latency-checkup-sa
restartPolicy: Never
containers:
- name: vm-latency-checkup
image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.0
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
runAsNonRoot: true
seccompProfile:
type: "RuntimeDefault"
env:
- name: CONFIGMAP_NAMESPACE
value: <target_namespace>
- name: CONFIGMAP_NAME
value: kubevirt-vm-latency-checkup-config
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
Apply the Job manifest:
$ oc apply -n <target_namespace> -f <latency_job>.yaml
Wait for the job to complete:
$ oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m
Review the results of the latency checkup by running the following command. If the maximum measured latency is greater than the value of the spec.param.maxDesiredLatencyMilliseconds attribute, the checkup fails and returns an error.
$ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml
Example output config map (success)
apiVersion: v1
kind: ConfigMap
metadata:
name: kubevirt-vm-latency-checkup-config
namespace: <target_namespace>
labels:
kiagnose/checkup-type: kubevirt-vm-latency
data:
spec.timeout: 5m
spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
spec.param.networkAttachmentDefinitionName: "blue-network"
spec.param.maxDesiredLatencyMilliseconds: "10"
spec.param.sampleDurationSeconds: "5"
spec.param.sourceNode: "worker1"
spec.param.targetNode: "worker2"
status.succeeded: "true"
status.failureReason: ""
status.startTimestamp: "2022-01-01T09:00:00Z"
status.completionTimestamp: "2022-01-01T09:00:07Z"
status.result.avgLatencyNanoSec: "177000"
status.result.maxLatencyNanoSec: "244000" (1)
status.result.measurementDurationSec: "5"
status.result.minLatencyNanoSec: "135000"
status.result.sourceNode: "worker1"
status.result.targetNode: "worker2"
1 The maximum measured latency in nanoseconds.
Optional: To view the detailed job log in case of checkup failure, use the following command:
$ oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>
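If you only need the checkup verdict, you can also extract individual status fields from the config map instead of reading the whole object. For example, the following command prints the status.succeeded and status.failureReason keys; the backslashes are required because the key names contain literal dots:
$ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o jsonpath='{.data.status\.succeeded}{"\n"}{.data.status\.failureReason}{"\n"}'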
Delete the job and config map that you previously created by running the following commands:
$ oc delete job -n <target_namespace> kubevirt-vm-latency-checkup
$ oc delete configmap -n <target_namespace> kubevirt-vm-latency-checkup-config
Optional: If you do not plan to run another checkup, delete the ServiceAccount, Role, and RoleBinding manifest:
$ oc delete -f <latency_sa_roles_rolebinding>.yaml
Running a DPDK checkup
Use a predefined checkup to verify that your OKD cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator and a VM running a test DPDK application.
You run a DPDK checkup by performing the following steps:
Create a service account, role, and role bindings for the DPDK checkup.
Create a config map to provide the input to run the checkup and to store the results.
Create a job to run the checkup.
Review the results in the config map.
Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.
When you are finished, delete the DPDK checkup resources.
Prerequisites
You have installed the OpenShift CLI (oc).
The cluster is configured to run DPDK applications.
The project is configured to run DPDK applications.
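For example, before you run the checkup you can list the network attachment definitions that are available in the project and confirm that the SR-IOV network you plan to reference in the config map is present:
$ oc get network-attachment-definitions -n <target_namespace>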
Procedure
Create a ServiceAccount, Role, and RoleBinding manifest for the DPDK checkup:
Example service account, role, and role binding manifest file
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: dpdk-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: kiagnose-configmap-access
rules:
- apiGroups: [ "" ]
resources: [ "configmaps" ]
verbs: [ "get", "update" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kiagnose-configmap-access
subjects:
- kind: ServiceAccount
name: dpdk-checkup-sa
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kiagnose-configmap-access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: kubevirt-dpdk-checker
rules:
- apiGroups: [ "kubevirt.io" ]
resources: [ "virtualmachineinstances" ]
verbs: [ "create", "get", "delete" ]
- apiGroups: [ "subresources.kubevirt.io" ]
resources: [ "virtualmachineinstances/console" ]
verbs: [ "get" ]
- apiGroups: [ "" ]
resources: [ "configmaps" ]
verbs: [ "create", "delete" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubevirt-dpdk-checker
subjects:
- kind: ServiceAccount
name: dpdk-checkup-sa
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubevirt-dpdk-checker
Apply the ServiceAccount, Role, and RoleBinding manifest:
$ oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml
Create a ConfigMap manifest that contains the input parameters for the checkup:
Example input config map
apiVersion: v1
kind: ConfigMap
metadata:
name: dpdk-checkup-config
labels:
kiagnose/checkup-type: kubevirt-dpdk
data:
spec.timeout: 10m
spec.param.networkAttachmentDefinitionName: <network_name> (1)
spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.3.1" (2)
spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.3.1" (3)
1 The name of the NetworkAttachmentDefinition object.
2 The container disk image for the traffic generator. In this example, the image is pulled from the upstream Project Quay Container Registry.
3 The container disk image for the VM under test. In this example, the image is pulled from the upstream Project Quay Container Registry.
Apply the ConfigMap manifest in the target namespace:
$ oc apply -n <target_namespace> -f <dpdk_config_map>.yaml
Create a Job manifest to run the checkup:
Example job manifest
apiVersion: batch/v1
kind: Job
metadata:
name: dpdk-checkup
labels:
kiagnose/checkup-type: kubevirt-dpdk
spec:
backoffLimit: 0
template:
spec:
serviceAccountName: dpdk-checkup-sa
restartPolicy: Never
containers:
- name: dpdk-checkup
image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.0
imagePullPolicy: Always
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
runAsNonRoot: true
seccompProfile:
type: "RuntimeDefault"
env:
- name: CONFIGMAP_NAMESPACE
value: <target_namespace>
- name: CONFIGMAP_NAME
value: dpdk-checkup-config
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
Apply the Job manifest:
$ oc apply -n <target_namespace> -f <dpdk_job>.yaml
Wait for the job to complete:
$ oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m
Review the results of the checkup by running the following command:
$ oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml
Example output config map (success)
apiVersion: v1
kind: ConfigMap
metadata:
name: dpdk-checkup-config
labels:
kiagnose/checkup-type: kubevirt-dpdk
data:
spec.timeout: 10m
spec.param.networkAttachmentDefinitionName: "dpdk-network-1"
spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.3.1"
spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.3.1"
status.succeeded: "true" (1)
status.failureReason: "" (2)
status.startTimestamp: "2023-07-31T13:14:38Z" (3)
status.completionTimestamp: "2023-07-31T13:19:41Z" (4)
status.result.trafficGenSentPackets: "480000000" (5)
status.result.trafficGenOutputErrorPackets: "0" (6)
status.result.trafficGenInputErrorPackets: "0" (7)
status.result.trafficGenActualNodeName: worker-dpdk1 (8)
status.result.vmUnderTestActualNodeName: worker-dpdk2 (9)
status.result.vmUnderTestReceivedPackets: "480000000" (10)
status.result.vmUnderTestRxDroppedPackets: "0" (11)
status.result.vmUnderTestTxDroppedPackets: "0" (12)
1 Specifies if the checkup is successful (true) or not (false).
2 The reason for failure if the checkup fails.
3 The time when the checkup started, in RFC 3339 time format.
4 The time when the checkup has completed, in RFC 3339 time format.
5 The number of packets sent from the traffic generator.
6 The number of error packets sent from the traffic generator.
7 The number of error packets received by the traffic generator.
8 The node on which the traffic generator VM was scheduled.
9 The node on which the VM under test was scheduled.
10 The number of packets received on the VM under test.
11 The ingress traffic packets that were dropped by the DPDK application.
12 The egress traffic packets that were dropped from the DPDK application.
Delete the job and config map that you previously created by running the following commands:
$ oc delete job -n <target_namespace> dpdk-checkup
$ oc delete configmap -n <target_namespace> dpdk-checkup-config
Optional: If you do not plan to run another checkup, delete the ServiceAccount, Role, and RoleBinding manifest:
$ oc delete -f <dpdk_sa_roles_rolebinding>.yaml
DPDK checkup config map parameters
The following table shows the mandatory and optional parameters that you can set in the data stanza of the input ConfigMap manifest when you run a cluster DPDK readiness checkup:
| Parameter | Description | Is Mandatory |
| --- | --- | --- |
| spec.timeout | The time, in minutes, before the checkup fails. | True |
| spec.param.networkAttachmentDefinitionName | The name of the NetworkAttachmentDefinition object. | True |
| spec.param.trafficGenContainerDiskImage | The container disk image for the traffic generator. The default value is quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.3.1. | False |
| spec.param.trafficGenTargetNodeName | The node on which the traffic generator VM is to be scheduled. The node should be configured to allow DPDK traffic. | False |
| spec.param.trafficGenPacketsPerSecond | The number of packets per second, in kilo (k) or million (m). The default value is 8m. | False |
| spec.param.vmUnderTestContainerDiskImage | The container disk image for the VM under test. The default value is quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.3.1. | False |
| spec.param.vmUnderTestTargetNodeName | The node on which the VM under test is to be scheduled. The node should be configured to allow DPDK traffic. | False |
| spec.param.testDuration | The duration, in minutes, for which the traffic generator runs. The default value is 5 minutes. | False |
| spec.param.portBandwidthGbps | The maximum bandwidth of the SR-IOV NIC. The default value is 10Gbps. | False |
| spec.param.verbose | When set to true, increases the verbosity of the checkup log. The default value is false. | False |
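For example, the following sketch pins the checkup VMs to specific nodes and enables verbose logging. It assumes the optional parameter names listed in the table above and otherwise reuses the values from the earlier input config map:
apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-checkup-config
  labels:
    kiagnose/checkup-type: kubevirt-dpdk
data:
  spec.timeout: 10m
  spec.param.networkAttachmentDefinitionName: <network_name>
  spec.param.trafficGenTargetNodeName: "worker-dpdk1"
  spec.param.vmUnderTestTargetNodeName: "worker-dpdk2"
  spec.param.verbose: "true"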
Building a container disk image for RHEL virtual machines
You can build a custom RHEL 8 OS image in qcow2 format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the spec.param.vmUnderTestContainerDiskImage attribute of the DPDK checkup config map.
To build a container disk image, you must create an image builder virtual machine (VM). The image builder VM is a RHEL 8 VM that can be used to build custom RHEL images.
Prerequisites
The image builder VM must run RHEL 8.7 and must have a minimum of 2 CPU cores, 4 GiB RAM, and 20 GB of free space in the /var directory.
You have installed the image builder tool and its CLI (composer-cli) on the VM.
You have installed the virt-customize tool:
# dnf install libguestfs-tools
You have installed the Podman CLI tool (podman).
Procedure
Verify that you can build a RHEL 8.7 image:
# composer-cli distros list
To run the composer-cli commands as non-root, add your user to the weldr or root groups:
# usermod -a -G weldr user
$ newgrp weldr
Enter the following command to create an image blueprint file in TOML format that contains the packages to be installed, kernel customizations, and the services to be disabled during boot time:
$ cat << EOF > dpdk-vm.toml
name = "dpdk_image"
description = "Image to use with the DPDK checkup"
version = "0.0.1"
distro = "rhel-87"
[[packages]]
name = "dpdk"
[[packages]]
name = "dpdk-tools"
[[packages]]
name = "driverctl"
[[packages]]
name = "tuned-profiles-cpu-partitioning"
[customizations.kernel]
append = "default_hugepagesz=1GB hugepagesz=1G hugepages=8 isolcpus=2-7"
[customizations.services]
disabled = ["NetworkManager-wait-online", "sshd"]
EOF
Push the blueprint file to the image builder tool by running the following command:
# composer-cli blueprints push dpdk-vm.toml
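Optionally, you can confirm that the blueprint was accepted before starting a compose, for example:
# composer-cli blueprints show dpdk_image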
Generate the system image by specifying the blueprint name and output file format. The Universally Unique Identifier (UUID) of the image is displayed when you start the compose process.
# composer-cli compose start dpdk_image qcow2
Wait for the compose process to complete. The compose status must show FINISHED before you can continue to the next step.
# composer-cli compose status
Enter the following command to download the qcow2 image file by specifying its UUID:
# composer-cli compose image <UUID>
Create the customization scripts by running the following commands:
$ cat <<EOF >customize-vm
echo isolated_cores=2-7 > /etc/tuned/cpu-partitioning-variables.conf
tuned-adm profile cpu-partitioning
echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf
EOF
$ cat <<EOF >first-boot
driverctl set-override 0000:06:00.0 vfio-pci
driverctl set-override 0000:07:00.0 vfio-pci
mkdir /mnt/huge
mount /mnt/huge --source nodev -t hugetlbfs -o pagesize=1GB
EOF
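The PCI addresses in the first-boot script (0000:06:00.0 and 0000:07:00.0) are examples. If the DPDK NICs in your VM under test appear at different addresses, you can list candidate network devices and their PCI addresses inside the VM, for example:
$ lspci -nn | grep -i ethernet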
Use the virt-customize tool to customize the image generated by the image builder tool:
$ virt-customize -a <UUID>.qcow2 --run=customize-vm --firstboot=first-boot --selinux-relabel
To create a Dockerfile that contains all the commands to build the container disk image, enter the following command:
$ cat << EOF > Dockerfile
FROM scratch
COPY <uuid>-disk.qcow2 /disk/
EOF
where:
<uuid>-disk.qcow2
Specifies the name of the custom image in qcow2 format.
Build and tag the container by running the following command:
$ podman build . -t dpdk-rhel:latest
Push the container disk image to a registry that is accessible from your cluster by running the following command:
$ podman push dpdk-rhel:latest
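The push succeeds only if the image tag resolves to a registry that your cluster can reach. For example, assuming a hypothetical registry host registry.example.com with a dpdk-checkup repository, you might tag and push the image as follows:
$ podman tag dpdk-rhel:latest registry.example.com/dpdk-checkup/dpdk-rhel:latest
$ podman push registry.example.com/dpdk-checkup/dpdk-rhel:latest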
Provide a link to the container disk image in the spec.param.vmUnderTestContainerDiskImage attribute in the DPDK checkup config map.