Connecting a virtual machine to an SR-IOV network
You can connect a virtual machine (VM) to a Single Root I/O Virtualization (SR-IOV) network by performing the following steps:
Configure an SR-IOV network device.
Configure an SR-IOV network.
Connect the VM to the SR-IOV network.
Prerequisites
You must have enabled global SR-IOV and VT-d settings in the firmware for the host.
You must have installed the SR-IOV Network Operator.
Configuring SR-IOV network devices
The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OKD. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR).
When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply.
Prerequisites
You have installed the OpenShift CLI (oc).
You have access to the cluster as a user with the cluster-admin role.
You have installed the SR-IOV Network Operator.
You have enough available nodes in your cluster to handle the evicted workload from drained nodes.
You have not selected any control plane nodes for SR-IOV network device configuration.
Procedure
Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: <name> (1)
  namespace: openshift-sriov-network-operator (2)
spec:
  resourceName: <sriov_resource_name> (3)
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true" (4)
  priority: <priority> (5)
  mtu: <mtu> (6)
  numVfs: <num> (7)
  nicSelector: (8)
    vendor: "<vendor_code>" (9)
    deviceID: "<device_id>" (10)
    pfNames: ["<pf_name>", ...] (11)
    rootDevices: ["<pci_bus_id>", "..."] (12)
  deviceType: vfio-pci (13)
  isRdma: false (14)
1 Specify a name for the CR object.
2 Specify the namespace where the SR-IOV Operator is installed.
3 Specify the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name.
4 Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes.
5 Optional: Specify an integer value between 0 and 99. A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99. The default value is 99.
6 Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
7 Specify the number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 128.
8 The nicSelector mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices, you must also specify a value for vendor, deviceID, or pfNames. If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device.
9 Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are 8086 or 15b3.
10 Optional: Specify the device hex code of the SR-IOV network device. The only allowed values are 158b, 1015, and 1017.
11 Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device.
12 The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1.
13 The vfio-pci driver type is required for virtual functions in OKD Virtualization.
14 Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false. The default value is false. If the isRdma flag is set to true, you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode.
Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes".
Create the SriovNetworkNodePolicy object:
$ oc create -f <name>-sriov-node-network.yaml
where <name> specifies the name for this configuration.
After applying the configuration update, all the pods in the sriov-network-operator namespace transition to the Running status.
To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured.
$ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'
Configuring an SR-IOV additional network
You can configure an additional network that uses SR-IOV hardware by creating an SriovNetwork object.
When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object.
Do not modify or delete an SriovNetwork object if it is attached to any pods or virtual machines in a running state.
Prerequisites
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Procedure
- Create the following SriovNetwork object, and then save the YAML in the <name>-sriov-network.yaml file. Replace <name> with a name for this additional network. A filled-in example manifest is shown after this procedure.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: <name> (1)
  namespace: openshift-sriov-network-operator (2)
spec:
  resourceName: <sriov_resource_name> (3)
  networkNamespace: <target_namespace> (4)
  vlan: <vlan> (5)
  spoofChk: "<spoof_check>" (6)
  linkState: <link_state> (7)
  maxTxRate: <max_tx_rate> (8)
  minTxRate: <min_tx_rate> (9)
  vlanQoS: <vlan_qos> (10)
  trust: "<trust_vf>" (11)
  capabilities: <capabilities> (12)
1 Replace <name> with a name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name.
2 Specify the namespace where the SR-IOV Network Operator is installed.
3 Replace <sriov_resource_name> with the value of the .spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network.
4 Replace <target_namespace> with the target namespace for the SriovNetwork object. Only pods or virtual machines in the target namespace can attach to the SriovNetwork object.
5 Optional: Replace <vlan> with a Virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095. The default value is 0.
6 Optional: Replace <spoof_check> with the spoof check mode of the VF. The allowed values are the strings "on" and "off".
7 Optional: Replace <link_state> with the link state of the virtual function (VF). The allowed values are enable, disable, and auto.
8 Optional: Replace <max_tx_rate> with a maximum transmission rate, in Mbps, for the VF.
9 Optional: Replace <min_tx_rate> with a minimum transmission rate, in Mbps, for the VF. This value must be less than or equal to the maximum transmission rate.
10 Optional: Replace <vlan_qos> with an IEEE 802.1p priority level for the VF. The default value is 0.
11 Optional: Replace <trust_vf> with the trust mode of the VF. The allowed values are the strings "on" and "off".
12 Optional: Replace <capabilities> with the capabilities to configure for this network.
To create the object, enter the following command. Replace <name> with a name for this additional network.
$ oc create -f <name>-sriov-network.yaml
Optional: To confirm that the NetworkAttachmentDefinition object associated with the SriovNetwork object that you created in the previous step exists, enter the following command. Replace <namespace> with the namespace that you specified in the SriovNetwork object.
$ oc get net-attach-def -n <namespace>
Connecting a virtual machine to an SR-IOV network
You can connect the virtual machine (VM) to the SR-IOV network by including the network details in the VM configuration.
Procedure
Include the SR-IOV network details in the spec.domain.devices.interfaces and spec.networks fields of the VM configuration:
kind: VirtualMachine
# ...
spec:
  domain:
    devices:
      interfaces:
      - name: <default> (1)
        masquerade: {} (2)
      - name: <nic1> (3)
        sriov: {}
  networks:
  - name: <default> (4)
    pod: {}
  - name: <nic1> (5)
    multus:
      networkName: <sriov-network> (6)
# ...
1 A unique name for the interface that is connected to the pod network.
2 The masquerade binding to the default pod network.
3 A unique name for the SR-IOV interface.
4 The name of the pod network interface. This must be the same as the interfaces.name that you defined earlier.
5 The name of the SR-IOV interface. This must be the same as the interfaces.name that you defined earlier.
6 The name of the SR-IOV network attachment definition.
Apply the virtual machine configuration:
$ oc apply -f <vm-sriov.yaml> (1)
1 The name of the virtual machine YAML file.
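As a sketch only, a VM manifest that attaches both the default pod network and an SR-IOV network might contain the following interface and network entries. The VM name sriov-example-vm and the network attachment definition name example-sriov-network are illustrative values, and other required VM fields are omitted:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: sriov-example-vm
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}   # default pod network binding
          - name: nic1
            sriov: {}        # SR-IOV interface
          # ... other devices, such as disks, omitted
      networks:
      - name: default
        pod: {}
      - name: nic1
        multus:
          networkName: example-sriov-network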
Configuring a cluster for DPDK workloads
You can use the following procedure to configure an OKD cluster to run Data Plane Development Kit (DPDK) workloads.
Configuring a cluster for DPDK workloads is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
You have access to the cluster as a user with cluster-admin permissions.
You have installed the OpenShift CLI (oc).
You have installed the SR-IOV Network Operator.
You have installed the Node Tuning Operator.
Procedure
Map the topology of your compute nodes to determine which Non-Uniform Memory Access (NUMA) CPUs are isolated for DPDK applications and which ones are reserved for the operating system (OS).
Label a subset of the compute nodes with a custom role; for example, worker-dpdk:
$ oc label node <node_name> node-role.kubernetes.io/worker-dpdk=""
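You can optionally verify that the label was applied by listing the nodes that carry it:
$ oc get nodes -l node-role.kubernetes.io/worker-dpdk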
Create a new MachineConfigPool manifest that contains the worker-dpdk label in the spec.machineConfigSelector object:
Example MachineConfigPool manifest
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-dpdk
  labels:
    machineconfiguration.openshift.io/role: worker-dpdk
spec:
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values:
          - worker
          - worker-dpdk
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-dpdk: ""
Create a PerformanceProfile manifest that applies to the labeled nodes and the machine config pool that you created in the previous steps. The performance profile specifies the CPUs that are isolated for DPDK applications and the CPUs that are reserved for housekeeping.
Example PerformanceProfile manifest
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: profile-1
spec:
  cpu:
    isolated: 4-39,44-79
    reserved: 0-3,40-43
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - count: 32
        node: 0
        size: 1G
  net:
    userLevelNetworking: true
  nodeSelector:
    node-role.kubernetes.io/worker-dpdk: ""
  numa:
    topologyPolicy: single-numa-node
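If you saved the MachineConfigPool and PerformanceProfile manifests to files, you can apply them with oc create. The file names in this example are illustrative:
$ oc create -f mcp-worker-dpdk.yaml
$ oc create -f performance-profile-1.yaml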
The compute nodes automatically restart after you apply the MachineConfigPool and PerformanceProfile manifests.
Retrieve the name of the generated RuntimeClass resource from the status.runtimeClass field of the PerformanceProfile object:
$ oc get performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{"\n"}'
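The generated name typically has the form performance-<profile_name>, so for the profile in this example you can expect a value similar to the following:
performance-profile-1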
Set the previously obtained RuntimeClass name as the default container runtime class for the virt-launcher pods by adding the following annotation to the HyperConverged custom resource (CR):
$ oc annotate --overwrite -n openshift-cnv hco kubevirt-hyperconverged \
kubevirt.kubevirt.io/jsonpatch='[{"op": "add", "path": "/spec/configuration/defaultRuntimeClass", "value": <runtimeclass_name>}]'
Adding the annotation to the HyperConverged CR changes a global setting that affects all VMs that are created after the annotation is applied. Setting this annotation breaches support of the OKD Virtualization instance and must be used only on test clusters. For best performance, apply for a support exception.
Create an SriovNetworkNodePolicy object with the spec.deviceType field set to vfio-pci:
Example SriovNetworkNodePolicy manifest
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-1
  namespace: openshift-sriov-network-operator
spec:
  resourceName: intel_nics_dpdk
  deviceType: vfio-pci
  mtu: 9000
  numVfs: 4
  priority: 99
  nicSelector:
    vendor: "8086"
    deviceID: "1572"
    pfNames:
      - eno3
    rootDevices:
      - "0000:19:00.2"
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
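After the policy is applied, you can optionally confirm that the virtual functions are advertised as an allocatable resource on a configured node. The resource appears with the openshift.io/ prefix, for example openshift.io/intel_nics_dpdk for the policy above:
$ oc get node <node_name> -o jsonpath='{.status.allocatable}'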
Configuring a project for DPDK workloads
You can configure the project to run DPDK workloads on SR-IOV hardware.
Prerequisites
- Your cluster is configured to run DPDK workloads.
Procedure
Create a namespace for your DPDK applications:
$ oc create ns dpdk-checkup-ns
Create an SriovNetwork object that references the SriovNetworkNodePolicy object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object.
Example SriovNetwork manifest
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: dpdk-sriovnetwork
  namespace: openshift-sriov-network-operator
spec:
  ipam: |
    {
      "type": "host-local",
      "subnet": "10.56.217.0/24",
      "rangeStart": "10.56.217.171",
      "rangeEnd": "10.56.217.181",
      "routes": [{
        "dst": "0.0.0.0/0"
      }],
      "gateway": "10.56.217.1"
    }
  networkNamespace: dpdk-checkup-ns (1)
  resourceName: intel_nics_dpdk (2)
  spoofChk: "off"
  trust: "on"
  vlan: 1019
1 The namespace where the NetworkAttachmentDefinition object is deployed.
2 The value of the spec.resourceName attribute of the SriovNetworkNodePolicy object that was created when configuring the cluster for DPDK workloads.
Optional: Run the virtual machine latency checkup to verify that the network is properly configured.
Optional: Run the DPDK checkup to verify that the namespace is ready for DPDK workloads.
Configuring a virtual machine for DPDK workloads
You can run Data Plane Development Kit (DPDK) workloads on virtual machines (VMs) to achieve lower latency and higher throughput for faster packet processing in the user space. DPDK uses the SR-IOV network for hardware-based I/O sharing.
Prerequisites
Your cluster is configured to run DPDK workloads.
You have created and configured the project in which the VM will run.
Procedure
Edit the VirtualMachine manifest to include information about the SR-IOV network interface, CPU topology, CRI-O annotations, and huge pages:
Example VirtualMachine manifest
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel-dpdk-vm
spec:
  running: true
  template:
    metadata:
      annotations:
        cpu-load-balancing.crio.io: disable (1)
        cpu-quota.crio.io: disable (2)
        irq-load-balancing.crio.io: disable (3)
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker-dpdk: "" (4)
      domain:
        cpu:
          sockets: 1 (5)
          cores: 5 (6)
          threads: 2
          dedicatedCpuPlacement: true
          isolateEmulatorThread: true
        devices:
          interfaces:
            - masquerade: {}
              name: default
            - model: virtio
              name: nic-east
              pciAddress: '0000:07:00.0'
              sriov: {}
          networkInterfaceMultiqueue: true
          rng: {}
        memory:
          hugepages:
            pageSize: 1Gi (7)
        resources:
          requests:
            memory: 8Gi
      networks:
        - name: default
          pod: {}
        - multus:
            networkName: dpdk-net (8)
          name: nic-east
# ...
1 This annotation specifies that load balancing is disabled for CPUs that are used by the container.
2 This annotation specifies that the CPU quota is disabled for CPUs that are used by the container.
3 This annotation specifies that Interrupt Request (IRQ) load balancing is disabled for CPUs that are used by the container.
4 The label that is used in the MachineConfigPool and PerformanceProfile manifests that were created when configuring the cluster for DPDK workloads.
5 The number of sockets inside the VM. This field must be set to 1 for the CPUs to be scheduled from the same Non-Uniform Memory Access (NUMA) node.
6 The number of cores inside the VM. This must be a value greater than or equal to 1. In this example, the VM is scheduled with 5 hyper-threads or 10 CPUs.
7 The size of the huge pages. The possible values for the x86-64 architecture are 1Gi and 2Mi. In this example, the request is for 8 huge pages of size 1Gi.
8 The name of the SR-IOV NetworkAttachmentDefinition object.
Save and exit the editor.
Apply the VirtualMachine manifest:
$ oc apply -f <file_name>.yaml
Configure the guest operating system. The following example shows the configuration steps for a Red Hat Enterprise Linux 8 guest OS:
Configure isolated VM CPUs and specify huge pages by using the GRUB bootloader command-line interface. In the following example, eight 1G huge pages are specified. The first two CPUs (0 and 1) are set aside for housekeeping tasks and the rest are isolated for the DPDK application.
$ grubby --update-kernel=ALL --args="default_hugepagesz=1GB hugepagesz=1G hugepages=8 isolcpus=2-9"
To achieve low-latency tuning by using the cpu-partitioning profile in the TuneD application, run the following commands:
$ dnf install -y tuned-profiles-cpu-partitioning
$ echo isolated_cores=2-9 > /etc/tuned/cpu-partitioning-variables.conf
$ tuned-adm profile cpu-partitioning
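You can optionally confirm that the profile is active:
$ tuned-adm active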
Override the SR-IOV NIC driver by using the driverctl device driver control utility:
$ dnf install -y driverctl
$ driverctl set-override 0000:07:00.0 vfio-pci
Restart the VM to apply the changes.
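Optionally, after the restart you can verify the huge page allocation and the driver override from inside the guest, for example:
$ grep -i hugepages /proc/meminfo
$ driverctl list-overrides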