Postinstallation network configuration
By default, OKD Virtualization is installed with a single, internal pod network.
After you install OKD Virtualization, you can install networking Operators and configure additional networks.
Installing networking Operators
You must install the Kubernetes NMState Operator to configure a Linux bridge network for live migration or external access to virtual machines (VMs).
You can install the SR-IOV Operator to manage SR-IOV network devices and network attachments.
Installing the Kubernetes NMState Operator by using the web console
You can install the Kubernetes NMState Operator by using the web console. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.
Prerequisites
- You are logged in as a user with cluster-admin privileges.
Procedure
Select Operators → OperatorHub.
In the search field below All Items, enter nmstate and press Enter to search for the Kubernetes NMState Operator.
Click the Kubernetes NMState Operator search result.
Click Install to open the Install Operator window.
Click Install to install the Operator.
After the Operator finishes installing, click View Operator.
Under Provided APIs, click Create Instance to open the dialog box for creating an instance of kubernetes-nmstate.
In the Name field of the dialog box, ensure that the name of the instance is nmstate.
The name restriction is a known issue. The instance is a singleton for the entire cluster.
Accept the default settings and click Create to create the instance.
Summary
After the instance is created, the Operator deploys the NMState State Controller as a daemon set across all of the cluster nodes.
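You can also confirm the deployment from the CLI. The following commands are a sketch that assumes the Operator and its operands run in the openshift-nmstate namespace; adjust the namespace if your installation differs:

$ oc get daemonset -n openshift-nmstate

$ oc get pods -n openshift-nmstate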
Installing the SR-IOV Network Operator
As a cluster administrator, you can install the Single Root I/O Virtualization (SR-IOV) Network Operator by using the OKD CLI or the web console.
CLI: Installing the SR-IOV Network Operator
As a cluster administrator, you can install the Operator using the CLI.
Prerequisites
A cluster installed on bare-metal hardware with nodes that have hardware that supports SR-IOV.
Install the OpenShift CLI (oc).
An account with cluster-admin privileges.
Procedure
To create the openshift-sriov-network-operator namespace, enter the following command:

$ cat << EOF | oc create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sriov-network-operator
  annotations:
    workload.openshift.io/allowed: management
EOF
To create an OperatorGroup CR, enter the following command:
$ cat << EOF | oc create -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sriov-network-operators
  namespace: openshift-sriov-network-operator
spec:
  targetNamespaces:
  - openshift-sriov-network-operator
EOF
Subscribe to the SR-IOV Network Operator.
Run the following command to get the OKD major and minor version. It is required for the channel value in the next step.

$ OC_VERSION=$(oc version -o yaml | grep openshiftVersion | \
    grep -o '[0-9]*[.][0-9]*' | head -1)
To create a Subscription CR for the SR-IOV Network Operator, enter the following command:
$ cat << EOF | oc create -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sriov-network-operator-subscription
  namespace: openshift-sriov-network-operator
spec:
  channel: "${OC_VERSION}"
  name: sriov-network-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
To verify that the Operator is installed, enter the following command:
$ oc get csv -n openshift-sriov-network-operator \
-o custom-columns=Name:.metadata.name,Phase:.status.phase
Example output
Name Phase
sriov-network-operator.4.0-202310121402 Succeeded
Web console: Installing the SR-IOV Network Operator
As a cluster administrator, you can install the Operator using the web console.
Prerequisites
A cluster installed on bare-metal hardware with nodes that have hardware that supports SR-IOV.
Install the OpenShift CLI (oc).
An account with cluster-admin privileges.
Procedure
Install the SR-IOV Network Operator:
In the OKD web console, click Operators → OperatorHub.
Select SR-IOV Network Operator from the list of available Operators, and then click Install.
On the Install Operator page, under Installed Namespace, select Operator recommended Namespace.
Click Install.
Verify that the SR-IOV Network Operator is installed successfully:
Navigate to the Operators → Installed Operators page.
Ensure that SR-IOV Network Operator is listed in the openshift-sriov-network-operator project with a Status of InstallSucceeded.
During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.
If the Operator does not appear as installed, troubleshoot further:
Inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.
Navigate to the Workloads → Pods page and check the logs for pods in the openshift-sriov-network-operator project.
Check the namespace of the YAML file. If the annotation is missing, you can add the annotation workload.openshift.io/allowed=management to the Operator namespace with the following command:

$ oc annotate ns/openshift-sriov-network-operator workload.openshift.io/allowed=management
For single-node OpenShift clusters, the annotation workload.openshift.io/allowed=management is required for the namespace.
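If you prefer to run these checks from the CLI, the following commands are an equivalent sketch. Replace <pod_name> with the name of a failing pod:

$ oc get subscription -n openshift-sriov-network-operator

$ oc get installplan -n openshift-sriov-network-operator

$ oc get pods -n openshift-sriov-network-operator

$ oc logs <pod_name> -n openshift-sriov-network-operator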
Configuring a Linux bridge network
After you install the Kubernetes NMState Operator, you can configure a Linux bridge network for live migration or external access to virtual machines (VMs).
Creating a Linux bridge NNCP
You can create a NodeNetworkConfigurationPolicy (NNCP) manifest for a Linux bridge network.
Prerequisites
- You have installed the Kubernetes NMState Operator.
Procedure
Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy (1)
spec:
  desiredState:
    interfaces:
      - name: br1 (2)
        description: Linux bridge with eth1 as a port (3)
        type: linux-bridge (4)
        state: up (5)
        ipv4:
          enabled: false (6)
        bridge:
          options:
            stp:
              enabled: false (7)
          port:
            - name: eth1 (8)

1 Name of the policy.
2 Name of the interface.
3 Optional: Human-readable description of the interface.
4 The type of interface. This example creates a bridge.
5 The requested state for the interface after creation.
6 Disables IPv4 in this example.
7 Disables STP in this example.
8 The node NIC to which the bridge is attached.
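After you create the manifest, you can apply it and check its status. This is a sketch that assumes you saved the manifest as br1-eth1-policy.yaml, a file name chosen for this example:

$ oc apply -f br1-eth1-policy.yaml

$ oc get nodenetworkconfigurationpolicy br1-eth1-policy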
Creating a Linux bridge NAD by using the web console
You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the OKD web console.
A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.
Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported.
Procedure
In the web console, click Networking → NetworkAttachmentDefinitions.
Click Create Network Attachment Definition.
The network attachment definition must be in the same namespace as the pod or virtual machine.
Enter a unique Name and optional Description.
Select CNV Linux bridge from the Network Type list.
Enter the name of the bridge in the Bridge Name field.
Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.
Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod.
Click Create.
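For reference, the web console generates a NetworkAttachmentDefinition object similar to the following sketch. The cnv-bridge type, the bridge name br1, the VLAN tag, and the namespace shown here are illustrative assumptions; the exact content depends on the values that you enter in the form:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-network
  namespace: default
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "bridge-network",
    "type": "cnv-bridge",
    "bridge": "br1",
    "vlan": 100,
    "macspoofchk": true
  }'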
Configuring a network for live migration
After you have configured a Linux bridge network, you can configure a dedicated network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.
Configuring a dedicated secondary network for live migration
To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR).
Prerequisites
You installed the OpenShift CLI (oc).
You logged in to the cluster as a user with the cluster-admin role.
Each node has at least two Network Interface Cards (NICs).
The NICs for live migration are connected to the same VLAN.
Procedure
Create a NetworkAttachmentDefinition manifest according to the following example:

Example configuration file

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: my-secondary-network (1)
  namespace: kubevirt-hyperconverged
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "migration-bridge",
    "type": "macvlan",
    "master": "eth1", (2)
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts", (3)
      "range": "10.200.5.0/24" (4)
    }
  }'

1 Specify the name of the NetworkAttachmentDefinition object.
2 Specify the name of the NIC to be used for live migration.
3 Specify the name of the CNI plugin that provides the network for the NAD.
4 Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.

Open the HyperConverged CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged
Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR:

Example HyperConverged manifest

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  liveMigrationConfig:
    completionTimeoutPerGiB: 800
    network: <network> (1)
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2
    progressTimeout: 150
# ...

1 Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations.

Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network.
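You can watch the restart from the CLI. This sketch assumes the default kubevirt-hyperconverged namespace and the standard kubevirt.io=virt-handler pod label:

$ oc get pods -n kubevirt-hyperconverged -l kubevirt.io=virt-handler -w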
Verification
When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata.
$ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
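If the migration used the dedicated network, the returned address is within the range that you configured for the secondary network, for example (illustrative value):

10.200.5.14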
Selecting a dedicated network by using the web console
You can select a dedicated network for live migration by using the OKD web console.
Prerequisites
- You configured a Multus network for live migration.
Procedure
Navigate to Virtualization → Overview in the OKD web console.
Click the Settings tab and then click Live migration.
Select the network from the Live migration network list.
Configuring an SR-IOV network
After you install the SR-IOV Operator, you can configure an SR-IOV network.
Configuring SR-IOV network devices
The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OKD. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR).
When you apply the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes and, in some cases, reboot them. It might take several minutes for a configuration change to apply.
Prerequisites
You installed the OpenShift CLI (oc).
You have access to the cluster as a user with the cluster-admin role.
You have installed the SR-IOV Network Operator.
You have enough available nodes in your cluster to handle the evicted workload from drained nodes.
You have not selected any control plane nodes for SR-IOV network device configuration.
Procedure
Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: <name> (1)
  namespace: openshift-sriov-network-operator (2)
spec:
  resourceName: <sriov_resource_name> (3)
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true" (4)
  priority: <priority> (5)
  mtu: <mtu> (6)
  numVfs: <num> (7)
  nicSelector: (8)
    vendor: "<vendor_code>" (9)
    deviceID: "<device_id>" (10)
    pfNames: ["<pf_name>", ...] (11)
    rootDevices: ["<pci_bus_id>", "..."] (12)
  deviceType: vfio-pci (13)
  isRdma: false (14)
1 Specify a name for the CR object.
2 Specify the namespace where the SR-IOV Operator is installed.
3 Specify the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name.
4 Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes.
5 Optional: Specify an integer value between 0 and 99. A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99. The default value is 99.
6 Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
7 Specify the number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 128.
8 The nicSelector mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices, you must also specify a value for vendor, deviceID, or pfNames. If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device.
9 Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are either 8086 or 15b3.
10 Optional: Specify the device hex code of the SR-IOV network device. The only allowed values are 158b, 1015, and 1017.
11 Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device.
12 The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1.
13 The vfio-pci driver type is required for virtual functions in OKD Virtualization.
14 Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false. The default value is false.

If the isRdma flag is set to true, you can continue to use the RDMA enabled VF as a normal network device. A device can be used in either mode.

Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes".

Create the SriovNetworkNodePolicy object:

$ oc create -f <name>-sriov-node-network.yaml

where <name> specifies the name for this configuration.

After applying the configuration update, all the pods in the openshift-sriov-network-operator namespace transition to the Running status.

To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured.

$ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'
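When the node has finished syncing the SR-IOV configuration, the expected status is similar to the following example:

Succeeded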
Enabling load balancer service creation by using the web console
You can enable the creation of load balancer services for a virtual machine (VM) by using the OKD web console.
Prerequisites
You have configured a load balancer for the cluster.
You are logged in as a user with the cluster-admin role.
Procedure
Navigate to Virtualization → Overview.
On the Settings tab, click Cluster.
Expand General settings and SSH configuration.
Set SSH over LoadBalancer service to on.
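With this setting enabled, the console can create LoadBalancer services for SSH access to VMs. As a reference, the following virtctl command is a sketch of one way to expose a VM through a load balancer service from the CLI; the VM name, service name, and port are illustrative:

$ virtctl expose vm example-vm --name example-vm-ssh --type LoadBalancer --port 22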