Specifying nodes for OKD Virtualization components

Specify the nodes where you want to deploy OKD Virtualization Operators, workloads, and controllers by configuring node placement rules.

You can configure node placement for some components after installing OKD Virtualization, but virtual machines must not be present if you want to configure node placement for workloads.

About node placement for virtualization components

You might want to customize where OKD Virtualization deploys its components to ensure that:

  • Virtual machines only deploy on nodes that are intended for virtualization workloads.

  • Operators only deploy on infrastructure nodes.

  • Certain nodes are unaffected by OKD Virtualization. For example, you have workloads unrelated to virtualization running on your cluster, and you want those workloads to be isolated from OKD Virtualization.

How to apply node placement rules to virtualization components

You can specify node placement rules for a component by editing the corresponding object directly or by using the web console.

  • For the OKD Virtualization Operators that Operator Lifecycle Manager (OLM) deploys, edit the OLM Subscription object directly. Currently, you cannot configure node placement rules for the Subscription object by using the web console.

  • For components that the OKD Virtualization Operators deploy, edit the HyperConverged object directly or configure it by using the web console during OKD Virtualization installation.

  • For the hostpath provisioner, edit the HostPathProvisioner object directly or configure it by using the web console.

    You must schedule the hostpath provisioner and the virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run.
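The simplest way to satisfy this requirement is to give both objects the same node placement rules. The following sketch pins the hostpath provisioner and the virtualization workloads to the same set of nodes by using an identical nodeSelector in both objects; the label example.io/example-workloads-key = example-workloads-value is a placeholder that you would replace with your own:

```yaml
# Both objects use the same nodeSelector, so the hostpath provisioner
# and the virtualization workloads are scheduled on the same nodes.
# The label example.io/example-workloads-key is a placeholder.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloads:
    nodePlacement:
      nodeSelector:
        example.io/example-workloads-key: example-workloads-value
---
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload:
    nodeSelector:
      example.io/example-workloads-key: example-workloads-value
```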

Depending on the object, you can use one or more of the following rule types:

nodeSelector

Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.

affinity

Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, rather than a hard requirement, so that pods are still scheduled if the rule is not satisfied.

tolerations

Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint.
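For objects that support all three rule types, the rules can be combined in a single nodePlacement stanza. Node labels are applied with the `oc label node` command and taints with the `oc adm taint nodes` command. The following sketch is illustrative only; the label keys and the taint key=virtualization are placeholder names:

```yaml
# Sketch of a nodePlacement stanza combining all three rule types.
# All label keys and the taint key/value are placeholders.
nodePlacement:
  nodeSelector:                 # hard requirement: node labels must match exactly
    example.io/example-infra-key: example-infra-value
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:  # soft preference only
      - weight: 1
        preference:
          matchExpressions:
          - key: example.io/num-cpus
            operator: Gt
            values:
            - "8"
  tolerations:                  # allows scheduling onto nodes with a matching taint
  - key: "key"
    operator: "Equal"
    value: "virtualization"
    effect: "NoSchedule"
```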

Node placement in the OLM Subscription object

To specify the nodes where OLM deploys the OKD Virtualization Operators, edit the Subscription object during OKD Virtualization installation. You can include node placement rules in the spec.config field, as shown in the following example:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.11.0
  channel: "stable"
  config: (1)
```

(1) The config field supports nodeSelector and tolerations, but it does not support affinity.

Node placement in the HyperConverged object

To specify the nodes where OKD Virtualization deploys its components, you can include the nodePlacement object in the HyperConverged Cluster custom resource (CR) file that you create during OKD Virtualization installation. You can include nodePlacement under the spec.infra and spec.workloads fields, as shown in the following example:

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement: (1)
    ...
  workloads:
    nodePlacement:
    ...
```

(1) The nodePlacement fields support the nodeSelector, affinity, and tolerations fields.

Node placement in the HostPathProvisioner object

You can configure node placement rules in the spec.workload field of the HostPathProvisioner object that you create when you install the hostpath provisioner.

```yaml
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload: (1)
```

(1) The workload field supports the nodeSelector, affinity, and tolerations fields.

Example manifests

The following example YAML files use nodePlacement, affinity, and tolerations objects to customize node placement for OKD Virtualization components.

Operator Lifecycle Manager Subscription object

Example: Node placement with nodeSelector in the OLM Subscription object

In this example, nodeSelector is configured so that OLM places the OKD Virtualization Operators on nodes that are labeled with example.io/example-infra-key = example-infra-value.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.11.0
  channel: "stable"
  config:
    nodeSelector:
      example.io/example-infra-key: example-infra-value
```

Example: Node placement with tolerations in the OLM Subscription object

In this example, nodes that are reserved for OLM to deploy the OKD Virtualization Operators are tainted with key=virtualization:NoSchedule. Only pods with a matching toleration are scheduled to these nodes.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.11.0
  channel: "stable"
  config:
    tolerations:
    - key: "key"
      operator: "Equal"
      value: "virtualization"
      effect: "NoSchedule"
```

HyperConverged object

Example: Node placement with nodeSelector in the HyperConverged Cluster CR

In this example, nodeSelector is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-infra-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value.

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      nodeSelector:
        example.io/example-infra-key: example-infra-value
  workloads:
    nodePlacement:
      nodeSelector:
        example.io/example-workloads-key: example-workloads-value
```

Example: Node placement with affinity in the HyperConverged Cluster CR

In this example, affinity is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-infra-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value. Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled.

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-infra-key
                operator: In
                values:
                - example-infra-value
  workloads:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-workloads-key
                operator: In
                values:
                - example-workloads-value
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: example.io/num-cpus
                operator: Gt
                values:
                - "8"
```

Example: Node placement with tolerations in the HyperConverged Cluster CR

In this example, nodes that are reserved for OKD Virtualization components are tainted with key=virtualization:NoSchedule. Only pods with a matching toleration are scheduled to these nodes.

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloads:
    nodePlacement:
      tolerations:
      - key: "key"
        operator: "Equal"
        value: "virtualization"
        effect: "NoSchedule"
```

HostPathProvisioner object

Example: Node placement with nodeSelector in the HostPathProvisioner object

In this example, nodeSelector is configured so that workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value.

```yaml
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload:
    nodeSelector:
      example.io/example-workloads-key: example-workloads-value
```