Specifying nodes for OKD Virtualization components
- About node placement for virtualization components
- Example manifests
Specify the nodes where you want to deploy OKD Virtualization Operators, workloads, and controllers by configuring node placement rules.
You can configure node placement for some components after installing OKD Virtualization, but there must not be virtual machines present if you want to configure node placement for workloads.
About node placement for virtualization components
You might want to customize where OKD Virtualization deploys its components to ensure that:
- Virtual machines only deploy on nodes that are intended for virtualization workloads.
- Operators only deploy on infrastructure nodes.
- Certain nodes are unaffected by OKD Virtualization. For example, you have workloads unrelated to virtualization running on your cluster, and you want those workloads to be isolated from OKD Virtualization.
How to apply node placement rules to virtualization components
You can specify node placement rules for a component by editing the corresponding object directly or by using the web console.
- For the OKD Virtualization Operators that Operator Lifecycle Manager (OLM) deploys, edit the OLM Subscription object directly. Currently, you cannot configure node placement rules for the Subscription object by using the web console.
- For components that the OKD Virtualization Operators deploy, edit the HyperConverged object directly or configure it by using the web console during OKD Virtualization installation.
- For the hostpath provisioner, edit the HostPathProvisioner object directly or configure it by using the web console.
You must schedule the hostpath provisioner and the virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run.
Depending on the object, you can use one or more of the following rule types (a short YAML sketch of each rule type follows this list):
- nodeSelector: Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
- affinity: Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, rather than a hard requirement, so that pods are still scheduled if the rule is not satisfied.
- tolerations: Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint.
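The following fragment is a minimal sketch of what each rule type looks like in YAML. It reuses the example.io labels and the example taint from the example manifests later on this page; the sections below show where such a fragment is placed for each object.
nodeSelector:
  example.io/example-workloads-key: example-workloads-value
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: example.io/num-cpus
          operator: Gt
          values:
          - "8"
tolerations:
- key: "key"
  operator: "Equal"
  value: "virtualization"
  effect: "NoSchedule"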
Node placement in the OLM Subscription object
To specify the nodes where OLM deploys the OKD Virtualization Operators, edit the Subscription object during OKD Virtualization installation. You can include node placement rules in the spec.config field, as shown in the following example:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.9.1
  channel: "stable"
  config: (1)
(1) The config field supports nodeSelector and tolerations, but it does not support affinity.
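For example, the config field can combine a node selector with a toleration. The following is a minimal sketch of that part of the Subscription spec, reusing the example.io label and the example taint from the example manifests later on this page:
spec:
  config:
    nodeSelector:
      example.io/example-infra-key: example-infra-value
    tolerations:
    - key: "key"
      operator: "Equal"
      value: "virtualization"
      effect: "NoSchedule"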
Node placement in the HyperConverged object
To specify the nodes where OKD Virtualization deploys its components, you can include the nodePlacement object in the HyperConverged Cluster custom resource (CR) file that you create during OKD Virtualization installation. You can include nodePlacement under the spec.infra and spec.workloads fields, as shown in the following example:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement: (1)
      ...
  workloads:
    nodePlacement:
      ...
(1) The nodePlacement fields support the nodeSelector, affinity, and tolerations fields.
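The rule types can also be combined for a single component. The following is a minimal sketch of a HyperConverged CR that restricts workloads to labeled nodes and tolerates a taint on those nodes; the example.io label and the taint key are reused from the example manifests later on this page:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloads:
    nodePlacement:
      nodeSelector:
        example.io/example-workloads-key: example-workloads-value
      tolerations:
      - key: "key"
        operator: "Equal"
        value: "virtualization"
        effect: "NoSchedule"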
Node placement in the HostPathProvisioner object
You can configure node placement rules in the spec.workload field of the HostPathProvisioner object that you create when you install the hostpath provisioner.
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload: (1)
(1) The workload field supports the nodeSelector, affinity, and tolerations fields.
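Because the hostpath provisioner must be scheduled on the same nodes as the virtualization components, its workload rules typically mirror the spec.workloads.nodePlacement rules in the HyperConverged object. For example, if those nodes are tainted, the workload field needs a matching toleration. The following is a minimal sketch that reuses the example taint from the example manifests later on this page:
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload:
    tolerations:
    - key: "key"
      operator: "Equal"
      value: "virtualization"
      effect: "NoSchedule"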
Additional resources
Node placement rules
Installing OKD Virtualization
Configuring the hostpath provisioner
Example manifests
The following example YAML files use nodePlacement, affinity, and tolerations objects to customize node placement for OKD Virtualization components.
Operator Lifecycle Manager Subscription object
Example: Node placement with nodeSelector in the OLM Subscription object
In this example, nodeSelector is configured so that OLM places the OKD Virtualization Operators on nodes that are labeled with example.io/example-infra-key = example-infra-value.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.9.1
  channel: "stable"
  config:
    nodeSelector:
      example.io/example-infra-key: example-infra-value
Example: Node placement with tolerations in the OLM Subscription object
In this example, nodes that are reserved for OLM to deploy the OKD Virtualization Operators are tainted with key=virtualization:NoSchedule. Only pods with a matching toleration are scheduled to these nodes.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.9.1
  channel: "stable"
  config:
    tolerations:
    - key: "key"
      operator: "Equal"
      value: "virtualization"
      effect: "NoSchedule"
HyperConverged object
Example: Node placement with nodeSelector in the HyperConverged Cluster CR
In this example, nodeSelector is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-infra-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      nodeSelector:
        example.io/example-infra-key: example-infra-value
  workloads:
    nodePlacement:
      nodeSelector:
        example.io/example-workloads-key: example-workloads-value
Example: Node placement with affinity in the HyperConverged Cluster CR
In this example, affinity is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-infra-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value. Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-infra-key
                operator: In
                values:
                - example-infra-value
  workloads:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-workloads-key
                operator: In
                values:
                - example-workloads-value
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: example.io/num-cpus
                operator: Gt
                values:
                - "8"
Example: Node placement with tolerations in the HyperConverged Cluster CR
In this example, nodes that are reserved for OKD Virtualization components are tainted with key=virtualization:NoSchedule. Only pods with a matching toleration are scheduled to these nodes.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloads:
    nodePlacement:
      tolerations:
      - key: "key"
        operator: "Equal"
        value: "virtualization"
        effect: "NoSchedule"
HostPathProvisioner object
Example: Node placement with nodeSelector in the HostPathProvisioner object
In this example, nodeSelector is configured so that workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value.
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload:
    nodeSelector:
      example.io/example-workloads-key: example-workloads-value