Configuring local storage for virtual machines
Configure storage for your virtual machines. When configuring local storage, use the hostpath provisioner (HPP).
About the hostpath provisioner (HPP)
When you install the OKD Virtualization Operator, the Hostpath Provisioner Operator is automatically installed. The HPP is a local storage provisioner designed for OKD Virtualization and created by the Hostpath Provisioner Operator. To use the HPP, you must create an HPP custom resource.
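Before creating the custom resource, you can confirm that the Operator has registered the HPP custom resource definition. The CRD name below is inferred from the HostPathProvisioner kind and its hostpathprovisioner.kubevirt.io API group shown later in this section, so verify the exact name on your cluster:
$ oc get crd hostpathprovisioners.hostpathprovisioner.kubevirt.io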
In OKD Virtualization 4.10, the HPP Operator configures the Kubernetes CSI driver. The Operator also recognizes the existing (legacy) format of the custom resource. The legacy HPP and the CSI host path driver are supported in parallel for a number of releases; however, the legacy HPP will eventually no longer be supported. If you use the HPP, plan to create a storage class for the CSI driver as part of your migration strategy.
If you upgrade to OKD Virtualization version 4.10 on an existing cluster, the HPP Operator is upgraded and the system performs the following actions:
The CSI driver is installed.
The CSI driver is configured with the contents of your legacy custom resource.
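After the upgrade, you can verify that the CSI driver was installed by listing the registered CSIDriver object. The driver name shown here matches the CSI provisioner string used later in this section; confirm the exact name on your cluster:
$ oc get csidriver kubevirt.io.hostpath-provisioner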
If you install OKD Virtualization version 4.10 on a new cluster, you must perform the following actions:
Create the HPP custom resource, including a storagePools stanza.
Create a storage class for the CSI driver.
Creating the HPP custom resource with a storage pool
Storage pools allow you to specify the name and path that are used by the CSI driver.
Procedure
Create a YAML file for the HPP custom resource with a storagePools stanza. For example:
$ touch hostpathprovisioner_cr.yaml
Edit the file. For example:
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools: (1)
    - name: <any_name>
      path: "</var/myvolumes>" (2)
  workload:
    nodeSelector:
      kubernetes.io/os: linux
1 The storagePools stanza is an array to which you can add multiple entries.
2 Create directories under this node path. Read/write access is required. Ensure that the node-level directory (/var/myvolumes) is not on the same partition as the operating system. If it is on the same partition as the operating system, users can potentially fill the operating system partition and impact performance or cause the node to become unstable or unusable.
Save the file and exit.
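After you save the file, you can create the HostPathProvisioner object by applying the manifest. For example (depending on your configuration, you might also need to specify the namespace in which OKD Virtualization is installed):
$ oc create -f hostpathprovisioner_cr.yaml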
Creating a storage class
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class.
To use the hostpath provisioner (HPP), you must create an associated storage class for the CSI driver with the storagePools stanza.
You cannot update a StorageClass object's parameters after you create the object.
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned. To solve this problem, use the Kubernetes pod scheduler to bind the PVC to a PV on the correct node. By using the WaitForFirstConsumer value for volumeBindingMode, the binding and provisioning of the PV is delayed until a pod is created using the PVC.
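For illustration, consider a sketch of a PVC that uses the hostpath-csi storage class defined later in this section; the PVC name and requested size are placeholders:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-vm-disk # hypothetical PVC name
spec:
  storageClassName: hostpath-csi # CSI storage class created in this section
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi # placeholder size
With WaitForFirstConsumer, this PVC remains in the Pending state until a pod that uses it is scheduled, at which point the PV is provisioned on that pod's node.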
Creating a storage class for the CSI driver with the storagePools stanza
Use this procedure to create a storage class for use with the HPP CSI driver implementation. You must create this storage class to use HPP in OKD Virtualization 4.10 and later.
Procedure
Create a YAML file for defining the storage class. For example:
$ touch <storageclass_csi>.yaml
Edit the file. For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi (1)
provisioner: kubevirt.io.hostpath-provisioner (2)
reclaimPolicy: Delete (3)
volumeBindingMode: WaitForFirstConsumer (4)
parameters:
  storagePool: <any_name> (5)
1 Assign any meaningful name to the storage class. In this example, csi is used to specify that the class uses the CSI provisioner instead of the legacy provisioner. Choosing descriptive names for storage classes, based on legacy or CSI driver provisioning, eases implementation of your migration strategy.
2 The legacy provisioner uses kubevirt.io/hostpath-provisioner. The CSI driver uses kubevirt.io.hostpath-provisioner.
3 The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the storage class defaults to Delete.
4 The volumeBindingMode parameter determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a PV until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements.
5 <any_name> must match the name of the storage pool, which you define in the HPP custom resource.
Save the file and exit.
Create the StorageClass object:
$ oc create -f <storageclass_csi>.yaml
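Optionally, verify that the storage class now exists. For example, assuming the example name hostpath-csi:
$ oc get storageclass hostpath-csi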
Creating a storage class for the legacy hostpath provisioner
Use this procedure to create a storage class for the legacy hostpath provisioner (HPP). You do not need to explicitly add a storagePool parameter.
Procedure
Create a YAML file for defining the storage class. For example:
$ touch storageclass.yaml
Edit the file. For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-provisioner (1)
provisioner: kubevirt.io/hostpath-provisioner
reclaimPolicy: Delete (2)
volumeBindingMode: WaitForFirstConsumer (3)
1 Assign any meaningful name to the storage class. In this example, hostpath-provisioner is used to indicate that the class uses the legacy provisioner instead of the CSI provisioner. Choosing descriptive names for storage classes, based on legacy or CSI driver provisioning, eases implementation of your migration strategy.
2 The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the storage class defaults to Delete.
3 The volumeBindingMode value determines when dynamic provisioning and volume binding occur. Specify the WaitForFirstConsumer value to delay the binding and provisioning of a PV until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements.
Save the file and exit.
Create the StorageClass object:
$ oc create -f storageclass.yaml
In addition to configuring a basic storage pool for use with the HPP, you have the option of creating single storage pools with the pvcTemplate specification, as well as multiple storage pools.
Creating a storage pool using a pvcTemplate specification in a hostpath provisioner (HPP) custom resource
If you have a single large persistent volume (PV) on your node, you might want to virtually divide the volume and use one partition to store only the HPP volumes. By defining a storage pool using a pvcTemplate specification in the HPP custom resource, you can virtually split the PV into multiple smaller volumes, providing more flexibility in data allocation.
The pvcTemplate matches the spec portion of a persistent volume claim (PVC). For example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "iso-pvc"
  labels:
    app: containerized-data-importer
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: "http://cdi-file-host.cdi:80/tinyCore.iso.tar"
spec: (1)
  volumeMode: Block
  storageClassName: <any_storage_class>
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
1 A pvcTemplate is the spec (specification) section of a PVC.
The Operator creates a PVC from the PVC template for each node containing the HPP CSI driver. The PVC created from the PVC template consumes the single large PV, allowing the HPP to create smaller dynamic volumes.
You can create any combination of storage pools. You can combine standard storage pools with storage pools that use PVC templates in the storagePools stanza, as shown in the sketch that follows.
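As an illustration only, the following storagePools stanza combines a basic storage pool with a PVC template storage pool; the pool names and paths are hypothetical placeholders:
storagePools:
  - name: basic-pool # hypothetical basic storage pool
    path: "/var/myvolumes"
  - name: template-pool # hypothetical pool backed by a PVC template
    path: "/var/mytemplatevolumes"
    pvcTemplate:
      volumeMode: Filesystem
      storageClassName: <any_storage_class>
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi # placeholder size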
Procedure
Create a YAML file for the CSI custom resource specifying a single pvcTemplate storage pool. For example:
$ touch hostpathprovisioner_cr_pvc.yaml
Edit the file. For example:
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools: (1)
    - name: <any_name>
      path: "</var/myvolumes>" (2)
      pvcTemplate:
        volumeMode: Block (3)
        storageClassName: <any_storage_class> (4)
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi (5)
  workload:
    nodeSelector:
      kubernetes.io/os: linux
1 The storagePools stanza is an array to which you can add multiple entries.
2 Create directories under this node path. Read/write access is required. Ensure that the node-level directory (/var/myvolumes) is not on the same partition as the operating system. If it is, users of the volumes can potentially fill the operating system partition and impact performance or cause the node to become unstable or unusable.
3 The volumeMode parameter is optional and can be either Block or Filesystem, but it must match the provisioned volume format, if used. The default value is Filesystem. If volumeMode is Block, the mounting pod creates an XFS file system on the block volume before mounting it.
4 If the storageClassName parameter is omitted, the default storage class is used to create PVCs. If you omit storageClassName, ensure that the HPP storage class is not the default storage class.
5 You can specify statically or dynamically provisioned storage. In either case, ensure the requested storage size is appropriate for the volume you want to virtually divide, or the PVC cannot be bound to the large PV. If the storage class you are using uses dynamically provisioned storage, pick an allocation size that matches the size of a typical request.
Save the file and exit.
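As with the basic custom resource, you can then apply the manifest to create the HostPathProvisioner object. For example (depending on your configuration, you might also need to specify the namespace in which OKD Virtualization is installed):
$ oc create -f hostpathprovisioner_cr_pvc.yaml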