Configuring local storage for virtual machines
- About the hostpath provisioner
- Configuring SELinux for the hostpath provisioner on Fedora CoreOS (FCOS) 8
- Using the hostpath provisioner to enable local storage
- Creating a storage class
You can configure local storage for your virtual machines by using the hostpath provisioner feature.
About the hostpath provisioner
The hostpath provisioner is a local storage provisioner designed for OKD Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.
When you install the OKD Virtualization Operator, the hostpath provisioner Operator is automatically installed. To use it, you must:
- Configure SELinux:
  - If you use Fedora CoreOS (FCOS) 8 workers, you must create a MachineConfig object on each node.
  - Otherwise, apply the SELinux label container_file_t to the persistent volume (PV) backing directory on each node.
- Create a HostPathProvisioner custom resource.
- Create a StorageClass object for the hostpath provisioner.
The hostpath provisioner Operator deploys the provisioner as a DaemonSet on each node when you create its custom resource. In the custom resource file, you specify the backing directory for the persistent volumes that the hostpath provisioner creates.
Configuring SELinux for the hostpath provisioner on Fedora CoreOS (FCOS) 8
You must configure SELinux before you create the HostPathProvisioner custom resource. To configure SELinux on Fedora CoreOS (FCOS) 8 workers, you must create a MachineConfig object on each node.
Prerequisites
Create a backing directory on each node for the persistent volumes (PVs) that the hostpath provisioner creates.
The backing directory must not be located in the filesystem’s root directory because the / partition is read-only on FCOS. For example, you can use /var/<directory_name> but not /<directory_name>.
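For example, the following command creates a backing directory under /var on a node. The directory name hpvolumes is only an illustration; you can use any directory name outside of the root directory:
$ sudo mkdir -p /var/hpvolumes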
Procedure
Create the MachineConfig file. For example:
$ touch machineconfig.yaml
Edit the file, ensuring that you include the directory where you want the hostpath provisioner to create PVs. For example:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 50-set-selinux-for-hostpath-provisioner
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Set SELinux chcon for hostpath provisioner
            Before=kubelet.service
            [Service]
            ExecStart=/usr/bin/chcon -Rt container_file_t <backing_directory_path> (1)
            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: hostpath-provisioner.service
1 Specify the backing directory where you want the provisioner to create PVs. This directory must not be located in the filesystem’s root directory (/).
Create the MachineConfig object:
$ oc create -f machineconfig.yaml -n <namespace>
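Optionally, you can confirm that the object exists and, after the Machine Config Operator applies it and the nodes restart, check the systemd unit on a worker node. The node name is a placeholder, and the unit name assumes the example manifest above:
$ oc get machineconfig 50-set-selinux-for-hostpath-provisioner
$ oc debug node/<node_name> -- chroot /host systemctl status hostpath-provisioner.service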
Using the hostpath provisioner to enable local storage
To deploy the hostpath provisioner and enable your virtual machines to use local storage, first create a HostPathProvisioner custom resource.
Prerequisites
Create a backing directory on each node for the persistent volumes (PVs) that the hostpath provisioner creates.
The backing directory must not be located in the filesystem’s root directory because the / partition is read-only on Fedora CoreOS (FCOS). For example, you can use /var/<directory_name> but not /<directory_name>.
Apply the SELinux context container_file_t to the PV backing directory on each node. For example:
$ sudo chcon -t container_file_t -R <backing_directory_path>
If you use Fedora CoreOS (FCOS) 8 workers, you must configure SELinux by using a MachineConfig manifest instead.
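To confirm that the container_file_t context was applied, you can optionally list the SELinux context of the backing directory on each node. This check assumes the directory from the preceding prerequisite:
$ ls -Zd <backing_directory_path>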
Procedure
Create the HostPathProvisioner custom resource file. For example:
$ touch hostpathprovisioner_cr.yaml
Edit the file, ensuring that the spec.pathConfig.path value is the directory where you want the hostpath provisioner to create PVs. For example:
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "<backing_directory_path>" (1)
    useNamingPrefix: false (2)
  workload: (3)
1 Specify the backing directory where you want the provisioner to create PVs. This directory must not be located in the filesystem’s root directory (/).
2 Change this value to true if you want to use the name of the persistent volume claim (PVC) that is bound to the created PV as the prefix of the directory name.
3 Optional: You can use the spec.workload field to configure node placement rules for the hostpath provisioner.
If you did not create the backing directory, the provisioner attempts to create it for you. If you did not apply the container_file_t SELinux context, this can cause Permission denied errors.
Create the custom resource in the openshift-cnv namespace:
$ oc create -f hostpathprovisioner_cr.yaml -n openshift-cnv
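Optionally, you can verify that the hostpath provisioner was deployed. The following commands list the DaemonSet and its pods in the openshift-cnv namespace; the exact resource names depend on your installation:
$ oc get daemonset -n openshift-cnv
$ oc get pods -n openshift-cnv | grep hostpath-provisioner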
Creating a storage class
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class.
When using OKD Virtualization with OKD Container Storage, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. With virtual machine disks, RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs. To specify RBD block mode PVCs, use the ocs-storagecluster-ceph-rbd storage class and set volumeMode: Block.
You cannot update a StorageClass object after you create it.
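For illustration only, a PVC that requests an RBD block mode volume might look like the following example. The claim name and size are placeholders:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-rbd-block   # placeholder name
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  volumeMode: Block         # block mode, rather than the Filesystem default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi         # placeholder size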
Procedure
Create a YAML file for defining the storage class. For example:
$ touch storageclass.yaml
Edit the file. For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-provisioner (1)
provisioner: kubevirt.io/hostpath-provisioner
reclaimPolicy: Delete (2)
volumeBindingMode: WaitForFirstConsumer (3)
1 You can optionally rename the storage class by changing this value.
2 The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the storage class defaults to Delete.
3 The volumeBindingMode value determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a PV until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod’s scheduling requirements.
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the PVC to a PV on the correct node. By using a StorageClass with volumeBindingMode set to WaitForFirstConsumer, the binding and provisioning of the PV is delayed until a Pod is created that uses the PVC.
Create the StorageClass object:
$ oc create -f storageclass.yaml
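As an optional check, you can confirm that the storage class exists:
$ oc get storageclass hostpath-provisioner
For illustration only, a PVC that uses this storage class might look like the following example; the claim name and size are placeholders. Because of WaitForFirstConsumer, the PVC remains in the Pending state until a pod that uses it is created:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hostpath-example-pvc   # placeholder name
spec:
  storageClassName: hostpath-provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi            # placeholder size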