VMware vSphere CSI Driver Operator
Overview
OKD can provision persistent volumes (PVs) using the Container Storage Interface (CSI) VMware vSphere driver for Virtual Machine Disk (VMDK) volumes.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.
To create CSI-provisioned persistent volumes (PVs) that mount to vSphere storage assets, OKD installs the vSphere CSI Driver Operator and the vSphere CSI driver by default in the openshift-cluster-csi-drivers namespace.
vSphere CSI Driver Operator: The Operator provides a storage class, called thin-csi, that you can use to create persistent volume claims (PVCs). The vSphere CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage. You can disable this default storage class if desired (see Managing the default storage class).
vSphere CSI driver: The driver enables you to create and mount vSphere PVs. In OKD 4.14, the driver version is 3.0.2. The vSphere CSI driver supports all of the file systems supported by the underlying Red Hat Enterprise Linux CoreOS (RHCOS) release, including XFS and Ext4. For more information about supported file systems, see Overview of available file systems.
OKD defaults to using the CSI plugin to provision vSphere storage.
The vSphere CSI Driver supports dynamic and static provisioning. When using static provisioning in the PV specifications, do not use the key storage.kubernetes.io/csiProvisionerIdentity in csi.volumeAttributes, because this key indicates dynamically provisioned PVs.
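If you want thin-csi to stop acting as the default storage class, one generic Kubernetes approach is to set its is-default-class annotation to false. This is a sketch only; the Managing the default storage class documentation describes the supported procedure:
~ $ oc patch storageclass thin-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'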
About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OKD users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
vSphere storage policy
The vSphere CSI Driver Operator storage class uses vSphere’s storage policy. OKD automatically creates a storage policy that targets the datastore configured in the cloud configuration:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin-csi
provisioner: csi.vsphere.vmware.com
parameters:
  StoragePolicyName: "$openshift-storage-policy-xxxx"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false
reclaimPolicy: Delete
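You can inspect the storage class that the Operator created on a running cluster; the output should resemble the preceding example:
~ $ oc get storageclass thin-csi -o yaml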
ReadWriteMany vSphere volume support
If the underlying vSphere environment supports the vSAN file service, then the vSphere Container Storage Interface (CSI) Driver Operator installed by OKD supports provisioning of ReadWriteMany (RWX) volumes. If the vSAN file service is not configured, then ReadWriteOnce (RWO) is the only access mode available. If you do not have the vSAN file service configured and you request RWX, the volume fails to be created and an error is logged.
For more information about configuring the vSAN file service in your environment, see vSAN File Service.
You can request RWX volumes by making the following persistent volume claim (PVC):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  resources:
    requests:
      storage: 1Gi
  accessModes:
  - ReadWriteMany
  storageClassName: thin-csi
Requesting a PVC of the RWX volume type should result in provisioning of persistent volumes (PVs) backed by the vSAN file service.
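As an illustration only, with a hypothetical pod name and example image, multiple pods can mount an RWX claim such as myclaim at the same time:
apiVersion: v1
kind: Pod
metadata:
  name: rwx-demo                                          # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal    # example image
    command: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /data
      name: shared-data
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: myclaim                                  # the RWX PVC from the preceding example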
VMware vSphere CSI Driver Operator requirements
To install the vSphere CSI Driver Operator, the following requirements must be met:
VMware vSphere version 7.0 Update 2 or later
vCenter 7.0 Update 2 or later
Virtual machines of hardware version 15 or later
No third-party vSphere CSI driver already installed in the cluster
If a third-party vSphere CSI driver is present in the cluster, OKD does not overwrite it. The presence of a third-party vSphere CSI driver prevents OKD from updating to OKD 4.13 or later.
The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest.
To remove a third-party CSI driver, see Removing a third-party vSphere CSI Driver.
Removing a third-party vSphere CSI Driver Operator
OKD 4.10, and later, includes a built-in version of the vSphere Container Storage Interface (CSI) Driver Operator that is supported by Red Hat. If you have installed a vSphere CSI driver provided by the community or another vendor, updates to the next major version of OKD, such as 4.13 or later, might be disabled for your cluster.
OKD 4.12, and later, clusters are still fully supported, and updates to z-stream releases of 4.12, such as 4.12.z, are not blocked, but you must correct this state by removing the third-party vSphere CSI driver before updates to the next major version of OKD can occur. Removing the third-party vSphere CSI driver does not require deletion of associated persistent volume (PV) objects, and no data loss should occur.
These instructions may not be complete, so consult the vendor or community provider uninstall guide to ensure removal of the driver and components.
To uninstall the third-party vSphere CSI Driver:
Delete the third-party vSphere CSI Driver (VMware vSphere Container Storage Plugin) Deployment and DaemonSet objects.
Delete the configmap and secret objects that were installed previously with the third-party vSphere CSI Driver.
Delete the third-party vSphere CSI driver CSIDriver object:
~ $ oc delete CSIDriver csi.vsphere.vmware.com
csidriver.storage.k8s.io "csi.vsphere.vmware.com" deleted
After you have removed the third-party vSphere CSI Driver from the OKD cluster, installation of Red Hat’s vSphere CSI Driver Operator automatically resumes, and any conditions that could block upgrades to OKD 4.11, or later, are automatically removed. If you had existing vSphere CSI PV objects, their lifecycle is now managed by Red Hat’s vSphere CSI Driver Operator.
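As an optional check, not part of the official procedure, you can confirm that the Red Hat-managed driver pods are running and that the default storage class exists:
~ $ oc get pods -n openshift-cluster-csi-drivers
~ $ oc get storageclass thin-csi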
vSphere persistent disks encryption
You can encrypt virtual machines (VMs) and dynamically provisioned persistent volumes (PVs) on OKD running on top of vSphere.
OKD does not support RWX-encrypted PVs. You cannot request RWX PVs out of a storage class that uses an encrypted storage policy.
You must encrypt VMs before you can encrypt PVs, which you can do during installation or postinstallation.
For information about encrypting VMs, see:
After encrypting VMs, you can configure a storage class that supports dynamic encryption volume provisioning using the vSphere Container Storage Interface (CSI) driver. You can accomplish this in one of two ways:
Datastore URL: This approach is not very flexible, and forces you to use a single datastore. It also does not support topology-aware provisioning.
Tag-based placement: Encrypts the provisioned volumes and uses tag-based placement to target specific datastores.
Using datastore URL
Procedure
To encrypt using the datastore URL:
Find out the name of the default storage policy in your datastore that supports encryption.
This is the same policy that was used for encrypting your VMs.
Create a storage class that uses this storage policy:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: encryption
provisioner: csi.vsphere.vmware.com
parameters:
  storagePolicyName: <storage-policy-name> (1)
  datastoreurl: "ds:///vmfs/volumes/vsan:522e875627d-b090c96b526bb79c/"
1 Name of the default storage policy in your datastore that supports encryption.
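A minimal sketch of a claim that consumes this storage class follows; the claim name encrypted-claim is illustrative:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: encrypted-claim      # illustrative name
spec:
  accessModes:
  - ReadWriteOnce            # RWX is not supported with encrypted storage policies
  resources:
    requests:
      storage: 10Gi
  storageClassName: encryption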
Using tag-based placement
Procedure
To encrypt using tag-based placement:
In vCenter, create a category for tagging datastores that will be made available to this storage class. Also, ensure that StoragePod (Datastore clusters), Datastore, and Folder are selected as Associable Entities for the created category.
In vCenter, create a tag that uses the category created earlier.
Assign the previously created tag to each datastore that will be made available to the storage class. Make sure that datastores are shared with hosts participating in the OKD cluster.
In vCenter, from the main menu, click Policies and Profiles.
On the Policies and Profiles page, in the navigation pane, click VM Storage Policies.
Click CREATE.
Type a name for the storage policy.
Select Enable host based rules and Enable tag based placement rules.
On the next pages:
Select Encryption and Default Encryption Properties.
Select the tag category created earlier, and select the tag that you created. Verify that the policy is selecting the matching datastores.
Create the storage policy.
Create a storage class that uses the storage policy:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-encrypted
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  storagePolicyName: <storage-policy-name> (1)
1 Name of the storage policy that you created for encryption
vSphere CSI topology overview
OKD provides the ability to deploy OKD on vSphere in different zones and regions, which allows you to deploy over multiple compute clusters and datacenters, thus helping to avoid a single point of failure.
This is accomplished by defining zone and region categories in vCenter, and then assigning these categories to different failure domains, such as a compute cluster, by creating tags for these zone and region categories. After you have created the appropriate categories, and assigned tags to vCenter objects, you can create additional machine sets that create virtual machines (VMs) that are responsible for scheduling pods in those failure domains.
The following example defines two failure domains with one region and two zones:
Compute cluster | Failure domain | Description |
---|---|---|
Compute cluster: ocp1, Datacenter: Atlanta | openshift-region: us-east-1 (tag), openshift-zone: us-east-1a (tag) | This defines a failure domain in region us-east-1 with zone us-east-1a. |
Compute cluster: ocp2, Datacenter: Atlanta | openshift-region: us-east-1 (tag), openshift-zone: us-east-1b (tag) | This defines a different failure domain within the same region, called us-east-1b. |
Creating vSphere storage topology during installation
Procedure
- Specify the topology during installation. See the Configuring regions and zones for a VMware vCenter section.
No additional action is necessary. The default storage class that is created by OKD is topology aware and should allow provisioning of volumes in different failure domains.
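For orientation only, the following sketch shows how the failure domains from the preceding table might be declared under platform.vsphere in install-config.yaml. All names, paths, and the network value are placeholders; the authoritative format is described in the Configuring regions and zones for a VMware vCenter section:
platform:
  vsphere:
    failureDomains:
    - name: us-east-1a
      region: us-east-1
      zone: us-east-1a
      server: vcenter.example.com                      # placeholder vCenter server
      topology:
        datacenter: Atlanta
        computeCluster: "/Atlanta/host/ocp1"
        networks:
        - VM_Network                                   # placeholder port group
        datastore: "/Atlanta/datastore/ocp1-datastore"
    - name: us-east-1b
      region: us-east-1
      zone: us-east-1b
      server: vcenter.example.com
      topology:
        datacenter: Atlanta
        computeCluster: "/Atlanta/host/ocp2"
        networks:
        - VM_Network
        datastore: "/Atlanta/datastore/ocp2-datastore"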
Additional resources
Creating vSphere storage topology postinstallation
Procedure
In the VMware vCenter vSphere client GUI, define appropriate zone and region categories and tags.
While vSphere allows you to create categories with any arbitrary name, OKD strongly recommends use of the openshift-region and openshift-zone names for defining topology categories.
For more information about vSphere categories and tags, see the VMware vSphere documentation.
In OKD, create failure domains. See the Specifying multiple regions and zones for your cluster on vSphere section.
Create a tag to assign to datastores across failure domains (a govc CLI sketch follows this list):
When an OKD cluster spans more than one failure domain, the datastore might not be shared across those failure domains, which is where topology-aware provisioning of persistent volumes (PVs) is useful.
In vCenter, create a category for tagging the datastores. For example, openshift-zonal-datastore-cat. You can use any other category name, provided the category is used uniquely for tagging datastores participating in the OKD cluster. Also, ensure that StoragePod, Datastore, and Folder are selected as Associable Entities for the created category.
In vCenter, create a tag that uses the previously created category. This example uses the tag name openshift-zonal-datastore.
Assign the previously created tag (in this example, openshift-zonal-datastore) to each datastore in a failure domain that would be considered for dynamic provisioning.
You can use any names you like for datastore categories and tags. The names used in this example are provided as recommendations. Ensure that the tags and categories that you define uniquely identify only datastores that are shared with all hosts in the OKD cluster.
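If you prefer the command line over the vCenter GUI, the tagging steps can typically be performed with the govc CLI. The following is a sketch only; it assumes govc is installed and authenticated against your vCenter, and the datastore inventory path is a placeholder:
# Create a category whose associable types include StoragePod, Datastore, and Folder
govc tags.category.create -t StoragePod -t Datastore -t Folder openshift-zonal-datastore-cat
# Create a tag in that category
govc tags.create -c openshift-zonal-datastore-cat openshift-zonal-datastore
# Attach the tag to each participating datastore
govc tags.attach openshift-zonal-datastore /Atlanta/datastore/ocp1-datastore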
As needed, create a storage policy that targets the tag-based datastores in each failure domain:
In vCenter, from the main menu, click Policies and Profiles.
On the Policies and Profiles page, in the navigation pane, click VM Storage Policies.
Click CREATE.
Type a name for the storage policy.
For the rules, choose Tag Placement rules and select the tag and category that targets the desired datastores (in this example, the openshift-zonal-datastore tag).
The datastores are listed in the storage compatibility table.
Create a new storage class that uses the new zoned storage policy:
Click Storage > StorageClasses.
On the StorageClasses page, click Create StorageClass.
Type a name for the new storage class in Name.
Under Provisioner, select csi.vsphere.vmware.com.
Under Additional parameters, for the StoragePolicyName parameter, set Value to the name of the new zoned storage policy that you created earlier.
Click Create.
Example output
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: zoned-sc (1)
provisioner: csi.vsphere.vmware.com
parameters:
  StoragePolicyName: zoned-storage-policy (2)
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
1 New topology-aware storage class name. 2 Specify the zoned storage policy.
You can also create the storage class by editing the preceding YAML file and running the command oc create -f $FILE.
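For illustration, with a hypothetical claim name, a PVC against this storage class is provisioned in the failure domain of the node that first consumes it, because the storage class uses volumeBindingMode: WaitForFirstConsumer:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zoned-claim          # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: zoned-sc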
Additional resources
Creating vSphere storage topology without an infra topology
OKD recommends using the infrastructure object for specifying failure domains in a topology-aware setup. Do not specify failure domains in the infrastructure object and topology-categories in the clusterCSIDriver object at the same time.
Procedure
In the VMware vCenter vSphere client GUI, define appropriate zone and region categories and tags.
While vSphere allows you to create categories with any arbitrary name, OKD strongly recommends use of the openshift-region and openshift-zone names for defining topology.
For more information about vSphere categories and tags, see the VMware vSphere documentation.
To allow the container storage interface (CSI) driver to detect this topology, edit the clusterCSIDriver object YAML file driverConfig section:
Specify the openshift-zone and openshift-region categories that you created earlier.
Set driverType to vSphere.
~ $ oc edit clustercsidriver csi.vsphere.vmware.com -o yaml
Example output
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: csi.vsphere.vmware.com
spec:
  logLevel: Normal
  managementState: Managed
  observedConfig: null
  operatorLogLevel: Normal
  unsupportedConfigOverrides: null
  driverConfig:
    driverType: vSphere (1)
    vSphere:
      topologyCategories: (2)
      - openshift-zone
      - openshift-region
1 Ensure that driverType is set to vSphere. 2 openshift-zone and openshift-region categories created earlier in vCenter.
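As a non-interactive alternative to oc edit, the same change can be sketched with oc patch; the driver name and field paths are the ones shown in the preceding example:
~ $ oc patch clustercsidriver csi.vsphere.vmware.com --type=merge -p '{"spec":{"driverConfig":{"driverType":"vSphere","vSphere":{"topologyCategories":["openshift-zone","openshift-region"]}}}}'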
Verify that the CSINode object has topology keys by running the following commands:
~ $ oc get csinode
Example output
NAME DRIVERS AGE
co8-4s88d-infra-2m5vd 1 27m
co8-4s88d-master-0 1 70m
co8-4s88d-master-1 1 70m
co8-4s88d-master-2 1 70m
co8-4s88d-worker-j2hmg 1 47m
co8-4s88d-worker-mbb46 1 47m
co8-4s88d-worker-zlk7d 1 47m
~ $ oc get csinode co8-4s88d-worker-j2hmg -o yaml
Example output
...
spec:
  drivers:
  - allocatable:
      count: 59
    name: csi.vsphere.vmware.com
    nodeID: co8-4s88d-worker-j2hmg
    topologyKeys: (1)
    - topology.csi.vmware.com/openshift-zone
    - topology.csi.vmware.com/openshift-region
1 Topology keys from the vSphere openshift-zone and openshift-region categories.
CSINode objects might take some time to receive updated topology information. After the driver is updated, CSINode objects should have topology keys in them.
Create a tag to assign to datastores across failure domains:
When an OKD cluster spans more than one failure domain, the datastore might not be shared across those failure domains, which is where topology-aware provisioning of persistent volumes (PVs) is useful.
In vCenter, create a category for tagging the datastores. For example, openshift-zonal-datastore-cat. You can use any other category name, provided the category is used uniquely for tagging datastores participating in the OKD cluster. Also, ensure that StoragePod, Datastore, and Folder are selected as Associable Entities for the created category.
In vCenter, create a tag that uses the previously created category. This example uses the tag name openshift-zonal-datastore.
Assign the previously created tag (in this example, openshift-zonal-datastore) to each datastore in a failure domain that would be considered for dynamic provisioning.
You can use any names you like for categories and tags. The names used in this example are provided as recommendations. Ensure that the tags and categories that you define uniquely identify only datastores that are shared with all hosts in the OKD cluster.
Create a storage policy that targets the tag-based datastores in each failure domain:
In vCenter, from the main menu, click Policies and Profiles.
On the Policies and Profiles page, in the navigation pane, click VM Storage Policies.
Click CREATE.
Type a name for the storage policy.
For the rules, choose Tag Placement rules and select the tag and category that targets the desired datastores (in this example, the openshift-zonal-datastore tag).
The datastores are listed in the storage compatibility table.
Create a new storage class that uses the new zoned storage policy:
Click Storage > StorageClasses.
On the StorageClasses page, click Create StorageClass.
Type a name for the new storage class in Name.
Under Provisioner, select csi.vsphere.vmware.com.
Under Additional parameters, for the StoragePolicyName parameter, set Value to the name of the new zoned storage policy that you created earlier.
Click Create.
Example output
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: zoned-sc (1)
provisioner: csi.vsphere.vmware.com
parameters:
  StoragePolicyName: zoned-storage-policy (2)
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
1 New topology-aware storage class name. 2 Specify the zoned storage policy.
You can also create the storage class by editing the preceding YAML file and running the command oc create -f $FILE.
Additional resources
Results
Persistent volume claims (PVCs) and PVs created from the topology-aware storage class are truly zonal, and should use the datastore in their respective zone depending on how pods are scheduled:
~ $ oc get pv <pv-name> -o yaml
Example output
...
nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: topology.csi.vmware.com/openshift-zone (1)
        operator: In
        values:
        - <openshift-zone>
      - key: topology.csi.vmware.com/openshift-region (1)
        operator: In
        values:
        - <openshift-region>
...
persistentVolumeReclaimPolicy: Delete
storageClassName: <zoned-storage-class-name> (2)
volumeMode: Filesystem
...
1 | PV has zoned keys. |
2 | PV is using the zoned storage class. |
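To view only the topology constraints of a PV, you can use a jsonpath query; this is a convenience sketch that assumes only the fields shown in the preceding output:
~ $ oc get pv <pv-name> -o jsonpath='{.spec.nodeAffinity.required.nodeSelectorTerms}'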