Configuring CSI volumes
The Container Storage Interface (CSI) allows OKD to consume storage from storage back ends that implement the CSI interface as persistent storage.
OKD 4 supports version 1.6.0 of the CSI specification.
CSI architecture
CSI drivers are typically shipped as container images. These containers are not aware of the OKD cluster where they run. To use a CSI-compatible storage back end in OKD, the cluster administrator must deploy several components that serve as a bridge between OKD and the storage driver.
The following diagram provides a high-level overview of the components running in pods in the OKD cluster.
It is possible to run multiple CSI drivers for different storage back ends. Each driver needs its own external controllers deployment and a daemon set with the driver and CSI registrar.
External CSI controllers
External CSI controllers is a deployment that deploys one or more pods with five containers:

- The snapshotter container watches `VolumeSnapshot` and `VolumeSnapshotContent` objects and is responsible for the creation and deletion of `VolumeSnapshotContent` objects.
- The resizer container is a sidecar container that watches for `PersistentVolumeClaim` updates and triggers `ControllerExpandVolume` operations against a CSI endpoint if you request more storage on the `PersistentVolumeClaim` object.
- An external CSI attacher container that translates `attach` and `detach` calls from OKD to respective `ControllerPublish` and `ControllerUnpublish` calls to the CSI driver.
- An external CSI provisioner container that translates `provision` and `delete` calls from OKD to respective `CreateVolume` and `DeleteVolume` calls to the CSI driver.
- A CSI driver container.
The CSI attacher and CSI provisioner containers communicate with the CSI driver container using UNIX Domain Sockets, ensuring that no CSI communication leaves the pod. The CSI driver is not accessible from outside of the pod.
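The shared-socket pattern described above can be sketched as an abbreviated Deployment manifest. This is a hypothetical example, not a manifest shipped by OKD: the names, image references, and versions are illustrative, and the snapshotter and resizer sidecars are omitted for brevity.

```yaml
# Hypothetical, abbreviated controller Deployment for a CSI driver.
# The sidecars and the driver share a UNIX Domain Socket through an
# emptyDir volume, so no CSI communication leaves the pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-csi-controller      # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-csi-controller
  template:
    metadata:
      labels:
        app: example-csi-controller
    spec:
      containers:
      - name: csi-driver
        image: example.com/csi-driver:latest   # placeholder image
        args: ["--endpoint=unix:///csi/csi.sock"]
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      - name: csi-provisioner
        image: registry.k8s.io/sig-storage/csi-provisioner:v3.5.0   # illustrative version
        args: ["--csi-address=/csi/csi.sock"]
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      - name: csi-attacher
        image: registry.k8s.io/sig-storage/csi-attacher:v4.3.0      # illustrative version
        args: ["--csi-address=/csi/csi.sock"]
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      volumes:
      - name: socket-dir
        emptyDir: {}       # socket exists only for the lifetime of the pod
```

Because the socket lives in an `emptyDir` volume, the CSI endpoint is reachable only by containers in the same pod, which is what keeps the driver inaccessible from outside.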
The external attacher must also run for CSI drivers that do not support third-party attach and detach operations.
CSI driver daemon set
The CSI driver daemon set runs a pod on every node that allows OKD to mount storage provided by the CSI driver to the node and use it in user workloads (pods) as persistent volumes (PVs). The pod with the CSI driver installed contains the following containers:
- A CSI driver registrar, which registers the CSI driver into the `openshift-node` service running on the node. The `openshift-node` process running on the node then directly connects with the CSI driver using the UNIX Domain Socket available on the node.
- A CSI driver.

The CSI driver deployed on the node should have as few credentials to the storage back end as possible. OKD will only use the node plugin set of CSI calls, such as `NodePublish`/`NodeUnpublish` and `NodeStage`/`NodeUnstage`, if these calls are implemented.
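The registration mechanism can be sketched as an abbreviated DaemonSet manifest. This is a hypothetical example, not a manifest shipped by OKD: the names, image references, driver name (`example.csi.driver`), and versions are illustrative.

```yaml
# Hypothetical, abbreviated node DaemonSet for a CSI driver.
# The registrar advertises the driver's socket to the node service
# through the host's plugin registration directory.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-csi-node           # hypothetical name
spec:
  selector:
    matchLabels:
      app: example-csi-node
  template:
    metadata:
      labels:
        app: example-csi-node
    spec:
      containers:
      - name: csi-driver
        image: example.com/csi-driver:latest   # placeholder image
        args: ["--endpoint=unix:///csi/csi.sock"]
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      - name: csi-node-driver-registrar
        image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0   # illustrative version
        args:
        - "--csi-address=/csi/csi.sock"
        - "--kubelet-registration-path=/var/lib/kubelet/plugins/example.csi.driver/csi.sock"
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
        - name: registration-dir
          mountPath: /registration
      volumes:
      - name: socket-dir
        hostPath:
          path: /var/lib/kubelet/plugins/example.csi.driver
          type: DirectoryOrCreate
      - name: registration-dir
        hostPath:
          path: /var/lib/kubelet/plugins_registry
          type: Directory
```

The driver's socket is exposed on the host through a `hostPath` volume, which is how the node service can connect to it directly, as described above.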
CSI drivers supported by OKD
OKD installs certain CSI drivers by default, giving users storage options that are not possible with in-tree volume plugins.
To create CSI-provisioned persistent volumes that mount to these supported storage assets, OKD installs the necessary CSI driver Operator, the CSI driver, and the required storage class by default. For more details about the default namespace of the Operator and driver, see the documentation for the specific CSI Driver Operator.
The AWS EFS and GCP Filestore CSI drivers are not installed by default, and must be installed manually. For instructions on installing the AWS EFS CSI driver, see Setting up AWS Elastic File Service CSI Driver Operator. For instructions on installing the GCP Filestore CSI driver, see Google Compute Platform Filestore CSI Driver Operator.
The following table describes the CSI drivers that are installed with OKD and the CSI features they support, such as volume snapshots and resize.
| CSI driver | CSI volume snapshots | CSI cloning | CSI resize | Inline ephemeral volumes |
| --- | --- | --- | --- | --- |
| AliCloud Disk | ✅ | - | ✅ | - |
| AWS EBS | ✅ | - | ✅ | - |
| AWS EFS | - | - | - | - |
| Google Compute Platform (GCP) persistent disk (PD) | ✅ | ✅ | ✅ | - |
| GCP Filestore | ✅ | - | ✅ | - |
| IBM Power® Virtual Server Block | - | - | ✅ | - |
| IBM Cloud® Block | ✅[3] | - | ✅[3] | - |
| Microsoft Azure Disk | ✅ | ✅ | ✅ | - |
| Microsoft Azure Stack Hub | ✅ | ✅ | ✅ | - |
| Microsoft Azure File | - | - | ✅ | ✅ |
| OpenStack Cinder | ✅ | ✅ | ✅ | - |
| OpenShift Data Foundation | ✅ | ✅ | ✅ | - |
| OpenStack Manila | ✅ | - | - | - |
| Shared Resource | - | - | - | ✅ |
| VMware vSphere | ✅[1] | - | ✅[2] | - |
1. Requires vSphere version 7.0 Update 3 or later for both vCenter Server and ESXi. Does not support fileshare volumes.
2. Offline volume expansion: minimum required vSphere version is 6.7 Update 3 P06. Online volume expansion: minimum required vSphere version is 7.0 Update 2.
3. Does not support offline snapshots or resize. Volume must be attached to a running pod.
If your CSI driver is not listed in the preceding table, you must follow the installation instructions provided by your CSI storage vendor to use their supported CSI features.
Dynamic provisioning
Dynamic provisioning of persistent storage depends on the capabilities of the CSI driver and underlying storage back end. The provider of the CSI driver should document how to create a storage class in OKD and the parameters available for configuration.
The created storage class can be configured to enable dynamic provisioning.
Procedure
Create a default storage class that ensures all PVCs that do not require any special storage class are provisioned by the installed CSI driver.
# oc create -f - << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class> (1)
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: <provisioner-name> (2)
parameters:
EOF
1. The name of the storage class that will be created.
2. The name of the CSI driver that has been installed.
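With a default storage class in place, a PVC that omits `storageClassName` is provisioned by the installed CSI driver. A minimal sketch follows; the claim name and requested size are illustrative.

```yaml
# Hypothetical PVC: no storageClassName is set, so the default
# storage class (annotated is-default-class: "true") is used.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim     # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi        # illustrative size
```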
Example using the CSI driver
The following example installs a default MySQL template without any changes to the template.
Prerequisites
The CSI driver has been deployed.
A storage class has been created for dynamic provisioning.
Procedure
Create the MySQL template:
# oc new-app mysql-persistent
Example output
--> Deploying template "openshift/mysql-persistent" to project default
...
# oc get pvc
Example output
NAME    STATUS   VOLUME                                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql   Bound    kubernetes-dynamic-pv-3271ffcb4e1811e8   1Gi        RWO            cinder         3s
Volume populators
Volume populators use the `dataSource` field in a persistent volume claim (PVC) spec to create pre-populated volumes.
Volume population is currently enabled and supported as a Technology Preview feature. However, OKD does not ship with any volume populators.
Volume populators is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
For more information about volume populators, see Kubernetes volume populators.
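As an illustration, a PVC selects a populator through `dataSourceRef`, the extended form of `dataSource` that supports custom object kinds. The populator kind, API group, and names below are hypothetical; OKD does not ship a populator that serves them.

```yaml
# Hypothetical PVC referencing a third-party volume populator.
# A populator controller watching ExampleSource objects would be
# responsible for filling the volume before it is bound.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-claim              # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                   # illustrative size
  dataSourceRef:
    apiGroup: example.populator.io   # hypothetical populator API group
    kind: ExampleSource              # hypothetical source kind
    name: my-source                  # hypothetical source object
```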