GCP PD CSI Driver Operator
Overview
OKD can provision persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Cloud Platform (GCP) persistent disk (PD) storage.
GCP PD CSI Driver Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.
To create CSI-provisioned persistent volumes (PVs) that mount to GCP PD storage assets, OKD installs the GCP PD CSI Driver Operator and the GCP PD CSI driver by default in the openshift-cluster-csi-drivers namespace.
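As a quick check, you can list the pods in this namespace to confirm that the Operator and driver are running; this verification step is a suggestion, and the exact pod names vary by release:
$ oc get pods -n openshift-cluster-csi-drivers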
GCP PD CSI Driver Operator: By default, the Operator provides a storage class that you can use to create PVCs, as shown in the example after this list. You also have the option to create the GCP PD storage class as described in Persistent storage using GCE Persistent Disk.
GCP PD driver: The driver enables you to create and mount GCP PD PVs.
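For example, a minimal PVC that uses the Operator-provided storage class might look like the following sketch. The storage class name standard-csi is an assumption based on a common default; run oc get storageclass to confirm the name on your cluster:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-csi # assumed default class name; confirm with oc get storageclass
  resources:
    requests:
      storage: 10Gi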
OKD defaults to using an in-tree, or non-CSI, driver to provision GCP PD storage. This in-tree driver will be removed in a subsequent update of OKD. Volumes provisioned using the existing in-tree driver are planned for migration to the CSI driver at that time.
About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plug-ins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OKD users storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins.
GCP PD CSI driver storage class parameters
The Google Cloud Platform (GCP) persistent disk (PD) Container Storage Interface (CSI) driver uses the CSI external-provisioner sidecar as a controller. This is a separate helper container that is deployed with the CSI driver. The sidecar manages persistent volumes (PVs) by triggering the CreateVolume operation.
The GCP PD CSI driver uses the csi.storage.k8s.io/fstype parameter key to support dynamic provisioning. The following table describes all the GCP PD CSI storage class parameters that are supported by OKD.
Parameter | Values | Default | Description |
---|---|---|---|
type | pd-ssd or pd-standard | pd-standard | Allows you to choose between standard PVs or solid-state-drive PVs. |
replication-type | none or regional-pd | none | Allows you to choose between zonal or regional PVs. |
disk-encryption-kms-key | Fully qualified resource identifier for the key to use to encrypt new disks. | Empty string | Uses customer-managed encryption keys (CMEK) to encrypt new disks. |
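To illustrate these parameters, the following storage class sketch provisions SSD-backed regional PVs. The class name is a placeholder; the parameter values come from the preceding table:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gce-pd-ssd-regional # placeholder name
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: pd-ssd # solid-state-drive PVs
  replication-type: regional-pd # regional rather than zonal PVs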
Creating a custom-encrypted persistent volume
When you create a PersistentVolumeClaim object, OKD provisions a new persistent volume (PV) and creates a PersistentVolume object. You can add a custom encryption key in Google Cloud Platform (GCP) to protect the newly created PV by encrypting it.
The new PV is encrypted with customer-managed encryption keys (CMEK) by using a new or existing Google Cloud Key Management Service (KMS) key.
Prerequisites
You are logged in to a running OKD cluster.
You have created a Cloud KMS key ring and key version. Example commands for creating these resources are sketched after this list.
For more information about CMEK and Cloud KMS resources, see Using customer-managed encryption keys (CMEK).
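If you still need to create these resources, the following gcloud commands are one way to do so. The key ring name, key name, and location are placeholders:
$ gcloud kms keyrings create <key-ring> --location <location>
$ gcloud kms keys create <key> --location <location> --keyring <key-ring> --purpose encryption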
Procedure
To create a custom-encrypted PV, complete the following steps:
Create a storage class with the Cloud KMS key. The following example enables dynamic provisioning of encrypted volumes:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: csi-gce-pd-cmek
provisioner: pd.csi.storage.gke.io
volumeBindingMode: "WaitForFirstConsumer"
allowVolumeExpansion: true
parameters:
type: pd-standard
disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> (1)
1 This field must be the resource identifier for the key that is used to encrypt new disks. Values are case-sensitive. For more information about providing key ID values, see Retrieving a resource’s ID and Getting a Cloud KMS resource ID.
You cannot add the disk-encryption-kms-key parameter to an existing storage class. However, you can delete the storage class and recreate it with the same name and a different set of parameters. If you do this, the provisioner of the existing class must be pd.csi.storage.gke.io.
Save the storage class definition to a file and deploy it on your OKD cluster by using the oc apply command. Then verify that the storage class was created:
$ oc describe storageclass csi-gce-pd-cmek
Example output
Name: csi-gce-pd-cmek
IsDefaultClass: No
Annotations: None
Provisioner: pd.csi.storage.gke.io
Parameters: disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard
AllowVolumeExpansion: true
MountOptions: none
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: none
Create a file named pvc.yaml for a PVC that references the storage class that you created in the previous step:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: podpvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: csi-gce-pd-cmek
resources:
requests:
storage: 6Gi
If you marked the new storage class as default, you can omit the storageClassName field.
Apply the PVC on your cluster:
$ oc apply -f pvc.yaml
Get the status of your PVC and verify that it is created and bound to a newly provisioned PV:
$ oc get pvc
Example output
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
podpvc Bound pvc-e36abf50-84f3-11e8-8538-42010a800002 10Gi RWO csi-gce-pd-cmek 9s
If your storage class has the volumeBindingMode field set to WaitForFirstConsumer, you must create a pod to use the PVC before you can verify it, as in the sketch that follows.
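For example, the following pod sketch consumes the PVC so that binding and provisioning can complete. The pod name, image, and mount path are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: web-server # illustrative name
spec:
  containers:
    - name: web-server
      image: nginx # any image that mounts the volume works
      volumeMounts:
        - mountPath: /var/lib/www/html
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: podpvc # the PVC created earlier in this procedure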
Your CMEK-protected PV is now ready to use with your OKD cluster.