Selector-Label Volume Binding

Overview

This guide provides the steps necessary to enable binding of persistent volume claims (PVCs) to persistent volumes (PVs) via selector and label attributes. With selectors and labels, regular users can target provisioned storage using identifiers defined by a cluster administrator.

Motivation

In cases of statically provisioned storage, developers seeking persistent storage are required to know a handful of identifying attributes of a PV in order to deploy and bind a PVC. This creates several problematic situations. Regular users might have to contact a cluster administrator to either deploy the PVC or provide the PV values. PV attributes alone do not convey the intended use of the storage volumes, nor do they provide methods by which volumes can be grouped.

Selector and label attributes can be used to abstract away PV details from the user while providing cluster administrators with a way of identifying volumes by a descriptive and customizable tag. Through the selector-label method of binding, users are only required to know which labels are defined by the administrator.

The selector-label feature is currently available only for statically provisioned storage; it is not yet implemented for dynamically provisioned storage.

Deployment

This section reviews how to define and deploy PVCs.

Prerequisites

  1. A running OKD 3.3+ cluster

  2. A volume provided by a supported storage provider

  3. A user with a cluster-admin role binding

Define the Persistent Volume and Claim

  1. As the cluster-admin user, define the PV. This example uses a GlusterFS volume; see your storage provider’s documentation for its specific configuration.

    Example 1. Persistent Volume with Labels

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gluster-volume
      labels: (1)
        volume-type: ssd
        aws-availability-zone: us-east-1
    spec:
      capacity:
        storage: 2Gi
      accessModes:
        - ReadWriteMany
      glusterfs:
        endpoints: glusterfs-cluster
        path: myVol1
        readOnly: false
      persistentVolumeReclaimPolicy: Retain

    (1) A PVC whose selectors match all of a PV’s labels will be bound, assuming a PV is available.
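
    Labels can also be applied to an existing PV from the command line. As a sketch, the following command adds the same labels to an unlabeled gluster-volume (note that oc label refuses to change an existing label key unless --overwrite is passed):

    # oc label pv gluster-volume volume-type=ssd aws-availability-zone=us-east-1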
  2. Define the PVC:

    Example 2. Persistent Volume Claim with Selectors

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gluster-claim
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      selector: (1)
        matchLabels: (2)
          volume-type: ssd
          aws-availability-zone: us-east-1

    (1) Begin the selectors section.
    (2) List all labels by which the user is requesting storage; these must match all labels of the targeted PV.
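
    In addition to matchLabels, the selector field accepts set-based matchExpressions. The following is a sketch (the claim name is illustrative) of a claim that would match an ssd volume in either of two zones:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gluster-claim-east    # illustrative name
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      selector:
        matchExpressions:         # set-based requirements; all expressions must be satisfied
          - key: volume-type
            operator: In
            values:
              - ssd
          - key: aws-availability-zone
            operator: In
            values:
              - us-east-1
              - us-east-2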

Optional: Bind a PVC to a specific PV

A PVC that specifies neither a PV name nor a selector can bind to any available PV that satisfies its storage request and access modes.

To bind a PVC to a specific PV as a cluster administrator:

  • Use pvc.spec.volumeName if you know the PV name, as shown in the sketch after this list.

  • Use pvc.spec.selector if you know the PV labels.

    By specifying a selector, the PVC requires the PV to have specific labels.
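
For example, a minimal claim that targets the PV from Example 1 by name might look like the following sketch (the PV must still satisfy the claim’s storage request and access modes):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: gluster-claim
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 1Gi
    volumeName: gluster-volume    # bind only to this specific PV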

Optional: Reserve a PV to a specific PVC

To reserve a PV for a specific use, you have two options: create a dedicated storage class, or pre-bind the PV to your PVC.

  1. Request a specific storage class for the PV by specifying the storage class’s name.

    The following resource shows the required values that you use to configure a StorageClass. This example uses the AWS ElasticBlockStore (EBS) object definition.

    Example 3. StorageClass definition for EBS

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: kafka
    provisioner: kubernetes.io/aws-ebs
    ...
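
    A claim can then request this class by name. The following is a minimal sketch, assuming a cluster version that supports the spec.storageClassName field (older releases used the volume.beta.kubernetes.io/storage-class annotation instead):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: kafka-broker01
    spec:
      storageClassName: kafka    # must match the StorageClass name above
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 15Gi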

    If necessary in a multi-tenant environment, use a quota definition to reserve the storage class and PV(s) only to a specific namespace.
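
    As a sketch of such a quota (the namespace name is illustrative, and per-StorageClass quota scoping requires a newer cluster version), granting storage for the kafka class only in the intended namespace, and setting it to 0 elsewhere, restricts which tenants can claim against it:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: kafka-storage-quota
      namespace: mktg-ops    # illustrative tenant namespace
    spec:
      hard:
        # total storage requested by PVCs of the kafka class in this namespace
        kafka.storageclass.storage.k8s.io/requests.storage: 30Gi
        # number of PVCs of the kafka class allowed in this namespace
        kafka.storageclass.storage.k8s.io/persistentvolumeclaims: "2"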

  2. Pre-bind the PV to your PVC by specifying the PVC’s namespace and name in the PV’s claimRef. A PV defined this way binds only to the specified PVC and to nothing else, as shown in the following example:

    Example 4. claimRef in PV definition

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mktg-ops--kafka--kafka-broker01
    spec:
      capacity:
        storage: 15Gi
      accessModes:
        - ReadWriteOnce
      claimRef:
        apiVersion: v1
        kind: PersistentVolumeClaim
        name: kafka-broker01
        namespace: default
    ...
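
    The claim side needs no claimRef-specific configuration: an ordinary PVC named kafka-broker01 in the default namespace is the only claim this PV will accept. A minimal sketch:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: kafka-broker01    # must match claimRef.name above
      namespace: default      # must match claimRef.namespace above
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 15Gi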

Deploy the Persistent Volume and Claim

As the cluster-admin user, create the persistent volume:

Example 5. Create the Persistent Volume

  # oc create -f gluster-pv.yaml
  persistentvolume "gluster-volume" created
  # oc get pv
  NAME             LABELS   CAPACITY     ACCESSMODES   STATUS      CLAIM   REASON   AGE
  gluster-volume   map[]    2147483648   RWX           Available                    2s

Once the PV is created, any user whose PVC selectors match all of the PV’s labels can create and bind their PVC.

Example 6. Create the Persistent Volume Claim

  # oc create -f gluster-pvc.yaml
  persistentvolumeclaim "gluster-claim" created
  # oc get pvc
  NAME            LABELS   STATUS   VOLUME
  gluster-claim            Bound    gluster-volume