Persistent Storage Configuration

Overview

Persistent volumes are required for installing KubeSphere. KubeKey lets you install KubeSphere on different storage systems through its add-on mechanism. The general steps to install KubeSphere with KubeKey on Linux are:

  1. Install Kubernetes.
  2. Install the add-on plugin for KubeSphere.
  3. Install KubeSphere with ks-installer.

In KubeKey configurations, spec.persistence.storageClass of ClusterConfiguration needs to be set so that ks-installer can create a PersistentVolumeClaim (PVC) for KubeSphere. If it is empty, the default StorageClass (the one whose annotation storageclass.kubernetes.io/is-default-class is set to true) will be used.

  apiVersion: installer.kubesphere.io/v1alpha1
  kind: ClusterConfiguration
  spec:
    persistence:
      storageClass: ""
    ...

Therefore, an available StorageClass must be installed in Step 2 above. This includes:

  • StorageClass itself
  • Storage Plugin for the StorageClass if necessary

This tutorial introduces KubeKey add-on configurations for some commonly used storage plugins. If spec.persistence.storageClass is empty, the default StorageClass will be installed. Refer to the following sections if you want to configure other storage systems.
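
For reference, the add-on configurations shown in the following sections are placed under the addons field of the KubeKey cluster configuration file. The sketch below only illustrates that placement; it assumes the KubeKey v1alpha1 Cluster object and a file generated as config-sample.yaml, and the add-on name, repository, and values path are hypothetical placeholders to be replaced with one of the add-ons described below.

  apiVersion: kubekey.kubesphere.io/v1alpha1   # assumed KubeKey Cluster API version
  kind: Cluster
  metadata:
    name: sample
  spec:
    hosts:
    # ... node and role definitions omitted ...
    addons:
    - name: example-storage-addon              # hypothetical; replace with an add-on from the sections below
      namespace: kube-system
      sources:
        chart:
          name: example-storage-addon          # hypothetical chart name
          repo: https://charts.example.com     # hypothetical chart repository
          values: /root/example-values.yaml    # hypothetical chart values file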

QingCloud CSI

If you plan to install KubeSphere on QingCloud, you can choose QingCloud CSI as the underlying storage plugin. The following is an example of KubeKey add-on configurations for QingCloud CSI installed by Helm Charts, including a StorageClass.

Chart Config

  config:
    qy_access_key_id: "MBKTPXWCIRIEDQYQKXYL" # Replace it with your own key ID.
    qy_secret_access_key: "cqEnHYZhdVCVif9qCUge3LNUXG1Cb9VzKY2RnBdX" # Replace it with your own access key.
    zone: "pek3a" # Lowercase letters only.
  sc:
    isDefaultClass: true # Set it as the default storage class.

You need to create this chart configuration file and enter the values above manually.

Key

To get the values for qy_access_key_id and qy_secret_access_key, log in to the QingCloud web console and refer to the image below to create a key first. Download the key after it is created; it is stored in a CSV file.

[Image: access-key]

Zone

The field zone specifies where your cloud volumes are deployed. On QingCloud Platform, you must select a zone before you create volumes.

[Image: storage-zone]

Make sure the value you specify for zone matches one of the region IDs below:

  Zone                                          Region ID
  Shanghai1-A/Shanghai1-B                       sh1a/sh1b
  Beijing3-A/Beijing3-B/Beijing3-C/Beijing3-D   pek3a/pek3b/pek3c/pek3d
  Guangdong2-A/Guangdong2-B                     gd2a/gd2b
  Asia-Pacific 2-A                              ap2a

If you want to configure more values, see chart configuration for QingCloud CSI.

Add-on Config

Save the above chart config locally (e.g. /root/csi-qingcloud.yaml). The add-on config for QingCloud CSI can then be as follows:

  addons:
  - name: csi-qingcloud
    namespace: kube-system
    sources:
      chart:
        name: csi-qingcloud
        repo: https://charts.kubesphere.io/test
        values: /root/csi-qingcloud.yaml

NFS Client

With an NFS server, you can choose NFS-client Provisioner as the storage plugin. NFS-client Provisioner creates PersistentVolumes dynamically. The following is an example of KubeKey add-on configurations for NFS-client Provisioner installed by Helm Charts, including a StorageClass.

Chart Config

  nfs:
    server: "192.168.0.27" # <--ToBeReplaced-->
    path: "/mnt/csi/" # <--ToBeReplaced-->
  storageClass:
    defaultClass: false

If you want to configure more values, see chart configuration for nfs-client.

Add-on Config

Save the above chart config locally (e.g. /root/nfs-client.yaml). The add-on config for NFS-client Provisioner can be as follows:

  addons:
  - name: nfs-client
    namespace: kube-system
    sources:
      chart:
        name: nfs-client-provisioner
        repo: https://charts.kubesphere.io/main
        values: /root/nfs-client.yaml
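
Once the add-on is installed, workloads request storage through a PVC that references the StorageClass created by the chart. The sketch below is a minimal example; the StorageClass name nfs-client and the PVC name are assumptions based on the chart's default naming, so replace them with the names actually present in your cluster.

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: nfs-demo-pvc             # hypothetical name for illustration
    namespace: default
  spec:
    storageClassName: nfs-client   # assumed chart default; replace with the actual StorageClass name
    accessModes:
    - ReadWriteMany                # NFS-backed volumes support shared read-write access
    resources:
      requests:
        storage: 10Gi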

Ceph

With a Ceph server, you can choose Ceph RBD or Ceph CSI as the underlying storage plugin. Ceph RBD is an in-tree storage plugin in Kubernetes, and Ceph CSI is a Container Storage Interface (CSI) driver for RBD and CephFS.

Which Plugin to Select for Ceph

Ceph CSI RBD is the preferred choice if you work with a 14.0.0 (Nautilus)+ Ceph cluster. Here are some reasons:

  • The in-tree plugin will be deprecated in the future.
  • Ceph RBD only works on Kubernetes with hyperkube images, and hyperkube images have been deprecated since Kubernetes 1.17.
  • Ceph CSI has more features such as cloning, expanding and snapshots.

Ceph CSI RBD

Ceph CSI needs to be installed on Kubernetes v1.14.0+ and works with a 14.0.0 (Nautilus)+ Ceph cluster. For details about compatibility, see the Ceph CSI Support Matrix.

The following is an example of KubeKey add-on configurations for Ceph CSI RBD installed by Helm Charts. As the StorageClass is not included in the chart, a StorageClass needs to be configured in the add-on config.

Chart Config

  csiConfig:
  - clusterID: "cluster1"
    monitors:
    - "192.168.0.8:6789" # <--ToBeReplaced-->
    - "192.168.0.9:6789" # <--ToBeReplaced-->
    - "192.168.0.10:6789" # <--ToBeReplaced-->

If you want to configure more values, see chart configuration for ceph-csi-rbd.

StorageClass (including secret)

  apiVersion: v1
  kind: Secret
  metadata:
    name: csi-rbd-secret
    namespace: kube-system
  stringData:
    userID: admin
    userKey: "AQDoECFfYD3DGBAAm6CPhFS8TQ0Hn0aslTlovw==" # <--ToBeReplaced-->
    encryptionPassphrase: test_passphrase
  ---
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: csi-rbd-sc
    annotations:
      storageclass.beta.kubernetes.io/is-default-class: "true"
      storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]'
  provisioner: rbd.csi.ceph.com
  parameters:
    clusterID: "cluster1"
    pool: "rbd" # <--ToBeReplaced-->
    imageFeatures: layering
    csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
    csi.storage.k8s.io/provisioner-secret-namespace: kube-system
    csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
    csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
    csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
    csi.storage.k8s.io/node-stage-secret-namespace: kube-system
    csi.storage.k8s.io/fstype: ext4
  reclaimPolicy: Delete
  allowVolumeExpansion: true
  mountOptions:
    - discard

Add-On Config

Save the above chart config and StorageClass locally (e.g. /root/ceph-csi-rbd.yaml and /root/ceph-csi-rbd-sc.yaml). The add-on configuration can be set as follows:

  addons:
  - name: ceph-csi-rbd
    namespace: kube-system
    sources:
      chart:
        name: ceph-csi-rbd
        repo: https://ceph.github.io/csi-charts
        values: /root/ceph-csi-rbd.yaml
  - name: ceph-csi-rbd-sc
    sources:
      yaml:
        path:
        - /root/ceph-csi-rbd-sc.yaml
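
As a usage sketch, a PVC that references the csi-rbd-sc StorageClass defined above will have its RBD image provisioned through the secret configured there; the PVC name and requested size below are placeholders for illustration.

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: rbd-demo-pvc             # placeholder name
    namespace: default
  spec:
    storageClassName: csi-rbd-sc   # the StorageClass defined in the add-on config above
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi

Because the StorageClass sets allowVolumeExpansion: true, the requested size of such a PVC can later be increased by editing resources.requests.storage.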

Ceph RBD

KubeKey will never use hyperkube images. Hence, in-tree Ceph RBD may not work on Kubernetes installed by KubeKey. However, if your Ceph cluster is lower than 14.0.0, which means Ceph CSI cannot be used, the rbd provisioner can be used as a substitute for Ceph RBD. Its format is the same as that of in-tree Ceph RBD. The following is an example of KubeKey add-on configurations for the rbd provisioner installed by Helm Charts, including a StorageClass.

Chart Config

  ceph:
    mon: "192.168.0.12:6789" # <--ToBeReplaced-->
    adminKey: "QVFBS1JkdGRvV0lySUJBQW5LaVpSKzBRY2tjWmd6UzRJdndmQ2c9PQ==" # <--ToBeReplaced-->
    userKey: "QVFBS1JkdGRvV0lySUJBQW5LaVpSKzBRY2tjWmd6UzRJdndmQ2c9PQ==" # <--ToBeReplaced-->
  sc:
    isDefault: false

If you want to configure more values, see chart configuration for rbd-provisioner.

Add-on Config

Save the above chart config locally (e.g. /root/rbd-provisioner.yaml). The add-on config for the rbd provisioner can be as follows:

  - name: rbd-provisioner
    namespace: kube-system
    sources:
      chart:
        name: rbd-provisioner
        repo: https://charts.kubesphere.io/test
        values: /root/rbd-provisioner.yaml

Glusterfs

Glusterfs is an in-tree storage plugin in Kubernetes. Hence, only a StorageClass needs to be installed. The following is an example of KubeKey add-on configurations for glusterfs.

StorageClass (including secret)

  apiVersion: v1
  kind: Secret
  metadata:
    name: heketi-secret
    namespace: kube-system
  type: kubernetes.io/glusterfs
  data:
    key: "MTIzNDU2" # <--ToBeReplaced-->
  ---
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      storageclass.beta.kubernetes.io/is-default-class: "true"
      storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]'
    name: glusterfs
  parameters:
    clusterid: "21240a91145aee4d801661689383dcd1" # <--ToBeReplaced-->
    gidMax: "50000"
    gidMin: "40000"
    restauthenabled: "true"
    resturl: "http://192.168.0.14:8080" # <--ToBeReplaced-->
    restuser: admin
    secretName: heketi-secret
    secretNamespace: kube-system
    volumetype: "replicate:2" # <--ToBeReplaced-->
  provisioner: kubernetes.io/glusterfs
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
  allowVolumeExpansion: true

Add-on Config

Save the above StorageClass YAML locally (e.g. /root/glusterfs-sc.yaml). The add-on configuration can be set as follows:

  addons:
  - name: glusterfs
    sources:
      yaml:
        path:
        - /root/glusterfs-sc.yaml

OpenEBS/LocalVolumes

The OpenEBS Dynamic Local PV provisioner can create Kubernetes Local Persistent Volumes using a unique HostPath (directory) on each node to persist data. It is very convenient for users to get started with KubeSphere when they have no special storage system. If no default StorageClass is configured with a KubeKey add-on, OpenEBS/LocalVolumes will be installed.
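
For illustration, the StorageClass installed in this case typically looks similar to the sketch below; the exact name, annotations, and host path can differ between OpenEBS versions, so treat these values as assumptions and check the StorageClass actually present on your cluster.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: local                                # assumed default name
    annotations:
      storageclass.kubernetes.io/is-default-class: "true"
      openebs.io/cas-type: local
  provisioner: openebs.io/local                # OpenEBS Local PV provisioner
  reclaimPolicy: Delete
  volumeBindingMode: WaitForFirstConsumer      # the HostPath volume is created on the node where the pod is scheduled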

Multi-Storage

If you intend to install more than one storage plugin, please set only one of them as the default, or set spec.persistence.storageClass of ClusterConfiguration to the name of the StorageClass you want KubeSphere to use. Otherwise, ks-installer will not be able to determine which StorageClass to use.
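
For example, to pin KubeSphere to a specific StorageClass, set the field explicitly in ClusterConfiguration. The name csi-rbd-sc below is only an illustration; use the name of whichever StorageClass you installed.

  apiVersion: installer.kubesphere.io/v1alpha1
  kind: ClusterConfiguration
  spec:
    persistence:
      storageClass: "csi-rbd-sc"   # illustrative; the StorageClass KubeSphere should use
    ...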