Harvester CSI Driver

caution

A known issue in v0.1.20 of the Harvester CSI driver causes volumes to become stuck when the host cluster is running a Harvester version earlier than v1.4.0.

This issue was fixed in v0.1.21. If your system is affected, you can follow the suggested workaround.

| Harvester CSI Driver Version | Harvester Version  | Affected |
| ---------------------------- | ------------------ | -------- |
| v0.1.21 and later            | All versions       | No       |
| v0.1.20                      | v1.4.0 and later   | No       |
| v0.1.20                      | v1.3.2 and earlier | Yes      |
| v0.1.18 and earlier          | All versions       | No       |

The Harvester Container Storage Interface (CSI) Driver provides a standard CSI interface used by guest Kubernetes clusters in Harvester. It connects to the host cluster and hot-plugs host volumes to the virtual machines (VMs) to provide native storage performance.

Deploying

Prerequisites

  • The Kubernetes cluster is built on top of Harvester virtual machines.
  • The Harvester virtual machines that run as guest Kubernetes nodes are in the same namespace.

note

Currently, the Harvester CSI driver supports only single-node read-write (RWO) volumes. Follow issue #1992 for updates on multi-node read-only (ROX) and read-write (RWX) support.

Deploying with Harvester RKE1 node driver

  • Select the Harvester (Out-of-tree) option.

  • Install Harvester CSI Driver from the Rancher marketplace.

Deploying with Harvester RKE2 node driver

When you spin up a Kubernetes cluster using the Rancher RKE2 node driver, the Harvester CSI driver is deployed automatically if the Harvester cloud provider is selected.

Install the CSI driver manually in the RKE2 cluster

If you prefer to install the Harvester CSI driver without enabling the Harvester cloud provider, follow the steps below.

Prerequisites for manual installation

Ensure that you have the following prerequisites in place:

  • You have kubectl and jq installed on your system.
  • You have the kubeconfig file for your bare-metal Harvester cluster. You can find it at /etc/rancher/rke2/rke2.yaml on one of the Harvester management nodes.

    export KUBECONFIG=/path/to/your/harvester-kubeconfig
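
    To confirm that the kubeconfig works before you proceed, you can run a quick sanity check against the bare-metal Harvester cluster; the command should list the Harvester nodes:

    kubectl get nodes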

Perform the following steps to deploy the Harvester CSI driver manually:

Deploy Harvester CSI driver

  1. Generate the cloud-config. You can generate the cloud-config file using the generate_addon_csi.sh script, which is available in the harvester/harvester-csi-driver repository.

    <serviceaccount name> usually corresponds to your guest cluster name, and <namespace> should match the machine pool’s namespace.

    ./generate_addon_csi.sh <serviceaccount name> <namespace> RKE2

    The generated output is similar to the following:

    ########## cloud-config ############
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <token>
        server: https://<YOUR HOST HARVESTER VIP>:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: rke2-guest-01-default-default
      name: rke2-guest-01-default-default
    current-context: rke2-guest-01-default-default
    kind: Config
    preferences: {}
    users:
    - name: rke2-guest-01-default-default
      user:
        token: <token>
    ########## cloud-init user data ############
    write_files:
    - encoding: b64
      content: YXBpVmVyc2lvbjogdjEKY2x1c3RlcnM6Ci0gY2x1c3RlcjoKICAgIGNlcnRpZmljYXRlLWF1dGhvcml0eS1kYXRhOiBMUzB0TFMxQ1JVZEpUaUJEUlZKVVNVWkpRMEZVUlMwdExTMHRDazFKU1VKbFZFTkRRVklyWjBGM1NVSkJaMGxDUVVSQlMwSm5aM0ZvYTJwUFVGRlJSRUZxUVd0TlUwbDNTVUZaUkZaUlVVUkVRbXg1WVRKVmVVeFlUbXdLWTI1YWJHTnBNV3BaVlVGNFRtcG5NVTE2VlhoT1JGRjNUVUkwV0VSVVNYcE5SRlY1VDFSQk5VMVVRVEJOUm05WVJGUk5lazFFVlhsT2FrRTFUVlJCTUFwTlJtOTNTa1JGYVUxRFFVZEJNVlZGUVhkM1dtTnRkR3hOYVRGNldsaEtNbHBZU1hSWk1rWkJUVlJaTkU1VVRURk5WRkV3VFVSQ1drMUNUVWRDZVhGSENsTk5ORGxCWjBWSFEwTnhSMU5OTkRsQmQwVklRVEJKUVVKSmQzRmFZMDVTVjBWU2FsQlVkalJsTUhFMk0ySmxTSEZEZDFWelducGtRa3BsU0VWbFpHTUtOVEJaUTNKTFNISklhbWdyTDJab2VXUklNME5ZVURNeFZXMWxTM1ZaVDBsVGRIVnZVbGx4YVdJMGFFZE5aekpxVVdwQ1FVMUJORWRCTVZWa1JIZEZRZ292ZDFGRlFYZEpRM0JFUVZCQ1owNVdTRkpOUWtGbU9FVkNWRUZFUVZGSUwwMUNNRWRCTVZWa1JHZFJWMEpDVWpaRGEzbEJOSEZqYldKSlVESlFWVW81Q2xacWJWVTNVV2R2WjJwQlMwSm5aM0ZvYTJwUFVGRlJSRUZuVGtsQlJFSkdRV2xCZUZKNU4xUTNRMVpEYVZWTVdFMDRZazVaVWtWek1HSnBZbWxVSzJzS1kwRnhlVmt5Tm5CaGMwcHpMM2RKYUVGTVNsQnFVVzVxZEcwMVptNTZWR3AxUVVsblRuTkdibFozWkZRMldXWXpieTg0ZFRsS05tMWhSR2RXQ2kwdExTMHRSVTVFSUVORlVsUkpSa2xEUVZSRkxTMHRMUzBLCiAgICBzZXJ2ZXI6IGh0dHBzOi8vMTkyLjE2OC4wLjEzMTo2NDQzCiAgbmFtZTogZGVmYXVsdApjb250ZXh0czoKLSBjb250ZXh0OgogICAgY2x1c3RlcjogZGVmYXVsdAogICAgbmFtZXNwYWNlOiBkZWZhdWx0CiAgICB1c2VyOiBya2UyLWd1ZXN0LTAxLWRlZmF1bHQtZGVmYXVsdAogIG5hbWU6IHJrZTItZ3Vlc3QtMDEtZGVmYXVsdC1kZWZhdWx0CmN1cnJlbnQtY29udGV4dDogcmtlMi1ndWVzdC0wMS1kZWZhdWx0LWRlZmF1bHQKa2luZDogQ29uZmlnCnByZWZlcmVuY2VzOiB7fQp1c2VyczoKLSBuYW1lOiBya2UyLWd1ZXN0LTAxLWRlZmF1bHQtZGVmYXVsdAogIHVzZXI6CiAgICB0b2tlbjogZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklreGhUazQxUTBsMWFsTnRORE5TVFZKS00waE9UbGszTkV0amNVeEtjM1JSV1RoYVpUbGZVazA0YW1zaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUprWldaaGRXeDBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpXTnlaWFF1Ym1GdFpTSTZJbkpyWlRJdFozVmxjM1F0TURFdGRHOXJaVzRpTENKcmRXSmxjbTVsZEdWekxtbHZMM05sY25acFkyVmhZMk52ZFc1MEwzTmxjblpwWTJVdFlXTmpiM1Z1ZEM1dVlXMWxJam9pY210bE1pMW5kV1Z6ZEMwd01TSXNJbXQxWW1WeWJtVjBaWE11YVc4dmMyVnlkbWxqWldGalkyOTFiblF2YzJWeWRtbGpaUzFoWTJOdmRXNTBMblZwWkNJNkltTXlZak5sTldGaExUWTBNMlF0TkRkbU1pMDROemt3TFRjeU5qWXpNbVl4Wm1aaU5pSXNJbk4xWWlJNkluTjVjM1JsYlRwelpYSjJhV05sWVdOamIzVnVkRHBrWldaaGRXeDBPbkpyWlRJdFozVmxjM1F0TURFaWZRLmFRZmU1d19ERFRsSWJMYnUzWUVFY3hmR29INGY1VnhVdmpaajJDaWlhcXB6VWI0dUYwLUR0cnRsa3JUM19ZemdXbENRVVVUNzNja1BuQmdTZ2FWNDhhdmlfSjJvdUFVZC04djN5d3M0eXpjLVFsTVV0MV9ScGJkUURzXzd6SDVYeUVIREJ1dVNkaTVrRWMweHk0X0tDQ2IwRHQ0OGFoSVhnNlMwRDdJUzFfVkR3MmdEa24wcDVXUnFFd0xmSjdEbHJDOFEzRkNUdGhpUkVHZkUzcmJGYUdOMjdfamR2cUo4WXlJQVd4RHAtVHVNT1pKZUNObXRtUzVvQXpIN3hOZlhRTlZ2ZU05X29tX3FaVnhuTzFEanllbWdvNG9OSEpzekp1VWliRGxxTVZiMS1oQUxYSjZXR1Z2RURxSTlna1JlSWtkX3JqS2tyY3lYaGhaN3lTZ3o3QQo=
      owner: root:root
      path: /var/lib/rancher/rke2/etc/config-files/cloud-provider-config
      permissions: '0644'
  2. Copy and paste the cloud-init user data content to Machine Pools > Show Advanced > User Data.

    The cloud-provider-config file will be created after you apply the cloud-init user data above. You can find it on the guest Kubernetes nodes at the path /var/lib/rancher/rke2/etc/config-files/cloud-provider-config.

  3. Configure the Cloud Provider by selecting either Default - RKE2 Embedded or External.

  4. Select Create to create your RKE2 cluster.

  5. Once the RKE2 cluster is ready, install the Harvester CSI Driver chart from the Rancher marketplace. You do not need to change the cloud-config path by default.

note

If you prefer not to install the Harvester CSI driver using Rancher (Apps > Charts), you can use Helm instead. The Harvester CSI driver is packaged as a Helm chart. For more information, see https://charts.harvesterhci.io.

After completing the steps above, the CSI driver pods should be up and running in the kube-system namespace. You can verify the installation by provisioning a new PVC that uses the default StorageClass harvester on your RKE2 cluster.
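
For example, the following is a minimal PVC manifest for such a verification (the PVC name, namespace, and size are illustrative):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
      namespace: default
    spec:
      accessModes:
      - ReadWriteOnce          # single-node read-write, the mode the driver supports by default
      storageClassName: harvester
      resources:
        requests:
          storage: 1Gi

Apply the manifest with kubectl apply -f and confirm that the PVC reaches the Bound status.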

Deploying with Harvester K3s node driver

You can follow the Deploy Harvester CSI Driver steps described in the RKE2 section.

The only difference is in generating the cloud-init config, where you must specify the provider type as k3s:

  ./generate_addon_csi.sh <serviceaccount name> <namespace> k3s

Customize the Default StorageClass

The Harvester CSI driver provides the interface for defining the default StorageClass. If the default StorageClass is unspecified, the Harvester CSI driver uses the default StorageClass of the host Harvester cluster.

You can use the parameter host-storage-class to customize the default StorageClass.
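
If you install the CSI driver chart with Helm, you can set this parameter at installation time. The following is a sketch only: it assumes the chart exposes the setting as the hostStorageClass value, so verify the exact key in the chart's values.yaml before using it.

    # hostStorageClass is an assumed value key; confirm it in the chart's values.yaml.
    # replica-2 is an example StorageClass name on the host Harvester cluster.
    helm repo add harvester https://charts.harvesterhci.io
    helm install harvester-csi-driver harvester/harvester-csi-driver \
      --namespace kube-system \
      --set hostStorageClass=replica-2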

  1. Create a StorageClass for the host Harvester cluster.

  2. Deploy the CSI driver with the parameter host-storage-class.

  3. Verify that the Harvester CSI driver is ready.

    1. On the PersistentVolumeClaims screen, create a PVC. Select Use a Storage Class to provision a new Persistent Volume and specify the StorageClass you created.

    2. Once the PVC is created, note the name of the provisioned volume and verify that the status is Bound.

    3. On the Volumes screen, verify that the volume was provisioned using the StorageClass that you created.

Passthrough Custom StorageClass

Beginning with Harvester CSI driver v0.1.15, it’s possible to create a PersistentVolumeClaim (PVC) using a different Harvester StorageClass on the guest Kubernetes cluster.

note

Harvester CSI driver v0.1.15 is supported out of the box starting with the following RKE2 versions. For RKE1, manual installation of the CSI driver chart is required:

  • v1.23.16+rke2r1 and later
  • v1.24.10+rke2r1 and later
  • v1.25.6+rke2r1 and later
  • v1.26.1+rke2r1 and later
  • v1.27.1+rke2r1 and later

Prerequisites

Add the following prerequisites to your Harvester cluster to ensure that the Harvester CSI driver displays error messages correctly. Proper RBAC settings are essential for error message visibility, for example when creating a PVC that references a nonexistent StorageClass.

Follow these steps to set up RBAC for error message visibility:

  1. Create a new ClusterRole named harvesterhci.io:csi-driver using the following manifest.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        app.kubernetes.io/component: apiserver
        app.kubernetes.io/name: harvester
        app.kubernetes.io/part-of: harvester
      name: harvesterhci.io:csi-driver
    rules:
    - apiGroups:
      - storage.k8s.io
      resources:
      - storageclasses
      verbs:
      - get
      - list
      - watch
  2. Create a new ClusterRoleBinding that binds the ClusterRole above to the relevant ServiceAccount, using the following manifest.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: <namespace>-<serviceaccount name>
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: harvesterhci.io:csi-driver
    subjects:
    - kind: ServiceAccount
      name: <serviceaccount name>
      namespace: <namespace>

    Make sure the serviceaccount name and namespace match your cloud provider settings. Perform the following steps to retrieve these details.

    1. Find the rolebinding associated with your cloud provider:

      $ kubectl get rolebinding -A | grep harvesterhci.io:cloudprovider
      default   default-rke2-guest-01   ClusterRole/harvesterhci.io:cloudprovider   7d1h
    2. Extract the subjects information from this rolebinding:

      $ kubectl get rolebinding default-rke2-guest-01 -n default -o yaml | yq -e '.subjects'
    3. Identify the ServiceAccount information:

      - kind: ServiceAccount
        name: rke2-guest-01
        namespace: default
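
    After substituting the ServiceAccount name and namespace into the two manifests above, apply them to the bare-metal Harvester cluster (the file names are illustrative):

      kubectl apply -f csi-driver-clusterrole.yaml
      kubectl apply -f csi-driver-clusterrolebinding.yaml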

Deploying

Now you can create a new StorageClass that you intend to use in your guest Kubernetes cluster.

  1. As an administrator, create the desired StorageClass (for example, one named replica-2) in your bare-metal Harvester cluster.

  2. Then, on the guest Kubernetes cluster, create a new StorageClass associated with the StorageClass named replica-2 from the Harvester cluster:

    note

    • When choosing a Provisioner, select Harvester (CSI). The Host StorageClass parameter should match the StorageClass name created on the Harvester cluster.
    • For guest Kubernetes owners, you may request that the Harvester cluster administrator create a new StorageClass.
    • If you leave the Host StorageClass field empty, the default StorageClass of the Harvester cluster will be used.
  3. You can now create a PVC based on this new StorageClass, which utilizes the Host StorageClass to provision volumes on the bare-metal Harvester cluster.
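
For reference, the UI configuration above corresponds roughly to a guest-cluster StorageClass like the following sketch. It assumes the Harvester CSI provisioner name driver.harvesterhci.io and the hostStorageClass parameter; adjust the names to match your clusters:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: replica-2
    provisioner: driver.harvesterhci.io   # Harvester CSI driver
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    parameters:
      hostStorageClass: replica-2   # StorageClass that exists on the host Harvester cluster

A PVC that references this guest StorageClass is then provisioned through the host StorageClass replica-2 on the bare-metal Harvester cluster.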

RWX Volumes Support

Prerequisites

  • Harvester v1.4 or later is installed on the host cluster.

  • You have created an RWX StorageClass on the host Harvester cluster.

    On the Storage Class: Create screen, click Edit as YAML and specify the following:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: longhorn-rwx
    provisioner: driver.longhorn.io
    allowVolumeExpansion: true
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    parameters:
      numberOfReplicas: "3"
      staleReplicaTimeout: "2880"
      fromBackup: ""
      fsType: "ext4"
      nfsOptions: "vers=4.2,noresvport,softerr,timeo=600,retrans=5"

  • The role-based access control (RBAC) settings are up-to-date.

    RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions regarding access to compute and network resources.

    The Harvester CSI driver requires the new RBAC settings to support RWX volumes. To check the RBAC settings, run the command kubectl get clusterrole harvesterhci.io:csi-driver -o yaml.

    # kubectl get clusterrole harvesterhci.io:csi-driver -o yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      ...
      name: harvesterhci.io:csi-driver
      ...
    rules:
    - apiGroups:
      - storage.k8s.io
      resources:
      - storageclasses
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - harvesterhci.io
      resources:
      - networkfilesystems
      - networkfilesystems/status
      verbs:
      - '*'
    - apiGroups:
      - longhorn.io
      resources:
      - volumes
      - volumes/status
      verbs:
      - get
      - list
  • The networkfs-manager pods are running.

    To check the status of the networkfs-manager pods, run the command kubectl get pods -n harvester-system | grep networkfs-manager.

    Example:

    # kubectl get pods -n harvester-system | grep networkfs-manager
    harvester-networkfs-manager-2pxhm   1/1   Running   4 (34m ago)   3h41m
    harvester-networkfs-manager-8tst2   1/1   Running   4 (37m ago)   3h41m
    harvester-networkfs-manager-xvkgp   1/1   Running   4 (37m ago)   3h41m
  • The Harvester CSI driver version is v0.1.20 or later.

  • The NFS client is installed on each node in the guest cluster.

    Run the command that matches your distribution to install the NFS client.

    • Debian and Ubuntu: apt-get install -y nfs-common

    • CentOS and RHEL: yum install -y nfs-utils

    • SUSE and OpenSUSE: zypper install -y nfs-client

Usage

  1. Create a new StorageClass on the guest cluster.

    On the StorageClass: Create screen, add a Host Storage Class parameter and specify the RWX StorageClass that you created on the host Harvester cluster.

  2. Create an RWX PersistentVolumeClaim (PVC).

    On the PersistentVolumeClaim: Create screen, configure the following settings:

  • Volume Claim tab: Specify the new StorageClass.

  • Customize tab: Select Many Nodes Read-Write.

  3. Verify that the RWX PVC was created successfully.

  4. Create two pods.

    On the Pod: Create screen, specify the RWX PVC.

note

You can follow the same steps to create an RWX PVC on the guest cluster and then use it on pods that require RWX volumes.
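
For reference, a minimal RWX PVC and a pod that mounts it could look like the following sketch (the StorageClass name longhorn-rwx-guest and the other names are illustrative; use the guest StorageClass you created in step 1):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-data
    spec:
      accessModes:
      - ReadWriteMany               # Many Nodes Read-Write
      storageClassName: longhorn-rwx-guest
      resources:
        requests:
          storage: 2Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: rwx-test-1
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: shared
          mountPath: /data
      volumes:
      - name: shared
        persistentVolumeClaim:
          claimName: shared-data

Create a second pod with the same persistentVolumeClaim entry to confirm that both pods can mount and write to the volume.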

Upgrade the CSI Driver

Upgrade RKE2

To upgrade the CSI driver, use the Rancher UI to upgrade RKE2. Ensure that the new RKE2 version bundles the updated CSI driver version.

  1. Go to ☰ > Cluster Management.

  2. Find the guest cluster that you want to upgrade and select ⋮ > Edit Config.

  3. Select Kubernetes Version.

  4. Click Save.

Upgrade RKE and K3s

You can upgrade RKE and K3s using the Rancher UI.

  1. Go to ☰ > RKE/K3s Cluster > Apps > Installed Apps.

  2. Find the CSI driver chart and select ⋮ > Edit/Upgrade.

  3. Select Version.

  4. Select Next > Update.