Quick Start

Longhorn’s V2 Data Engine harnesses the power of the Storage Performance Development Kit (SPDK) to elevate its overall performance. The integration significantly reduces I/O latency while simultaneously boosting IOPS and throughput. The enhancement provides a high-performance storage solution capable of meeting diverse workload demands.

The V2 Data Engine is currently a PREVIEW feature and should NOT be used in a production environment. At present, a volume using the V2 Data Engine supports only the following:

  • Volume lifecycle (creation, attachment, detachment and deletion)
  • Degraded volume
  • Block disk management
  • Orphaned replica management

Additional functionality, such as replica count adjustment, online replica rebuilding, snapshots, backup, and restore, will be introduced in future versions.

This tutorial will guide you through configuring the environment and creating Kubernetes persistent volumes (PVs) and persistent volume claims (PVCs) that correspond to Longhorn volumes using the V2 Data Engine.

Prerequisites

Configure Kernel Modules and Huge Pages

For Debian and Ubuntu, install the Linux kernel extra modules before loading the kernel modules:

    apt install -y linux-modules-extra-`uname -r`

We provide a manifest that helps you configure the kernel modules and huge pages automatically, making it easier to set up.

    kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.7.1/deploy/prerequisite/longhorn-spdk-setup.yaml

You can also check the logs of the setup pods to see the installation result. Example output:

    Cloning into '/tmp/spdk'...
    INFO: Requested 1024 hugepages but 1024 already allocated on node0
    SPDK environment is configured successfully
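One way to retrieve those logs is with a kubectl label selector; the `app=longhorn-spdk-setup` label below is an assumption based on the manifest name, so verify the actual pod labels with `kubectl get pods --show-labels` first:

```
kubectl logs -l app=longhorn-spdk-setup --all-containers
```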

Or, you can install them manually by following these steps.

  • Load the kernel modules on each Longhorn node

        modprobe vfio_pci
        modprobe uio_pci_generic

  • Configure huge pages

    SPDK leverages huge pages to enhance performance and minimize memory overhead. To enable usage of huge pages, 2 MiB-sized huge pages must be configured on each Longhorn node. Specifically, 1024 pages (a total of 2 GiB) must be available on each Longhorn node.

To allocate huge pages, run the following commands on each node.

    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

To make the change permanent, add the following line to the file /etc/sysctl.conf.

    echo "vm.nr_hugepages=1024" >> /etc/sysctl.conf
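To confirm the allocation on a node, you can read the huge page counters from `/proc/meminfo` (a standard Linux interface, not specific to Longhorn):

```shell
# After the allocation above succeeds, HugePages_Total should report 1024
# and Hugepagesize should report 2048 kB.
grep -E 'HugePages_Total|HugePages_Free|Hugepagesize' /proc/meminfo
```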

Load nvme-tcp Kernel Module

We provide a manifest that helps you finish the deployment on each Longhorn node.

    kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.7.1/deploy/prerequisite/longhorn-nvme-cli-installation.yaml

Or, you can manually load the nvme-tcp kernel module on each Longhorn node:

    modprobe nvme-tcp

Load Kernel Modules Automatically on Boot

Rather than manually loading the kernel modules vfio_pci, uio_pci_generic, and nvme-tcp after each reboot, you can configure automatic module loading during the boot sequence. For detailed instructions, consult the manual provided by your operating system.
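On systemd-based distributions, one common approach is a drop-in file under `/etc/modules-load.d` listing one module name per line; the file name below is arbitrary:

```
# /etc/modules-load.d/longhorn.conf
vfio_pci
uio_pci_generic
nvme-tcp
```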

Restart kubelet

After finishing the above steps, restart kubelet on each node.
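How kubelet is restarted depends on how your cluster was deployed; on a systemd-managed kubeadm node, for example, this is typically:

```
systemctl restart kubelet
```

On K3s and RKE2, kubelet runs inside the distribution's own service, so restart `k3s` (or `rke2-server` / `rke2-agent`) instead.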

Check Environment

Using the Longhorn Command Line Tool

The longhornctl tool is a CLI for Longhorn operations. For more information, see Command Line Tool (longhornctl).

To check the prerequisites and configurations, download the tool and run the check sub-command:

    # For AMD64 platform
    curl -sSfL -o longhornctl https://github.com/longhorn/cli/releases/download/v1.7.1/longhornctl-linux-amd64
    # For ARM platform
    curl -sSfL -o longhornctl https://github.com/longhorn/cli/releases/download/v1.7.1/longhornctl-linux-arm64
    chmod +x longhornctl
    ./longhornctl check preflight --enable-spdk

Example of result:

    INFO[2024-01-01T00:00:01Z] Initializing preflight checker
    INFO[2024-01-01T00:00:01Z] Cleaning up preflight checker
    INFO[2024-01-01T00:00:01Z] Running preflight checker
    INFO[2024-01-01T00:00:02Z] Retrieved preflight checker result:
    worker1:
      error:
      - 'HugePages is insufficient. Required 2MiB HugePages: 1024 pages, Total 2MiB HugePages: 0 pages'
      - 'Module nvme_tcp is not loaded: failed to execute: nsenter [--mount=/host/proc/204896/ns/mnt --net=/host/proc/204896/ns/net grep nvme_tcp /proc/modules], output , stderr : exit status 1'
      - 'Module uio_pci_generic is not loaded: failed to execute: nsenter [--mount=/host/proc/204896/ns/mnt --net=/host/proc/204896/ns/net grep uio_pci_generic /proc/modules], output , stderr : exit status 1'
      info:
      - Service iscsid is running
      - NFS4 is supported
      - Package nfs-common is installed
      - Package open-iscsi is installed
      - CPU instruction set sse4_2 is supported
      warn:
      - multipathd.service is running. Please refer to https://longhorn.io/kb/troubleshooting-volume-with-multipath/ for more information.
Use the install sub-command to install and set up the preflight dependencies before installing Longhorn.

    master:~# ./longhornctl install preflight --enable-spdk
    INFO[2024-01-01T00:00:03Z] Initializing preflight installer
    INFO[2024-01-01T00:00:03Z] Cleaning up preflight installer
    INFO[2024-01-01T00:00:03Z] Running preflight installer
    INFO[2024-01-01T00:00:03Z] Installing dependencies with package manager
    INFO[2024-01-01T00:00:10Z] Installed dependencies with package manager
    INFO[2024-01-01T00:00:10Z] Cleaning up preflight installer
    INFO[2024-01-01T00:00:10Z] Completed preflight installer. Use 'longhornctl check preflight' to check the result.

After installing and setting up the preflight dependencies, you can run the check sub-command again to verify that all environment settings are correct.

    master:~# ./longhornctl check preflight --enable-spdk
    INFO[2024-01-01T00:00:13Z] Initializing preflight checker
    INFO[2024-01-01T00:00:13Z] Cleaning up preflight checker
    INFO[2024-01-01T00:00:13Z] Running preflight checker
    INFO[2024-01-01T00:00:16Z] Retrieved preflight checker result:
    worker1:
      info:
      - Service iscsid is running
      - NFS4 is supported
      - Package nfs-common is installed
      - Package open-iscsi is installed
      - CPU instruction set sse4_2 is supported
      - HugePages is enabled
      - Module nvme_tcp is loaded
      - Module uio_pci_generic is loaded

Using the Script

Make sure everything is correctly configured and installed by running:

    bash -c "$(curl -sfL https://raw.githubusercontent.com/longhorn/longhorn/v1.7.1/scripts/environment_check.sh)" -s -s

Installation

Install Longhorn System

Follow the steps in Quick Installation to install Longhorn system.

Enable V2 Data Engine

Enable the V2 Data Engine by changing the v2-data-engine setting to true after installation. Following this, the instance-manager pods will be automatically restarted.
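Longhorn exposes its settings as `Setting` custom resources in the `longhorn-system` namespace, so one way to change the setting from the command line is a patch like the sketch below; verify the resource and field against your installed CRD before relying on it:

```
kubectl -n longhorn-system patch settings.longhorn.io v2-data-engine \
  --type merge -p '{"value":"true"}'
```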

Or, you can enable it in Setting > General > V2 Data Engine.

CPU and Memory Usage

When the V2 Data Engine is enabled, each Instance Manager pod for the V2 Data Engine uses 1 CPU core. The high CPU usage is caused by spdk_tgt, a process running in each Instance Manager pod that handles input/output (IO) operations and requires intensive polling. spdk_tgt consumes 100% of a dedicated CPU core to efficiently manage and process the IO requests, ensuring optimal performance and responsiveness for storage operations.

    NAME                                                CPU(cores)   MEMORY(bytes)
    csi-attacher-57c5fd5bdf-jsfs4                       1m           7Mi
    csi-attacher-57c5fd5bdf-kb6dv                       1m           9Mi
    csi-attacher-57c5fd5bdf-s7fb6                       1m           7Mi
    csi-provisioner-7b95bf4b87-8xr6f                    1m           11Mi
    csi-provisioner-7b95bf4b87-v4gwb                    1m           9Mi
    csi-provisioner-7b95bf4b87-vnt58                    1m           9Mi
    csi-resizer-6df9886858-6v2ds                        1m           8Mi
    csi-resizer-6df9886858-b6mns                        1m           9Mi
    csi-resizer-6df9886858-l4vmj                        1m           8Mi
    csi-snapshotter-5d84585dd4-4dwkz                    1m           7Mi
    csi-snapshotter-5d84585dd4-km8bc                    1m           9Mi
    csi-snapshotter-5d84585dd4-kzh6w                    1m           7Mi
    engine-image-ei-b907910b-79k2s                      3m           19Mi
    instance-manager-214803c4f23376af5a75418299b12ad6   1015m        133Mi   (for V2 Data Engine)
    instance-manager-4550bbc4938ff1266584f42943b511ad   4m           15Mi    (for V1 Data Engine)
    longhorn-csi-plugin-nz94f                           1m           26Mi
    longhorn-driver-deployer-556955d47f-h5672           1m           12Mi
    longhorn-manager-2n9hd                              4m           42Mi
    longhorn-ui-58db78b68-bzzz8                         0m           2Mi
    longhorn-ui-58db78b68-ffbxr                         0m           2Mi

You can observe the utilization of allocated huge pages on each node by running the command kubectl get node <node name> -o yaml.

    # kubectl get node sles-pool1-07437316-4jw8f -o yaml
    ...
    status:
      ...
      allocatable:
        cpu: "8"
        ephemeral-storage: "203978054087"
        hugepages-1Gi: "0"
        hugepages-2Mi: 2Gi
        memory: 31813168Ki
        pods: "110"
      capacity:
        cpu: "8"
        ephemeral-storage: 209681388Ki
        hugepages-1Gi: "0"
        hugepages-2Mi: 2Gi
        memory: 32861744Ki
        pods: "110"
      ...

Add block-type Disks in Longhorn Nodes

Unlike filesystem-type disks, which are designed for legacy volumes, volumes using the V2 Data Engine are persisted on block-type disks. Therefore, you must equip Longhorn nodes with block-type disks.

Prepare disks

If there are no additional disks available on the Longhorn nodes, you can create loop block devices to test the feature. To accomplish this, execute the following command on each Longhorn node to create a 10 GiB block device.

    dd if=/dev/zero of=blockfile bs=1M count=10240
    losetup -f blockfile

To display the path of the block device that losetup attached to blockfile, use the following command.

    losetup -j blockfile

Add disks to node.longhorn.io

You can add the disk by navigating to the Node UI page, specifying Block as the Disk Type, and providing the block device's path in the Path field.

Or, edit the node.longhorn.io resource.

    kubectl -n longhorn-system edit node.longhorn.io <NODE NAME>

Add the disk to Spec.Disks:

    <DISK NAME>:
      allowScheduling: true
      evictionRequested: false
      path: /PATH/TO/BLOCK/DEVICE
      storageReserved: 0
      tags: []
      diskType: block

After a while, the disk will be displayed in Status.DiskStatus.
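You can watch for the disk to appear by re-reading the resource; in the YAML output the status field is rendered as `diskStatus`:

```
kubectl -n longhorn-system get node.longhorn.io <NODE NAME> -o yaml
```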

Application Deployment

After the installation and configuration, you can dynamically provision a persistent volume using the V2 Data Engine by following these steps.

Create a StorageClass

Run the following command to create a StorageClass named longhorn-spdk. Set parameters.dataEngine to v2 to enable the V2 Data Engine.

    kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.7.1/examples/v2/storageclass.yaml
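For reference, the applied StorageClass looks roughly like the sketch below. The `longhorn-spdk` name and the `dataEngine: "v2"` parameter come from this guide; the remaining parameter values are illustrative and may differ from the published manifest:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-spdk
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
  fsType: "ext4"
  dataEngine: "v2"
```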

Create Longhorn Volumes

Create a Pod that uses Longhorn volumes using V2 Data Engine by running this command:

    kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.7.1/examples/v2/pod_with_pvc.yaml
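If you prefer to write the resources yourself rather than apply the example manifest, a minimal PVC bound to the longhorn-spdk StorageClass would look like this sketch (the claim name and requested size are illustrative):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-spdk-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-spdk
  resources:
    requests:
      storage: 2Gi
```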

Or, if you are creating a volume using the Longhorn UI, specify v2 as the Data Engine.