- Persistent storage using logical volume manager storage
- Deploying LVM Storage on single-node OpenShift clusters
- Creating a Logical Volume Manager cluster on a single-node OpenShift worker node
- Provisioning storage using LVM Storage
- Monitoring LVM Storage
- Scaling storage of single-node OpenShift clusters
- Upgrading LVM Storage on single-node OpenShift clusters
- Volume snapshots for single-node OpenShift
- Volume cloning for single-node OpenShift
- Downloading log files and diagnostic information using must-gather
- LVM Storage reference YAML file
Persistent storage using logical volume manager storage
Logical volume manager storage (LVM Storage) uses the TopoLVM CSI driver to dynamically provision local storage on single-node OpenShift clusters.
LVM Storage creates thin-provisioned volumes using Logical Volume Manager and provides dynamic provisioning of block storage on a resource-constrained single-node OpenShift cluster.
Deploying LVM Storage on single-node OpenShift clusters
You can deploy LVM Storage on a single-node OpenShift bare-metal or user-provisioned infrastructure cluster and configure it to dynamically provision storage for your workloads.
LVM Storage creates a volume group using all the available unused disks and creates a single thin pool with a size of 90% of the volume group. The remaining 10% of the volume group is left free to enable data recovery by expanding the thin pool when required. You might need to manually perform such recovery.
You can use persistent volume claims (PVCs) and volume snapshots provisioned by LVM Storage to request storage and create volume snapshots.
LVM Storage configures a default overprovisioning limit of 10 to take advantage of the thin-provisioning feature. The total size of the volumes and volume snapshots that can be created on the single-node OpenShift clusters is 10 times the size of the thin pool.
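For example, assuming a volume group of 100 GiB (the value is only illustrative), LVM Storage creates a 90 GiB thin pool, and the default overprovisioning limit of 10 allows up to 900 GiB of volumes and volume snapshots to be provisioned against it:
volume group size:           100 GiB (example value)
thin pool size (90%):         90 GiB
overprovisioning limit:       10
maximum provisionable size:  900 GiB (90 GiB x 10)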
You can deploy LVM Storage on single-node OpenShift clusters using one of the following:
Red Hat Advanced Cluster Management (RHACM)
OKD Web Console
Requirements
Before you begin deploying LVM Storage on single-node OpenShift clusters, ensure that the following requirements are met:
You have installed Red Hat Advanced Cluster Management (RHACM) on an OKD cluster.
Every managed single-node OpenShift cluster has dedicated disks that are used to provision storage.
Before you deploy LVM Storage on single-node OpenShift clusters, be aware of the following limitations:
You can only create a single instance of the LVMCluster custom resource (CR) on an OKD cluster.
You can make only a single deviceClass entry in the LVMCluster CR.
When a device becomes part of the LVMCluster CR, it cannot be removed.
Limitations
When deploying LVM Storage on single-node OpenShift, the following limitations apply:
The total storage size is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the overprovisioning factor.
The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE).
It is possible to define the size of PE and LE during the physical and logical device creation.
The default PE and LE size is 4 MB.
If the size of the PE is increased, the maximum size of a logical volume (LV) is determined by the kernel limits and your disk space.
Architecture | RHEL 5 | RHEL 6 | RHEL 7 | RHEL 8 |
---|---|---|---|---|
32-bit | 16 TB | 16 TB | - | - |
64-bit | 8 EB [1] | 8 EB [1] 100 TB [2] | 8 EB [1] 500 TB [2] | 8 EB |
1. Theoretical size.
2. Tested size.
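As a worked example of how extent size affects allocation, a logical volume is always built from whole extents, so with the default 4 MB extent size a 10 GB request maps to 2560 extents:
extent size:        4 MB (default)
requested LV size:  10 GB = 10240 MB
extents allocated:  10240 / 4 = 2560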
Additional resources
Installing LVM Storage with the CLI
As a cluster administrator, you can install Logical volume manager storage (LVM Storage) by using the CLI.
Prerequisites
You have installed the OpenShift CLI (oc).
You have logged in as a user with cluster-admin privileges.
Procedure
Create a namespace for the LVM Storage Operator.
Save the following YAML in the lvms-namespace.yaml file:
apiVersion: v1
kind: Namespace
metadata:
labels:
openshift.io/cluster-monitoring: "true"
pod-security.kubernetes.io/enforce: privileged
pod-security.kubernetes.io/audit: privileged
pod-security.kubernetes.io/warn: privileged
name: openshift-storage
Create the Namespace CR:
$ oc create -f lvms-namespace.yaml
Create an Operator group for the LVM Storage Operator.
Save the following YAML in the lvms-operatorgroup.yaml file:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: openshift-storage-operatorgroup
namespace: openshift-storage
spec:
targetNamespaces:
- openshift-storage
Create the OperatorGroup CR:
$ oc create -f lvms-operatorgroup.yaml
Subscribe to the LVM Storage Operator.
Save the following YAML in the lvms-sub.yaml file:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: lvms
namespace: openshift-storage
spec:
installPlanApproval: Automatic
name: lvms-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
Create the Subscription CR:
$ oc create -f lvms-sub.yaml
Create the LVMCluster resource:
Save the following YAML in the lvmcluster.yaml file:
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
name: my-lvmcluster
namespace: openshift-storage
spec:
storage:
deviceClasses:
- name: vg1
deviceSelector:
paths:
- /dev/disk/by-path/pci-0000:87:00.0-nvme-1
- /dev/disk/by-path/pci-0000:88:00.0-nvme-1
thinPoolConfig:
name: thin-pool-1
sizePercent: 90
overprovisionRatio: 10
nodeSelector:
nodeSelectorTerms:
- matchExpressions:
- key: app
operator: In
values:
- test1
Create the LVMCluster CR:
$ oc create -f lvmcluster.yaml
To verify that the Operator is installed, enter the following command:
$ oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase
Example output
Name Phase
4.13.0-202301261535 Succeeded
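You can also confirm that the LVM Storage pods are running in the openshift-storage namespace. The exact pod names vary by release, so treat this as an illustrative check rather than expected output:
$ oc get pods -n openshift-storage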
Installing LVM Storage with the web console
You can install Logical volume manager storage (LVM Storage) by using the Red Hat OKD OperatorHub.
Prerequisites
You have access to the single-node OpenShift cluster.
You are using an account with cluster-admin and Operator installation permissions.
Procedure
Log in to the OKD Web Console.
Click Operators → OperatorHub.
Scroll or type LVM Storage into the Filter by keyword box to find LVM Storage.
Click Install.
Set the following options on the Install Operator page:
Update Channel as stable-4.13.
Installation Mode as A specific namespace on the cluster.
Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the Operator installation.
Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
Click Install.
Verification steps
- Verify that LVM Storage shows a green tick, indicating successful installation.
Uninstalling LVM Storage installed using the OpenShift Web Console
You can uninstall LVM Storage using the Red Hat OpenShift Container Platform Web Console.
Prerequisites
You deleted all the applications on the clusters that are using the storage provisioned by LVM Storage.
You deleted the persistent volume claims (PVCs) and persistent volumes (PVs) provisioned using LVM Storage.
You deleted all volume snapshots provisioned by LVM Storage.
You verified that no logical volume resources exist by using the oc get logicalvolume command.
You have access to the single-node OpenShift cluster using an account with cluster-admin permissions.
Procedure
From the Operators → Installed Operators page, scroll to LVM Storage or type LVM Storage into the Filter by name box to find and click it.
Click the LVMCluster tab.
On the right-hand side of the LVMCluster page, select Delete LVMCluster from the Actions drop-down menu.
Click on the Details tab.
On the right-hand side of the Operator Details page, select Uninstall Operator from the Actions drop-down menu.
Select Remove. LVM Storage stops running and is completely removed.
Installing LVM Storage in a disconnected environment
You can install LVM Storage on OKD 4.13 in a disconnected environment. All sections referenced in this procedure are linked in Additional resources.
Prerequisites
You read the About disconnected installation mirroring section.
You have access to the OKD image repository.
You created a mirror registry.
Procedure
Follow the steps in the Creating the image set configuration procedure. To create an ImageSetConfiguration resource for LVM Storage, you can use the following example YAML file:
Example ImageSetConfiguration file for LVM Storage
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
archiveSize: 4 (1)
storageConfig: (2)
registry:
imageURL: example.com/mirror/oc-mirror-metadata (3)
skipTLS: false
mirror:
platform:
channels:
- name: stable-4.13 (4)
type: ocp
graph: true (5)
operators:
- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 (6)
packages:
- name: lvms-operator (7)
channels:
- name: stable (8)
additionalImages:
- name: registry.redhat.io/ubi9/ubi:latest (9)
helm: {}
1 Add archiveSize to set the maximum size, in GiB, of each file within the image set.
2 Set the back-end location to save the image set metadata to. This location can be a registry or local directory. You must specify storageConfig values unless you are using the Technology Preview OCI feature.
3 Set the registry URL for the storage backend.
4 Set the channel to retrieve the OKD images from.
5 Add graph: true to generate the OpenShift Update Service (OSUS) graph image to allow for an improved cluster update experience when using the web console. For more information, see About the OpenShift Update Service.
6 Set the Operator catalog to retrieve the OKD images from.
7 Specify only certain Operator packages to include in the image set. Remove this field to retrieve all packages in the catalog.
8 Specify only certain channels of the Operator packages to include in the image set. You must always include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command: oc mirror list operators --catalog=<catalog_name> --package=<package_name>.
9 Specify any additional images to include in the image set.
Follow the procedure in the Mirroring an image set to a mirror registry section.
Follow the procedure in the Configuring image registry repository mirroring section.
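For reference, the oc-mirror plugin takes the image set configuration file and the target mirror registry as arguments. The file name and registry host below are placeholder values; see the Mirroring an image set to a mirror registry procedure for the exact invocation in your environment:
$ oc mirror --config=./imageset-config.yaml docker://registry.example.com:5000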
Additional resources
Installing LVM Storage using RHACM
LVM Storage is deployed on single-node OpenShift clusters using Red Hat Advanced Cluster Management (RHACM). You create a Policy
object on RHACM that deploys and configures the Operator when it is applied to managed clusters which match the selector specified in the PlacementRule
resource. The policy is also applied to clusters that are imported later and satisfy the placement rule.
Prerequisites
Access to the RHACM cluster using an account with cluster-admin and Operator installation permissions.
Dedicated disks on each single-node OpenShift cluster to be used by LVM Storage.
The single-node OpenShift cluster needs to be managed by RHACM, either imported or created.
Procedure
Log in to the RHACM CLI using your OKD credentials.
Create a namespace in which you will create policies.
# oc create ns lvms-policy-ns
To create a policy, save the following YAML to a file with a name such as policy-lvms-operator.yaml:
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
name: placement-install-lvms
spec:
clusterConditions:
- status: "True"
type: ManagedClusterConditionAvailable
clusterSelector: (1)
matchExpressions:
- key: mykey
operator: In
values:
- myvalue
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
name: binding-install-lvms
placementRef:
apiGroup: apps.open-cluster-management.io
kind: PlacementRule
name: placement-install-lvms
subjects:
- apiGroup: policy.open-cluster-management.io
kind: Policy
name: install-lvms
---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
annotations:
policy.open-cluster-management.io/categories: CM Configuration Management
policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
policy.open-cluster-management.io/standards: NIST SP 800-53
name: install-lvms
spec:
disabled: false
remediationAction: enforce
policy-templates:
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
name: install-lvms
spec:
object-templates:
- complianceType: musthave
objectDefinition:
apiVersion: v1
kind: Namespace
metadata:
labels:
openshift.io/cluster-monitoring: "true"
pod-security.kubernetes.io/enforce: privileged
pod-security.kubernetes.io/audit: privileged
pod-security.kubernetes.io/warn: privileged
name: openshift-storage
- complianceType: musthave
objectDefinition:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: openshift-storage-operatorgroup
namespace: openshift-storage
spec:
targetNamespaces:
- openshift-storage
- complianceType: musthave
objectDefinition:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: lvms
namespace: openshift-storage
spec:
installPlanApproval: Automatic
name: lvms-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
remediationAction: enforce
severity: low
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
name: lvms
spec:
object-templates:
- complianceType: musthave
objectDefinition:
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
name: my-lvmcluster
namespace: openshift-storage
spec:
storage:
deviceClasses:
- name: vg1
default: true
deviceSelector: (2)
paths:
- /dev/disk/by-path/pci-0000:87:00.0-nvme-1
- /dev/disk/by-path/pci-0000:88:00.0-nvme-1
thinPoolConfig:
name: thin-pool-1
sizePercent: 90
overprovisionRatio: 10
nodeSelector: (3)
nodeSelectorTerms:
- matchExpressions:
- key: app
operator: In
values:
- test1
remediationAction: enforce
severity: low
1 Replace the key and value in PlacementRule.spec.clusterSelector to match the labels set on the single-node OpenShift clusters on which you want to install LVM Storage.
2 To control or restrict the volume group to your preferred disks, you can manually specify the local paths of the disks in the deviceSelector section of the LVMCluster YAML.
3 To add a node filter, which is a subset of the additional worker nodes, specify the required filter in the nodeSelector section. LVM Storage detects and uses the additional worker nodes when the new nodes show up. This nodeSelector node filter matching is not the same as the pod label matching.
Create the policy in the namespace by running the following command:
# oc create -f policy-lvms-operator.yaml -n lvms-policy-ns (1)
1 The policy-lvms-operator.yaml is the name of the file to which the policy is saved.
This creates a Policy, a PlacementRule, and a PlacementBinding object in the lvms-policy-ns namespace. The policy creates a Namespace, OperatorGroup, Subscription, and LVMCluster resource on the clusters that match the placement rule. This deploys the Operator on the single-node OpenShift clusters which match the selection criteria and configures it to set up the required resources to provision storage. The Operator uses all the disks specified in the LVMCluster CR. If no disks are specified, the Operator uses all the unused disks on the single-node OpenShift node.
After a device is added to the LVMCluster, it cannot be removed.
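To confirm that the policy was distributed and enforced, you can check its compliance state from the hub cluster; the namespace is the one in which you created the policy:
# oc get policy -n lvms-policy-ns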
Additional resources
Uninstalling LVM Storage installed using RHACM
To uninstall LVM Storage that you installed using RHACM, you need to delete the RHACM policy that you created for deploying and configuring the Operator.
When you delete the RHACM policy, the resources that the policy has created are not removed. You need to create additional policies to remove the resources.
As the created resources are not removed when you delete the policy, you need to perform the following steps:
Remove all the persistent volume claims (PVCs) and volume snapshots provisioned by LVM Storage.
Remove the LVMCluster resources to clean up Logical Volume Manager resources created on the disks.
Create an additional policy to uninstall the Operator.
Prerequisites
Ensure that the following are deleted before deleting the policy:
All the applications on the managed clusters that are using the storage provisioned by LVM Storage.
PVCs and persistent volumes (PVs) provisioned using LVM Storage.
All volume snapshots provisioned by LVM Storage.
Ensure you have access to the RHACM cluster using an account with a cluster-admin role.
Procedure
In the OpenShift CLI (oc), delete the RHACM policy that you created for deploying and configuring LVM Storage on the hub cluster by using the following command:
# oc delete -f policy-lvms-operator.yaml -n lvms-policy-ns (1)
1 The policy-lvms-operator.yaml is the name of the file to which the policy was saved.
To create a policy for removing the LVMCluster resource, save the following YAML to a file with a name such as lvms-remove-policy.yaml. This enables the Operator to clean up all Logical Volume Manager resources that it created on the cluster.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
name: policy-lvmcluster-delete
annotations:
policy.open-cluster-management.io/standards: NIST SP 800-53
policy.open-cluster-management.io/categories: CM Configuration Management
policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
spec:
remediationAction: enforce
disabled: false
policy-templates:
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
name: policy-lvmcluster-removal
spec:
remediationAction: enforce (1)
severity: low
object-templates:
- complianceType: mustnothave
objectDefinition:
kind: LVMCluster
apiVersion: lvm.topolvm.io/v1alpha1
metadata:
name: my-lvmcluster
namespace: openshift-storage (2)
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
name: binding-policy-lvmcluster-delete
placementRef:
apiGroup: apps.open-cluster-management.io
kind: PlacementRule
name: placement-policy-lvmcluster-delete
subjects:
- apiGroup: policy.open-cluster-management.io
kind: Policy
name: policy-lvmcluster-delete
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
name: placement-policy-lvmcluster-delete
spec:
clusterConditions:
- status: "True"
type: ManagedClusterConditionAvailable
clusterSelector:
matchExpressions:
- key: mykey
operator: In
values:
- myvalue
1 The policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction.
2 This namespace field must have the openshift-storage value.
Set the value of the PlacementRule.spec.clusterSelector field to select the clusters from which to uninstall LVM Storage.
Create the policy by running the following command:
# oc create -f lvms-remove-policy.yaml -n lvms-policy-ns
To create a policy to check if the LVMCluster CR has been removed, save the following YAML to a file with a name such as check-lvms-remove-policy.yaml:
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
name: policy-lvmcluster-inform
annotations:
policy.open-cluster-management.io/standards: NIST SP 800-53
policy.open-cluster-management.io/categories: CM Configuration Management
policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
spec:
remediationAction: inform
disabled: false
policy-templates:
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
name: policy-lvmcluster-removal-inform
spec:
remediationAction: inform (1)
severity: low
object-templates:
- complianceType: mustnothave
objectDefinition:
kind: LVMCluster
apiVersion: lvm.topolvm.io/v1alpha1
metadata:
name: my-lvmcluster
namespace: openshift-storage (2)
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
name: binding-policy-lvmcluster-check
placementRef:
apiGroup: apps.open-cluster-management.io
kind: PlacementRule
name: placement-policy-lvmcluster-check
subjects:
- apiGroup: policy.open-cluster-management.io
kind: Policy
name: policy-lvmcluster-inform
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
name: placement-policy-lvmcluster-check
spec:
clusterConditions:
- status: "True"
type: ManagedClusterConditionAvailable
clusterSelector:
matchExpressions:
- key: mykey
operator: In
values:
- myvalue
1 The policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction.
2 The namespace field must have the openshift-storage value.
Create the policy by running the following command:
# oc create -f check-lvms-remove-policy.yaml -n lvms-policy-ns
Check the policy status by running the following command:
# oc get policy -n lvms-policy-ns
Example output
NAME REMEDIATION ACTION COMPLIANCE STATE AGE
policy-lvmcluster-delete enforce Compliant 15m
policy-lvmcluster-inform inform Compliant 15m
After both the policies are compliant, save the following YAML to a file with a name such as lvms-uninstall-policy.yaml to create a policy to uninstall LVM Storage.
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
name: placement-uninstall-lvms
spec:
clusterConditions:
- status: "True"
type: ManagedClusterConditionAvailable
clusterSelector:
matchExpressions:
- key: mykey
operator: In
values:
- myvalue
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
name: binding-uninstall-lvms
placementRef:
apiGroup: apps.open-cluster-management.io
kind: PlacementRule
name: placement-uninstall-lvms
subjects:
- apiGroup: policy.open-cluster-management.io
kind: Policy
name: uninstall-lvms
---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
annotations:
policy.open-cluster-management.io/categories: CM Configuration Management
policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
policy.open-cluster-management.io/standards: NIST SP 800-53
name: uninstall-lvms
spec:
disabled: false
policy-templates:
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
name: uninstall-lvms
spec:
object-templates:
- complianceType: mustnothave
objectDefinition:
apiVersion: v1
kind: Namespace
metadata:
name: openshift-storage
- complianceType: mustnothave
objectDefinition:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: openshift-storage-operatorgroup
namespace: openshift-storage
spec:
targetNamespaces:
- openshift-storage
- complianceType: mustnothave
objectDefinition:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: lvms-operator
namespace: openshift-storage
remediationAction: enforce
severity: low
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
name: policy-remove-lvms-crds
spec:
object-templates:
- complianceType: mustnothave
objectDefinition:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: logicalvolumes.topolvm.io
- complianceType: mustnothave
objectDefinition:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: lvmclusters.lvm.topolvm.io
- complianceType: mustnothave
objectDefinition:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: lvmvolumegroupnodestatuses.lvm.topolvm.io
- complianceType: mustnothave
objectDefinition:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: lvmvolumegroups.lvm.topolvm.io
remediationAction: enforce
severity: high
Create the policy by running the following command:
# oc create -f lvms-uninstall-policy.yaml -n lvms-policy-ns
Additional resources
Creating a Logical Volume Manager cluster on a single-node OpenShift worker node
You can configure a single-node OpenShift worker node as a Logical Volume Manager cluster. On the control-plane single-node OpenShift node, LVM Storage detects and uses the additional worker nodes when the new nodes become active in the cluster.
Perform the following procedure to create a Logical Volume Manager cluster on a single-node OpenShift worker node.
You can also perform the same task by using the OKD web console.
Prerequisites
You have installed the OpenShift CLI (oc).
You have logged in as a user with cluster-admin privileges.
You installed LVM Storage in a single-node OpenShift cluster and have installed a worker node for use in the single-node OpenShift cluster.
Procedure
Create the LVMCluster custom resource (CR).
Save the following YAML in the lvmcluster.yaml file:
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
name: lvmcluster
spec:
storage:
deviceClasses: (1)
- name: vg1
default: true (2)
deviceSelector:
paths:
- /dev/disk/by-path/pci-0000:87:00.0-nvme-1
- /dev/disk/by-path/pci-0000:88:00.0-nvme-1
thinPoolConfig:
name: thin-pool-1
sizePercent: 90
overprovisionRatio: 10
nodeSelector: (3)
nodeSelectorTerms:
- matchExpressions:
- key: app
operator: In
values:
- test1
1 To create multiple device storage classes in the cluster, create a YAML array under deviceClasses for each required storage class. Configure the local device paths of the disks as an array of values in the deviceSelector field. When configuring multiple device classes, you must specify the device path for each device.
2 Mandatory: The LVMCluster resource must contain a single default storage class. Set default: false for secondary device storage classes. If you are upgrading the LVMCluster resource from a previous version, you must specify a single default device class.
3 Optional: To control what worker nodes the LVMCluster CR is applied to, specify a set of node selector labels. The specified labels must be present on the node in order for the LVMCluster to be scheduled on that node.
Create the LVMCluster CR:
$ oc create -f lvmcluster.yaml
Example output
lvmcluster/lvmcluster created
The LVMCluster resource creates the following system-managed CRs:
LVMVolumeGroup
Tracks individual volume groups across multiple nodes.
LVMVolumeGroupNodeStatus
Tracks the status of the volume groups on a node.
Verification
Verify that the LVMCluster resource has created the StorageClass, LVMVolumeGroup, and LVMVolumeGroupNodeStatus CRs.
Check that the LVMCluster CR is in a ready state by running the following command:
$ oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status.deviceClassStatuses[*]}'
Example output
{
"name": "vg1",
"nodeStatus": [
{
"devices": [
"/dev/nvme0n1",
"/dev/nvme1n1",
"/dev/nvme2n1"
],
"node": "kube-node",
"status": "Ready"
}
]
}
Check that the storage class is created:
$ oc get storageclass
Example output
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
lvms-vg1 topolvm.io Delete WaitForFirstConsumer true 31m
Check that the volume snapshot class is created:
$ oc get volumesnapshotclass
Example output
NAME DRIVER DELETIONPOLICY AGE
lvms-vg1 topolvm.io Delete 24h
Check that the LVMVolumeGroup resource is created:
$ oc get lvmvolumegroup vg1 -o yaml
Example output
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMVolumeGroup
metadata:
creationTimestamp: "2022-02-02T05:16:42Z"
generation: 1
name: vg1
namespace: lvm-operator-system
resourceVersion: "17242461"
uid: 88e8ad7d-1544-41fb-9a8e-12b1a66ab157
spec: {}
Check that the LVMVolumeGroupNodeStatus resource is created:
$ oc get lvmvolumegroupnodestatuses.lvm.topolvm.io kube-node -o yaml
Example output
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMVolumeGroupNodeStatus
metadata:
creationTimestamp: "2022-02-02T05:17:59Z"
generation: 1
name: kube-node
namespace: lvm-operator-system
resourceVersion: "17242882"
uid: 292de9bb-3a9b-4ee8-946a-9b587986dafd
spec:
nodeStatus:
- devices:
- /dev/nvme0n1
- /dev/nvme1n1
- /dev/nvme2n1
name: vg1
status: Ready
Additional resources
Provisioning storage using LVM Storage
You can provision persistent volume claims (PVCs) using the storage class that is created during the Operator installation. You can provision block and file PVCs; however, the storage is allocated only when a pod that uses the PVC is created.
LVM Storage provisions PVCs in units of 1 GiB. The requested storage is rounded up to the nearest GiB.
Procedure
Identify the StorageClass that is created when LVM Storage is deployed.
The StorageClass name is in the format lvms-<device-class-name>. The device-class-name is the name of the device class that you provided in the LVMCluster of the Policy YAML. For example, if the deviceClass is called vg1, then the storageClass name is lvms-vg1.
The volumeBindingMode of the storage class is set to WaitForFirstConsumer.
To create a PVC where the application requires storage, save the following YAML to a file with a name such as pvc.yaml.
Example YAML to create a PVC
# block pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: lvm-block-1
namespace: default
spec:
accessModes:
- ReadWriteOnce
volumeMode: Block
resources:
requests:
storage: 10Gi
storageClassName: lvms-vg1
---
# file pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: lvm-file-1
namespace: default
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 10Gi
storageClassName: lvms-vg1
Create the PVC by running the following command:
# oc create -f pvc.yaml -n <application_namespace>
The created PVCs remain in the Pending state until you deploy the pods that use them.
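As a minimal sketch of how a pending PVC becomes bound, the following pod mounts the lvm-file-1 PVC from the earlier example; the pod name and image are placeholder values:
apiVersion: v1
kind: Pod
metadata:
  name: app-file-1
  namespace: default
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal:latest
    command: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: lvm-file-1
Because the storage class uses WaitForFirstConsumer, the PVC is bound and storage is allocated only after this pod is scheduled.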
Monitoring LVM Storage
When LVM Storage is installed using the OKD Web Console, you can monitor the cluster by using the Block and File dashboard in the console by default. However, when you use RHACM to install LVM Storage, you need to configure RHACM Observability to monitor all the single-node OpenShift clusters from one place.
Metrics
You can monitor LVM Storage by viewing the metrics exported by the Operator on the RHACM dashboards and the alerts that are triggered.
Add the following topolvm metrics to the allow list:
topolvm_thinpool_data_percent
topolvm_thinpool_metadata_percent
topolvm_thinpool_size_bytes
Metrics are updated every 10 minutes or when there is a change in the thin pool, such as a new logical volume creation.
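In RHACM, custom metrics are typically added to the allow list through the observability-metrics-custom-allowlist config map in the open-cluster-management-observability namespace. The following is a sketch of that config map for the topolvm metrics; verify the exact mechanism against your RHACM Observability documentation:
apiVersion: v1
kind: ConfigMap
metadata:
  name: observability-metrics-custom-allowlist
  namespace: open-cluster-management-observability
data:
  metrics_list.yaml: |
    names:
      - topolvm_thinpool_data_percent
      - topolvm_thinpool_metadata_percent
      - topolvm_thinpool_size_bytes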
Alerts
When the thin pool and volume group are filled up, further operations fail and might lead to data loss. LVM Storage sends the following alerts about the usage of the thin pool and volume group when utilization crosses a certain value:
Alerts for Logical Volume Manager cluster in RHACM
Alert | Description |
---|---|
| This alert is triggered when both the volume group and thin pool utilization cross 75% on nodes. Data deletion or volume group expansion is required. |
| This alert is triggered when both the volume group and thin pool utilization cross 85% on nodes. |
| This alert is triggered when the thin pool data utilization in the volume group crosses 75% on nodes. Data deletion or thin pool expansion is required. |
| This alert is triggered when the thin pool data utilization in the volume group crosses 85% on nodes. Data deletion or thin pool expansion is required. |
| This alert is triggered when the thin pool metadata utilization in the volume group crosses 75% on nodes. Data deletion or thin pool expansion is required. |
| This alert is triggered when the thin pool metadata utilization in the volume group crosses 85% on nodes. Data deletion or thin pool expansion is required. |
Additional resources
Scaling storage of single-node OpenShift clusters
OKD supports additional worker nodes for single-node OpenShift clusters on bare-metal user-provisioned infrastructure. LVM Storage detects and uses the new additional worker nodes when the nodes show up.
Additional resources
Scaling up storage by adding capacity to your single-node OpenShift cluster
To scale the storage capacity of your configured worker nodes on a single-node OpenShift cluster, you can increase the capacity by adding disks.
Prerequisites
- You have additional unused disks on each single-node OpenShift cluster to be used by LVM Storage.
Procedure
Log in to OKD console of the single-node OpenShift cluster.
From the Operators → Installed Operators page, click the LVM Storage Operator in the openshift-storage namespace.
Click the LVMCluster tab to list the LVMCluster CR created on the cluster.
Select Edit LVMCluster from the Actions drop-down menu.
Click the YAML tab.
Edit the LVMCluster CR YAML to add the new device path in the deviceSelector section:
If the deviceSelector field was not included during the LVMCluster creation, it is not possible to add the deviceSelector section to the CR. You must remove the LVMCluster and then create a new CR.
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
name: my-lvmcluster
spec:
storage:
deviceClasses:
- name: vg1
default: true
deviceSelector:
paths:
- /dev/disk/by-path/pci-0000:87:00.0-nvme-1 (1)
- /dev/disk/by-path/pci-0000:88:00.0-nvme-1
- /dev/disk/by-path/pci-0000:89:00.0-nvme-1 (2)
thinPoolConfig:
name: thin-pool-1
sizePercent: 90
overprovisionRatio: 10
1 The path can be added by name (/dev/sdb) or by path.
2 A new disk is added.
Additional resources
Scaling up storage by adding capacity to your single-node OpenShift cluster using RHACM
You can scale the storage capacity of your configured worker nodes on a single-node OpenShift cluster using RHACM.
Prerequisites
You have access to the RHACM cluster using an account with cluster-admin privileges.
You have additional unused disks on each single-node OpenShift cluster to be used by LVM Storage.
Procedure
Log in to the RHACM CLI using your OKD credentials.
Find the disk that you want to add. The disk that you add must match the device name and path of the existing disks.
To add capacity to the single-node OpenShift cluster, edit the deviceSelector section of the existing policy YAML, for example, policy-lvms-operator.yaml.
If the deviceSelector field was not included during the LVMCluster creation, it is not possible to add the deviceSelector section to the CR. You must remove the LVMCluster and then re-create it from the new CR.
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
name: placement-install-lvms
spec:
clusterConditions:
- status: "True"
type: ManagedClusterConditionAvailable
clusterSelector:
matchExpressions:
- key: mykey
operator: In
values:
- myvalue
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
name: binding-install-lvms
placementRef:
apiGroup: apps.open-cluster-management.io
kind: PlacementRule
name: placement-install-lvms
subjects:
- apiGroup: policy.open-cluster-management.io
kind: Policy
name: install-lvms
---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
annotations:
policy.open-cluster-management.io/categories: CM Configuration Management
policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
policy.open-cluster-management.io/standards: NIST SP 800-53
name: install-lvms
spec:
disabled: false
remediationAction: enforce
policy-templates:
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
name: install-lvms
spec:
object-templates:
- complianceType: musthave
objectDefinition:
apiVersion: v1
kind: Namespace
metadata:
labels:
openshift.io/cluster-monitoring: "true"
pod-security.kubernetes.io/enforce: privileged
pod-security.kubernetes.io/audit: privileged
pod-security.kubernetes.io/warn: privileged
name: openshift-storage
- complianceType: musthave
objectDefinition:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: openshift-storage-operatorgroup
namespace: openshift-storage
spec:
targetNamespaces:
- openshift-storage
- complianceType: musthave
objectDefinition:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: lvms
namespace: openshift-storage
spec:
installPlanApproval: Automatic
name: lvms-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
remediationAction: enforce
severity: low
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
name: lvms
spec:
object-templates:
- complianceType: musthave
objectDefinition:
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
name: my-lvmcluster
namespace: openshift-storage
spec:
storage:
deviceClasses:
- name: vg1
default: true
deviceSelector:
paths:
- /dev/disk/by-path/pci-0000:87:00.0-nvme-1
- /dev/disk/by-path/pci-0000:88:00.0-nvme-1
- /dev/disk/by-path/pci-0000:89:00.0-nvme-1 # new disk is added
thinPoolConfig:
name: thin-pool-1
sizePercent: 90
overprovisionRatio: 10
nodeSelector:
nodeSelectorTerms:
- matchExpressions:
- key: app
operator: In
values:
- test1
remediationAction: enforce
severity: low
Edit the policy by running the following command:
# oc edit -f policy-lvms-operator.yaml -n lvms-policy-ns (1)
1 The policy-lvms-operator.yaml is the name of the existing policy file.
This uses the new disk specified in the LVMCluster CR to provision storage.
Additional resources
Expanding PVCs
To leverage the new storage after adding additional capacity, you can expand existing persistent volume claims (PVCs) with LVM Storage.
Prerequisites
Dynamic provisioning is used.
The controlling StorageClass object has allowVolumeExpansion set to true.
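You can check this on the storage class created by LVM Storage; the storage class name below assumes the vg1 device class from the earlier examples:
$ oc get storageclass lvms-vg1 -o jsonpath='{.allowVolumeExpansion}'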
Procedure
Modify the .spec.resources.requests.storage field in the desired PVC resource to the new size by running the following command:
$ oc patch pvc <pvc_name> -n <application_namespace> -p '{ "spec": { "resources": { "requests": { "storage": "<desired_size>" }}}}'
Watch the status.conditions field of the PVC to see if the resize has completed. OKD adds the Resizing condition to the PVC during expansion, which is removed after the expansion completes.
Additional resources
Scaling up storage by adding capacity to your single-node OpenShift cluster
Scaling up storage by adding capacity to your single-node OpenShift cluster using RHACM
Upgrading LVM Storage on single-node OpenShift clusters
Currently, it is not possible to upgrade from OpenShift Data Foundation Logical Volume Manager Operator 4.11 to LVM Storage 4.12 on single-node OpenShift clusters.
The data will not be preserved during this process.
Procedure
Back up any data that you want to preserve on the persistent volume claims (PVCs).
Delete all PVCs provisioned by the OpenShift Data Foundation Logical Volume Manager Operator and their pods.
Reinstall LVM Storage on OKD 4.12.
Recreate the workloads.
Copy the backup data to the PVCs created after upgrading to 4.12.
Volume snapshots for single-node OpenShift
You can take volume snapshots of persistent volumes (PVs) that are provisioned by LVM Storage. You can also create volume snapshots of the cloned volumes. Volume snapshots help you to do the following:
Back up your application data.
Volume snapshots are located on the same devices as the original data. To use the volume snapshots as backups, you need to move the snapshots to a secure location. You can use OpenShift API for Data Protection backup and restore solutions.
Revert to a state at which the volume snapshot was taken.
Additional resources
Creating volume snapshots in single-node OpenShift
You can create volume snapshots based on the available capacity of the thin pool and the overprovisioning limits. LVM Storage creates a VolumeSnapshotClass with the lvms-<deviceclass-name> name.
Prerequisites
You ensured that the persistent volume claim (PVC) is in Bound state. This is required for a consistent snapshot.
You stopped all the I/O to the PVC before taking the snapshot.
Procedure
Log in to the single-node OpenShift for which you need to run the oc command.
Save the following YAML to a file with a name such as lvms-vol-snapshot.yaml.
Example YAML to create a volume snapshot
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
name: lvm-block-1-snap
spec:
volumeSnapshotClassName: lvms-vg1
source:
persistentVolumeClaimName: lvm-block-1
Create the snapshot by running the following command in the same namespace as the PVC:
# oc create -f lvms-vol-snapshot.yaml
A read-only copy of the PVC is created as a volume snapshot.
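To confirm that the snapshot is ready to use, you can list it and check the READYTOUSE column; the snapshot name matches the example above:
$ oc get volumesnapshot lvm-block-1-snap -n <namespace>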
Restoring volume snapshots in single-node OpenShift
When you restore a volume snapshot, a new persistent volume claim (PVC) is created. The restored PVC is independent of the volume snapshot and the source PVC.
Prerequisites
The storage class must be the same as that of the source PVC.
The size of the requested PVC must be the same as that of the source volume of the snapshot.
A snapshot must be restored to a PVC of the same size as the source volume of the snapshot. If a larger PVC is required, you can resize the PVC after the snapshot is restored successfully.
Procedure
Identify the storage class name of the source PVC and volume snapshot name.
Save the following YAML to a file with a name such as lvms-vol-restore.yaml to restore the snapshot.
Example YAML to restore a PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: lvm-block-1-restore
spec:
accessModes:
- ReadWriteOnce
volumeMode: Block
resources:
requests:
storage: 2Gi
storageClassName: lvms-vg1
dataSource:
name: lvm-block-1-snap
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
Create the PVC by running the following command in the same namespace as the snapshot:
# oc create -f lvms-vol-restore.yaml
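You can then list the restored PVC to confirm that it was created; like other LVM Storage PVCs, it remains in the Pending state until a pod consumes it because the storage class uses WaitForFirstConsumer binding:
$ oc get pvc lvm-block-1-restore -n <namespace>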
Deleting volume snapshots in single-node OpenShift
You can delete volume snapshots resources and persistent volume claims (PVCs).
Procedure
Delete the volume snapshot resource by running the following command:
# oc delete volumesnapshot <volume_snapshot_name> -n <namespace>
When you delete a persistent volume claim (PVC), the snapshots of the PVC are not deleted.
To delete the restored volume snapshot, delete the PVC that was created to restore the volume snapshot by running the following command:
# oc delete pvc <pvc_name> -n <namespace>
Volume cloning for single-node OpenShift
A clone is a duplicate of an existing storage volume that can be used like any standard volume.
Creating volume clones in single-node OpenShift
You create a clone of a volume to make a point-in-time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size.
The cloned PVC has write access.
Prerequisites
You ensured that the PVC is in Bound state. This is required for a consistent snapshot.
You ensured that the StorageClass is the same as that of the source PVC.
Procedure
Identify the storage class of the source PVC.
To create a volume clone, save the following YAML to a file with a name such as lvms-vol-clone.yaml:
Example YAML to clone a volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: lvm-block-1-clone
spec:
storageClassName: lvms-vg1
dataSource:
name: lvm-block-1
kind: PersistentVolumeClaim
accessModes:
- ReadWriteOnce
volumeMode: Block
resources:
requests:
storage: 2Gi
Create the PVC in the same namespace as the source PVC by running the following command:
# oc create -f lvms-vol-clone.yaml
Deleting cloned volumes in single-node OpenShift
You can delete cloned volumes.
Procedure
To delete the cloned volume, delete the cloned PVC by running the following command:
# oc delete pvc <clone_pvc_name> -n <namespace>
Downloading log files and diagnostic information using must-gather
When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or the Red Hat Support can review the problem and determine a solution.
Run the following must-gather command from a client connected to the LVM Storage cluster:
$ oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel8:v4.13 --dest-dir=<directory-name>
Additional resources
LVM Storage reference YAML file
The sample LVMCluster
custom resource (CR) describes all the fields in the YAML file.
Example LVMCluster CR
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
name: my-lvmcluster
spec:
tolerations:
- effect: NoSchedule
key: xyz
operator: Equal
value: "true"
storage:
deviceClasses: (1)
- name: vg1 (2)
default: true
nodeSelector: (3)
nodeSelectorTerms: (4)
- matchExpressions:
- key: mykey
operator: In
values:
- ssd
deviceSelector: (5)
paths:
- /dev/disk/by-path/pci-0000:87:00.0-nvme-1
- /dev/disk/by-path/pci-0000:88:00.0-nvme-1
- /dev/disk/by-path/pci-0000:89:00.0-nvme-1
thinPoolConfig: (6)
name: thin-pool-1 (7)
sizePercent: 90 (8)
overprovisionRatio: 10 (9)
status:
deviceClassStatuses: (10)
- name: vg1
nodeStatus: (11)
- devices: (12)
- /dev/nvme0n1
- /dev/nvme1n1
- /dev/nvme2n1
node: my-node.example.com (13)
status: Ready (14)
ready: true (15)
state: Ready (16)
1 | The LVM volume groups to be created on the cluster. Currently, only a single deviceClass is supported. |
2 | The name of the LVM volume group to be created on the nodes. |
3 | The nodes on which to create the LVM volume group. If the field is empty, all nodes are considered. |
4 | A list of node selector requirements. |
5 | A list of device paths which is used to create the LVM volume group. If this field is empty, all unused disks on the node will be used. |
6 | The LVM thin pool configuration. |
7 | The name of the thin pool to be created in the LVM volume group. |
8 | The percentage of remaining space in the LVM volume group that should be used for creating the thin pool. |
9 | The factor by which additional storage can be provisioned compared to the available storage in the thin pool. |
10 | The status of the deviceClass . |
11 | The status of the LVM volume group on each node. |
12 | The list of devices used to create the LVM volume group. |
13 | The node on which the deviceClass was created. |
14 | The status of the LVM volume group on the node. |
15 | This field is deprecated. |
16 | The status of the LVMCluster . |