- Post-installation storage configuration
- Dynamic provisioning
- Defining a storage class
- Changing the default storage class
- Optimizing storage
- Available persistent storage options
- Recommended configurable storage technology
- Deploy Red Hat OpenShift Data Foundation
- Additional resources
Post-installation storage configuration
After installing OKD, you can further expand and customize your cluster to your requirements, including storage configuration.
Dynamic provisioning
About dynamic provisioning
The StorageClass resource object describes and classifies storage that can be requested, and provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster administrators (cluster-admin) or storage administrators (storage-admin) define and create the StorageClass objects that users can request without needing any detailed knowledge about the underlying storage volume sources.
The OKD persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure.
Many storage types are available for use as persistent volumes in OKD. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plugin APIs.
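For example, a developer can request dynamically provisioned storage by creating a persistent volume claim (PVC) that names a storage class. The following is a minimal sketch; the claim name and storage class name are placeholders that you replace with values from your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: <storage-class-name> # a class defined by a cluster or storage administrator
  resources:
    requests:
      storage: 10Gi

When this claim is created, the provisioner configured for the named storage class dynamically creates a persistent volume that satisfies the request and binds it to the claim.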
Available dynamic provisioning plugins
OKD provides the following provisioner plugins, which have generic implementations for dynamic provisioning that use the cluster’s configured provider’s API to create new storage resources:
Storage type | Provisioner plugin name | Notes |
---|---|---|
OpenStack Cinder | kubernetes.io/cinder | |
OpenStack Manila Container Storage Interface (CSI) | manila.csi.openstack.org | Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. |
AWS Elastic Block Store (EBS) | kubernetes.io/aws-ebs | For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=owned, where <cluster_name> is the name of the cluster. |
Azure Disk | kubernetes.io/azure-disk | |
Azure File | kubernetes.io/azure-file | The persistent-volume-binder service account requires permissions to create and get secrets to store the Azure storage account and keys. |
GCE Persistent Disk (gcePD) | kubernetes.io/gce-pd | In multi-zone configurations, it is advisable to run one OKD cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. |

Any chosen provisioner plugin also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation.
Defining a storage class
StorageClass objects are currently globally scoped and must be created by cluster-admin or storage-admin users.
The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the Operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class.
The following sections describe the basic definition for a StorageClass object and specific examples for each of the supported plugin types.
Basic StorageClass object definition
The following resource shows the parameters and default values that you use to configure a storage class. This example uses the AWS Elastic Block Store (EBS) object definition.
Sample StorageClass definition
kind: StorageClass (1)
apiVersion: storage.k8s.io/v1 (2)
metadata:
name: <storage-class-name> (3)
annotations: (4)
storageclass.kubernetes.io/is-default-class: 'true'
...
provisioner: kubernetes.io/aws-ebs (5)
parameters: (6)
type: gp3
...
1 | (required) The API object type. |
2 | (required) The current apiVersion. |
3 | (required) The name of the storage class. |
4 | (optional) Annotations for the storage class. |
5 | (required) The type of provisioner associated with this storage class. |
6 | (optional) The parameters required for the specific provisioner; these vary from plugin to plugin. |
Storage class annotations
To set a storage class as the cluster-wide default, add the following annotation to your storage class metadata:
storageclass.kubernetes.io/is-default-class: "true"
For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
storageclass.kubernetes.io/is-default-class: "true"
...
This enables any persistent volume claim (PVC) that does not specify a storage class to be provisioned automatically through the default storage class. Your cluster can have more than one storage class, but only one of them can be the default storage class.
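For example, the following PVC sets no storageClassName at all, so it is provisioned through whichever storage class is currently the default. This is a minimal sketch with a placeholder claim name:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-default-claim # hypothetical name; storageClassName is deliberately omitted
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi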
The beta annotation storageclass.beta.kubernetes.io/is-default-class still works; however, it will be removed in a future release.
To set a storage class description, add the following annotation to your storage class metadata:
kubernetes.io/description: My Storage Class Description
For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
kubernetes.io/description: My Storage Class Description
...
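Assuming a storage class was created with this annotation, you can read the description back with a command such as the following; the storage class name is a placeholder:

$ oc get storageclass <storage-class-name> -o jsonpath='{.metadata.annotations.kubernetes\.io/description}'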
OpenStack Cinder object definition
cinder-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: <storage-class-name> (1)
provisioner: kubernetes.io/cinder
parameters:
type: fast (2)
availability: nova (3)
fsType: ext4 (4)
1 | Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. |
2 | Volume type created in Cinder. Default is empty. |
3 | Availability Zone. If not specified, volumes are generally round-robined across all active zones where the OKD cluster has a node. |
4 | File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes, and the file system is created when the volume is mounted for the first time. The default value is ext4. |
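The sample files in this section are applied like any other resource definition. For example, assuming you saved the definition above as cinder-storageclass.yaml, you can create the storage class by running:

$ oc create -f cinder-storageclass.yaml

The same pattern applies to the other object definitions that follow.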
AWS Elastic Block Store (EBS) object definition
aws-ebs-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: <storage-class-name> (1)
provisioner: kubernetes.io/aws-ebs
parameters:
type: io1 (2)
iopsPerGB: "10" (3)
encrypted: "true" (4)
kmsKeyId: keyvalue (5)
fsType: ext4 (6)
1 | (required) Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. |
2 | (required) Select from io1, gp3, sc1, st1. The default is gp3. |
3 | Optional: Only for io1 volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this with the size of the requested volume to compute IOPS of the volume; for example, iopsPerGB: "10" on a 100 GiB volume yields 1,000 IOPS. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See the AWS documentation for further details. |
4 | Optional: Denotes whether to encrypt the EBS volume. Valid values are true or false. |
5 | Optional: The full ARN of the key to use when encrypting the volume. If none is supplied, but encrypted is set to true, then AWS generates a key. See the AWS documentation for a valid ARN value. |
6 | Optional: File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes, and the file system is created when the volume is mounted for the first time. The default value is ext4. |
Azure Disk object definition
azure-advanced-disk-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: <storage-class-name> (1)
provisioner: kubernetes.io/azure-disk
volumeBindingMode: WaitForFirstConsumer (2)
allowVolumeExpansion: true
parameters:
kind: Managed (3)
storageaccounttype: Premium_LRS (4)
reclaimPolicy: Delete
1 | Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. |
2 | Using WaitForFirstConsumer is strongly recommended. This provisions the volume while allowing enough storage to schedule the pod on a free worker node from an available zone. |
3 | Possible values are Shared (default), Managed, and Dedicated. |
4 | Azure storage account SKU tier. Default is empty. Note that Premium VMs can attach both Standard_LRS and Premium_LRS disks, Standard VMs can only attach Standard_LRS disks, managed VMs can only attach managed disks, and unmanaged VMs can only attach unmanaged disks. |
Azure File object definition
The Azure File storage class uses secrets to store the Azure storage account name and the storage account key that are required to create an Azure Files share. These permissions are created as part of the following procedure.
Procedure
Define a ClusterRole object that allows access to create and view secrets:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # name: system:azure-cloud-provider
  name: <persistent-volume-binder-role> (1)
rules:
- apiGroups: ['']
  resources: ['secrets']
  verbs: ['get','create']

1 The name of the cluster role to view and create secrets.

Add the cluster role to the service account:

$ oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder
Create the Azure File StorageClass object:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <azure-file> (1)
provisioner: kubernetes.io/azure-file
parameters:
  location: eastus (2)
  skuName: Standard_LRS (3)
  storageAccount: <storage-account> (4)
reclaimPolicy: Delete
volumeBindingMode: Immediate

1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes.
2 Location of the Azure storage account, such as eastus. Default is empty, meaning that a new Azure storage account will be created in the OKD cluster’s location.
3 SKU tier of the Azure storage account, such as Standard_LRS. Default is empty, meaning that a new Azure storage account will be created with the Standard_LRS SKU.
4 Name of the Azure storage account. If a storage account is provided, then skuName and location are ignored. If no storage account is provided, then the storage class searches for any storage account that is associated with the resource group for any accounts that match the defined skuName and location.
Considerations when using Azure File
The following file system features are not supported by the default Azure File storage class:
Symlinks
Hard links
Extended attributes
Sparse files
Named pipes
Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. The uid mount option can be specified in the StorageClass object to define a specific user identifier to use for the mounted directory.

The following StorageClass object demonstrates modifying the user and group identifier, along with enabling symlinks for the mounted directory.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: azure-file
mountOptions:
- uid=1500 (1)
- gid=1500 (2)
- mfsymlinks (3)
provisioner: kubernetes.io/azure-file
parameters:
location: eastus
skuName: Standard_LRS
reclaimPolicy: Delete
volumeBindingMode: Immediate
1 | Specifies the user identifier to use for the mounted directory. |
2 | Specifies the group identifier to use for the mounted directory. |
3 | Enables symlinks. |
GCE PersistentDisk (gcePD) object definition
gce-pd-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: <storage-class-name> (1)
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard (2)
replication-type: none
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
1 | Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. |
2 | Select either pd-standard or pd-ssd. The default is pd-standard. |
VMware vSphere object definition
vsphere-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: <storage-class-name> (1)
provisioner: kubernetes.io/vsphere-volume (2)
parameters:
diskformat: thin (3)
1 | Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. |
2 | For more information about using VMware vSphere with OKD, see the VMware vSphere documentation. |
3 | diskformat: thin, zeroedthick, and eagerzeroedthick are all valid disk formats. See the vSphere documentation for additional details about the disk format types. The default value is thin. |
oVirt object definition
OKD creates a default object of type StorageClass
named ovirt-csi-sc
which is used for creating dynamically provisioned persistent volumes.
To create additional storage classes for different configurations, create and save a file with the StorageClass
object described by the following sample YAML:
ovirt-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: <storage_class_name> (1)
annotations:
storageclass.kubernetes.io/is-default-class: "<boolean>" (2)
provisioner: csi.ovirt.org
allowVolumeExpansion: <boolean> (3)
reclaimPolicy: Delete (4)
volumeBindingMode: Immediate (5)
parameters:
storageDomainName: <rhv-storage-domain-name> (6)
thinProvisioning: "<boolean>" (7)
csi.storage.k8s.io/fstype: <file_system_type> (8)
1 | Name of the storage class. |
2 | Set to true to make this storage class the default storage class in the cluster. If set to true, the existing default storage class must be edited and set to false. |
3 | true enables dynamic volume expansion; false prevents it. true is recommended. |
4 | Dynamically provisioned persistent volumes of this storage class are created with this reclaim policy. The default policy is Delete. |
5 | Indicates how to provision and bind PersistentVolumeClaims. When not set, VolumeBindingImmediate is used. This field is only applied by servers that enable the VolumeScheduling feature. |
6 | The oVirt storage domain name to use. |
7 | If true, the disk is thin provisioned. If false, the disk is preallocated. Thin provisioning is recommended. |
8 | Optional: File system type to be created. Possible values: ext4 (default) or xfs. |
Changing the default storage class
Use the following procedure to change the default storage class.
For example, if you have two defined storage classes, gp3 and standard, you can change the default storage class from gp3 to standard.
Prerequisites
- Access to the cluster with cluster-admin privileges.
Procedure
To change the default storage class:
List the storage classes:
$ oc get storageclass
Example output
NAME TYPE
gp3 (default) kubernetes.io/aws-ebs (1)
standard kubernetes.io/aws-ebs
1 (default) indicates the default storage class.

Make the desired storage class the default.

For the desired storage class, set the storageclass.kubernetes.io/is-default-class annotation to true by running the following command:

$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
You can have multiple default storage classes for a short time. However, you should ensure that only one default storage class exists eventually.
With multiple default storage classes present, any persistent volume claim (PVC) requesting the default storage class (pvc.spec.storageClassName=nil) gets the most recently created default storage class, regardless of the default status of that storage class, and the administrator receives an alert in the alerts dashboard that there are multiple default storage classes, MultipleDefaultStorageClasses.

Remove the default storage class setting from the old default storage class.

For the old default storage class, change the value of the storageclass.kubernetes.io/is-default-class annotation to false by running the following command:

$ oc patch storageclass gp3 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
Verify the changes:
$ oc get storageclass
Example output
NAME TYPE
gp3 kubernetes.io/aws-ebs
standard (default) kubernetes.io/aws-ebs
Optimizing storage
Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources are working in an efficient manner.
Available persistent storage options
Understand your persistent storage options so that you can optimize your OKD environment.
Storage type | Description | Examples |
---|---|---|
Block | Presented to the operating system (OS) as a block device. Suitable for applications that need full control of storage and operate at a low level on files, bypassing the file system. Also referred to as a storage area network (SAN). Non-shareable, which means that only one client at a time can mount an endpoint of this type. | AWS EBS and VMware vSphere support dynamic persistent volume (PV) provisioning natively in OKD. |
File | Presented to the OS as a file system export to be mounted. Also referred to as network-attached storage (NAS). Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales. | RHEL NFS, NetApp NFS [1], and Vendor NFS |
Object | Accessible through a REST API endpoint. Configurable for use in the OKD image registry. Applications must build their drivers into the application and/or container. | AWS S3 |

- NetApp NFS supports dynamic PV provisioning when using the Trident plugin.

Currently, CNS is not supported in OKD 4.13.
Recommended configurable storage technology
The following table summarizes the recommended and configurable storage technologies for the given OKD cluster application.
Storage type | ROX1 | RWX2 | Registry | Scaled registry | Metrics3 | Logging | Apps |
---|---|---|---|---|---|---|---|
Block | Yes4 | No | Configurable | Not configurable | Recommended | Recommended | Recommended |
File | Yes4 | Yes | Configurable | Configurable | Configurable5 | Configurable6 | Recommended |
Object | Yes | Yes | Recommended | Recommended | Not configurable | Not configurable | Not configurable7 |
1 ReadOnlyMany

2 ReadWriteMany

3 Prometheus is the underlying technology used for metrics.

4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk.

5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics.

6 For logging, review the recommended storage solution in the Configuring persistent storage for the log store section. Using NFS storage as a persistent volume or through NAS, such as Gluster, can corrupt the data. Hence, NFS is not supported for Elasticsearch storage and LokiStack log store in OKD Logging. You must use one persistent volume type per log store.

7 Object storage is not consumed through OKD’s PVs or PVCs. Apps must integrate with the object storage REST API.

A scaled registry is an OpenShift image registry where two or more pod replicas are running.
Specific application storage recommendations
Testing shows issues with using the NFS server on Fedora as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using Fedora NFS to back PVs used by core services is not recommended. Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that might have been completed against these OKD core components.
Registry
In a non-scaled/high-availability (HA) OpenShift image registry cluster deployment:
The storage technology does not have to support RWX access mode.
The storage technology must ensure read-after-write consistency.
The preferred storage technology is object storage followed by block storage.
File storage is not recommended for OpenShift image registry cluster deployment with production workloads.
Scaled registry
In a scaled/HA OpenShift image registry cluster deployment:
The storage technology must support RWX access mode.
The storage technology must ensure read-after-write consistency.
The preferred storage technology is object storage.
Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported.
Object storage should be S3 or Swift compliant.
For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage.
Block storage is not configurable.
Metrics
In an OKD hosted metrics cluster deployment:
The preferred storage technology is block storage.
Object storage is not configurable.
It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads.
Logging
In an OKD hosted logging cluster deployment:
The preferred storage technology is block storage.
Object storage is not configurable.
Applications
Application use cases vary from application to application, as described in the following examples:
Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster.
Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer.
Other specific application storage recommendations
It is not recommended to use RAID configurations on write-intensive workloads.
OpenStack Cinder: OpenStack Cinder tends to be adept in ROX access mode use cases.
Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage.
The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices.
Additional resources
Deploy Red Hat OpenShift Data Foundation
Red Hat OpenShift Data Foundation provides provider-agnostic persistent storage for OKD, supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OKD for deployment, management, and monitoring.
If you are looking for Red Hat OpenShift Data Foundation information about… | See the following Red Hat OpenShift Data Foundation documentation: |
---|---|
What’s new, known issues, notable bug fixes, and Technology Previews | |
Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations | |
Instructions on deploying OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster | |
Instructions on deploying OpenShift Data Foundation to local storage on bare metal infrastructure | Deploying OpenShift Data Foundation 4.12 using bare metal infrastructure |
Instructions on deploying OpenShift Data Foundation on Red Hat OKD VMware vSphere clusters | |
Instructions on deploying OpenShift Data Foundation using Amazon Web Services for local or cloud storage | Deploying OpenShift Data Foundation 4.12 using Amazon Web Services |
Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OKD Google Cloud clusters | Deploying and managing OpenShift Data Foundation 4.12 using Google Cloud |
Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OKD Azure clusters | Deploying and managing OpenShift Data Foundation 4.12 using Microsoft Azure |
Instructions on deploying OpenShift Data Foundation to use local storage on IBM Power infrastructure | |
Instructions on deploying OpenShift Data Foundation to use local storage on IBM Z infrastructure | |
Allocating storage to core services and hosted applications in Red Hat OpenShift Data Foundation, including snapshot and clone | |
Managing storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa) | |
Safely replacing storage devices for Red Hat OpenShift Data Foundation | |
Safely replacing a node in a Red Hat OpenShift Data Foundation cluster | |
Scaling operations in Red Hat OpenShift Data Foundation | |
Monitoring a Red Hat OpenShift Data Foundation 4.12 cluster | |
Resolve issues encountered during operations | |
Migrating your OKD cluster from version 3 to version 4 |