- AWS Elastic File System CSI Driver Operator
- Overview
- About CSI
- Installing the AWS EFS CSI Driver Operator
- Configuring AWS EFS CSI Driver Operator with Security Token Service
- Creating the AWS EFS storage class
- Creating and configuring access to EFS volumes in AWS
- Dynamic provisioning for AWS EFS
- Creating static PVs with AWS EFS
- AWS EFS security
- AWS EFS troubleshooting
- Uninstalling the AWS EFS CSI Driver Operator
- Additional resources
AWS Elastic File System CSI Driver Operator
Overview
OKD is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic File System (EFS).
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.
After installing the AWS EFS CSI Driver Operator, OKD installs the AWS EFS CSI Operator and the AWS EFS CSI driver by default in the openshift-cluster-csi-drivers namespace. This allows the AWS EFS CSI Driver Operator to create CSI-provisioned PVs that mount to AWS EFS assets.
The AWS EFS CSI Driver Operator, after being installed, does not create a storage class by default to use to create persistent volume claims (PVCs). However, you can manually create the AWS EFS StorageClass. The AWS EFS CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand. This eliminates the need for cluster administrators to pre-provision storage. The AWS EFS CSI driver enables you to create and mount AWS EFS PVs.
AWS EFS only supports regional volumes, not zonal volumes.
About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OKD users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
Installing the AWS EFS CSI Driver Operator
The AWS EFS CSI Driver Operator is not installed in OKD by default. Use the following procedure to install and configure the AWS EFS CSI Driver Operator in your cluster.
Prerequisites
- Access to the OKD web console.
Procedure
To install the AWS EFS CSI Driver Operator from the web console:
Log in to the web console.
Install the AWS EFS CSI Operator:
Click Operators → OperatorHub.
Locate the AWS EFS CSI Operator by typing AWS EFS CSI in the filter box.
Click the AWS EFS CSI Driver Operator button.
Be sure to select the AWS EFS CSI Driver Operator and not the AWS EFS Operator. The AWS EFS Operator is a community Operator and is not supported by Red Hat.
On the AWS EFS CSI Driver Operator page, click Install.
On the Install Operator page, ensure that:
All namespaces on the cluster (default) is selected.
Installed Namespace is set to openshift-cluster-csi-drivers.
Click Install.
After the installation finishes, the AWS EFS CSI Operator is listed in the Installed Operators section of the web console.
If you are using AWS EFS with AWS Security Token Service (STS), you must configure the AWS EFS CSI driver with STS. For more information, see “Configuring AWS EFS CSI Driver Operator with Security Token Service”.
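Alternatively, you can install the Operator from the CLI with Operator Lifecycle Manager (OLM) manifests before continuing with the driver installation below. The following is a minimal sketch, not the documented procedure; the package, channel, and catalog source names are assumptions that you should verify with oc get packagemanifests before applying:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-cluster-csi-drivers
  namespace: openshift-cluster-csi-drivers
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: aws-efs-csi-driver-operator
  namespace: openshift-cluster-csi-drivers
spec:
  channel: stable                     # assumed channel name
  name: aws-efs-csi-driver-operator   # assumed package name
  source: redhat-operators            # assumed catalog source
  sourceNamespace: openshift-marketplace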
Install the AWS EFS CSI Driver:
Click Administration → CustomResourceDefinitions → ClusterCSIDriver.
On the Instances tab, click Create ClusterCSIDriver.
Use the following YAML file:
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: efs.csi.aws.com
spec:
  managementState: Managed
Click Create.
Wait for the following Conditions to change to a “true” status:
AWSEFSDriverCredentialsRequestControllerAvailable
AWSEFSDriverNodeServiceControllerAvailable
AWSEFSDriverControllerServiceControllerAvailable
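You can also watch these conditions from the CLI; for example:
$ oc get clustercsidriver efs.csi.aws.com -o jsonpath='{range .status.conditions[*]}{.type}{"="}{.status}{"\n"}{end}'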
Additional resources
Configuring AWS EFS CSI Driver Operator with Security Token Service
This procedure explains how to configure the AWS EFS CSI Driver Operator with OKD on AWS Security Token Service (STS).
Perform this procedure after installing the AWS EFS CSI Operator, but before installing the AWS EFS CSI driver, as part of the “Installing the AWS EFS CSI Driver Operator” procedure. If you perform this procedure after installing the driver and creating volumes, your volumes will fail to mount into pods.
Prerequisites
- AWS account credentials
Procedure
To configure the AWS EFS CSI Driver Operator with STS:
Extract the CCO utility (ccoctl) binary from the OKD release image that you used to install the cluster with STS. For more information, see “Configuring the Cloud Credential Operator utility”.
Create and save an EFS CredentialsRequest YAML file, such as shown in the following example, and then place it in the credrequests directory:
Example
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: openshift-aws-efs-csi-driver
  namespace: openshift-cloud-credential-operator
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - action:
      - elasticfilesystem:*
      effect: Allow
      resource: '*'
  secretRef:
    name: aws-efs-cloud-credentials
    namespace: openshift-cluster-csi-drivers
  serviceAccountNames:
    - aws-efs-csi-driver-operator
    - aws-efs-csi-driver-controller-sa
Run the ccoctl tool to generate a new IAM role in AWS, and create a YAML file for it in the local file system (<path_to_ccoctl_output_dir>/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml):
$ ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com
where:
name=<name> is the name used to tag any cloud resources that are created for tracking.
region=<aws_region> is the AWS region where cloud resources are created.
dir=<path_to_directory_with_list_of_credentials_requests>/credrequests is the directory containing the EFS CredentialsRequest file created in the previous step.
<aws_account_id> is the AWS account ID.
Example
$ ccoctl aws create-iam-roles --name my-aws-efs --credentials-requests-dir credrequests --identity-provider-arn arn:aws:iam::123456789012:oidc-provider/my-aws-efs-oidc.s3.us-east-2.amazonaws.com
Example output
2022/03/21 06:24:44 Role arn:aws:iam::123456789012:role/my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud- created
2022/03/21 06:24:44 Saved credentials configuration to: /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml
2022/03/21 06:24:45 Updated Role policy for Role my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud-
Create the AWS EFS cloud credentials and secret:
$ oc create -f <path_to_ccoctl_output_dir>/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml
Example
$ oc create -f /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml
Example output
secret/aws-efs-cloud-credentials created
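You can verify that the secret was created in the namespace the driver expects:
$ oc get secret aws-efs-cloud-credentials -n openshift-cluster-csi-drivers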
Additional resources
Creating the AWS EFS storage class
Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
The AWS EFS CSI Driver Operator, after being installed, does not create a storage class by default. However, you can manually create the AWS EFS storage class.
Creating the AWS EFS storage class using the console
Procedure
In the OKD console, click Storage → StorageClasses.
On the StorageClasses page, click Create StorageClass.
On the StorageClass page, perform the following steps:
Enter a name to reference the storage class.
Optional: Enter the description.
Select the reclaim policy.
Select efs.csi.aws.com from the Provisioner drop-down list.
Optional: Set the configuration parameters for the selected provisioner.
Click Create.
Creating the AWS EFS storage class using the CLI
Procedure
Create a StorageClass object:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap (1)
  fileSystemId: fs-a5324911 (2)
  directoryPerms: "700" (3)
  gidRangeStart: "1000" (4)
  gidRangeEnd: "2000" (4)
  basePath: "/dynamic_provisioning" (5)
1 provisioningMode must be efs-ap to enable dynamic provisioning.
2 fileSystemId must be the ID of the EFS volume created manually.
3 directoryPerms is the default permission of the root directory of the volume. In this example, the volume is accessible only by the owner.
4 gidRangeStart and gidRangeEnd set the range of POSIX Group IDs (GIDs) that are used to set the GID of the AWS access point. If not specified, the default range is 50000-7000000. Each provisioned volume, and thus AWS access point, is assigned a unique GID from this range.
5 basePath is the directory on the EFS volume that is used to create dynamically provisioned volumes. In this case, a PV is provisioned as “/dynamic_provisioning/<random uuid>” on the EFS volume. Only the subdirectory is mounted to pods that use the PV.
A cluster admin can create several StorageClass objects, each using a different EFS volume.
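After defining the storage class, create it from a file and confirm that it is registered. The file name efs-sc.yaml is illustrative:
$ oc create -f efs-sc.yaml
$ oc get storageclass efs-sc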
Creating and configuring access to EFS volumes in AWS
This procedure explains how to create and configure EFS volumes in AWS so that you can use them in OKD.
Prerequisites
- AWS account credentials
Procedure
To create and configure access to an EFS volume in AWS:
On the AWS console, open https://console.aws.amazon.com/efs.
Click Create file system:
Enter a name for the file system.
For Virtual Private Cloud (VPC), select your OKD cluster’s virtual private cloud (VPC).
Accept default settings for all other selections.
Wait for the volume and mount targets to finish being fully created:
Click your volume, and on the Network tab wait for all mount targets to become available (~1-2 minutes).
On the Network tab, copy the Security Group ID (you will need this in the next step).
Go to https://console.aws.amazon.com/ec2/v2/home#SecurityGroups, and find the Security Group used by the EFS volume.
On the Inbound rules tab, click Edit inbound rules, and then add a new rule with the following settings to allow OKD nodes to access EFS volumes:
Type: NFS
Protocol: TCP
Port range: 2049
Source: Custom/IP address range of your nodes (for example: “10.0.0.0/16”)
This step allows OKD to use NFS ports from the cluster.
Save the rule.
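If you script your AWS setup, the same inbound rule can be added with the AWS CLI; a sketch with placeholder values:
$ aws ec2 authorize-security-group-ingress \
    --group-id <security_group_id> \
    --protocol tcp \
    --port 2049 \
    --cidr 10.0.0.0/16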
Dynamic provisioning for AWS EFS
The AWS EFS CSI driver supports a different form of dynamic provisioning than other CSI drivers. It provisions new PVs as subdirectories of a pre-existing EFS volume. The PVs are independent of each other. However, they all share the same EFS volume. When the volume is deleted, all PVs provisioned out of it are deleted too. The EFS CSI driver creates an AWS Access Point for each such subdirectory. Due to AWS Access Point limits, you can only dynamically provision 120 PVs from a single StorageClass/EFS volume.
In the example below, you request 5 GiB of space. However, the created PV is limitless and can store any amount of data (for example, petabytes). A broken application, or even a rogue application, can cause significant expenses when it stores too much data on the volume. Monitoring EFS volume sizes in AWS is strongly recommended.
Prerequisites
You have created AWS EFS volumes.
You have created the AWS EFS storage class.
Procedure
To enable dynamic provisioning:
Create a PVC (or StatefulSet or Template) as usual, referring to the StorageClass created above:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  storageClassName: efs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
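To confirm that provisioning works, you can mount the claim in a pod. This is a minimal sketch; the pod name, image, and mount path are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi   # illustrative image
      command: ["sh", "-c", "echo test > /data/out && sleep infinity"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: test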
If you have problems setting up dynamic provisioning, see AWS EFS troubleshooting.
Additional resources
Creating static PVs with AWS EFS
It is possible to use an AWS EFS volume as a single PV without any dynamic provisioning. The whole volume is mounted to pods.
Prerequisites
- You have created AWS EFS volumes.
Procedure
Create the PV using the following YAML file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity: (1)
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-ae66151a (2)
    volumeAttributes:
      encryptInTransit: "false" (3)
1 spec.capacity does not have any meaning and is ignored by the CSI driver. It is used only when binding to a PVC. Applications can store any amount of data to the volume.
2 volumeHandle must be the same ID as the EFS volume you created in AWS. If you are providing your own access point, volumeHandle should be <EFS volume ID>::<access point ID>. For example: fs-6e633ada::fsap-081a1d293f0004630.
3 If desired, you can disable encryption in transit. Encryption is enabled by default.
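To use the static PV, create a PVC that binds to it by name. This is a minimal sketch; the empty storageClassName disables dynamic provisioning, and the requested storage must be set even though EFS ignores it:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  storageClassName: ""
  volumeName: efs-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi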
If you have problems setting up static PVs, see AWS EFS troubleshooting.
AWS EFS security
The following information is important for AWS EFS security.
When using access points, for example, by using dynamic provisioning as described earlier, Amazon automatically replaces GIDs on files with the GID of the access point. In addition, EFS considers the user ID, group ID, and secondary group IDs of the access point when evaluating file system permissions. EFS ignores the NFS client’s IDs. For more information about access points, see https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html.
As a consequence, EFS volumes silently ignore FSGroup; OKD is not able to replace the GIDs of files on the volume with FSGroup. Any pod that can access a mounted EFS access point can access any file on it.
Unrelated to this, encryption in transit is enabled by default. For more information, see https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html.
AWS EFS troubleshooting
The following information provides guidance on how to troubleshoot issues with AWS EFS:
The AWS EFS Operator and CSI driver run in the openshift-cluster-csi-drivers namespace.
To initiate gathering of logs of the AWS EFS Operator and CSI driver, run the following command:
$ oc adm must-gather
[must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5
[must-gather ] OUT namespace/openshift-must-gather-xm4wq created
[must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created
[must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created
To show AWS EFS Operator errors, view the ClusterCSIDriver status:
$ oc get clustercsidriver efs.csi.aws.com -o yaml
If a volume cannot be mounted to a pod (as shown in the output of the following command):
$ oc describe pod
...
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m13s default-scheduler Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal
Warning FailedMount 13s kubelet MountVolume.SetUp failed for volume "pvc-d7c097e6-67ec-4fae-b968-7e7056796449" : rpc error: code = DeadlineExceeded desc = context deadline exceeded (1)
Warning FailedMount 10s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition
1 Warning message indicating volume not mounted. This error is frequently caused by AWS dropping packets between an OKD node and AWS EFS.
Check that the following are correct; a CLI verification sketch follows this list:
AWS firewall and Security Groups
Networking: port number and IP addresses
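You can inspect both from the AWS CLI; a sketch with placeholder IDs:
$ aws efs describe-mount-targets --file-system-id <file_system_id>
$ aws ec2 describe-security-groups --group-ids <security_group_id>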
Uninstalling the AWS EFS CSI Driver Operator
All EFS PVs are inaccessible after uninstalling the AWS EFS CSI Driver Operator.
Prerequisites
- Access to the OKD web console.
Procedure
To uninstall the AWS EFS CSI Driver Operator from the web console:
Log in to the web console.
Stop all applications that use AWS EFS PVs.
Delete all AWS EFS PVs:
Click Storage → PersistentVolumeClaims.
Select each PVC that is in use by the AWS EFS CSI Driver Operator, click the drop-down menu on the far right of the PVC, and then click Delete PersistentVolumeClaims.
Uninstall the AWS EFS CSI Driver:
Before you can uninstall the Operator, you must remove the CSI driver first.
Click Administration → CustomResourceDefinitions → ClusterCSIDriver.
On the Instances tab, for efs.csi.aws.com, on the far left side, click the drop-down menu, and then click Delete ClusterCSIDriver.
When prompted, click Delete.
Uninstall the AWS EFS CSI Operator:
Click Operators → Installed Operators.
On the Installed Operators page, scroll or type AWS EFS CSI into the Search by name box to find the Operator, and then click it.
In the upper right of the Installed Operators > Operator details page, click Actions → Uninstall Operator.
When prompted on the Uninstall Operator window, click the Uninstall button to remove the Operator from the namespace. Any applications deployed by the Operator on the cluster need to be cleaned up manually.
After uninstalling, the AWS EFS CSI Driver Operator is no longer listed in the Installed Operators section of the web console.
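To confirm that the EFS driver and Operator workloads are gone, you can check their namespace; a hedged example (other CSI drivers in the namespace are expected to remain):
$ oc get deployments -n openshift-cluster-csi-drivers | grep efs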
Before you can destroy a cluster (openshift-install destroy cluster), you must delete the EFS volume in AWS. An OKD cluster cannot be destroyed if there is an EFS volume that uses the cluster’s VPC, because AWS does not allow deletion of such a VPC.