Configuring persistent storage
Metering is a deprecated feature. Deprecated functionality is still included in OKD and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OKD, refer to the Deprecated and removed features section of the OKD release notes.
Metering requires persistent storage to persist data collected by the Metering Operator and to store the results of reports. A number of different storage providers and storage formats are supported. Select your storage provider and modify the example configuration files to configure persistent storage for your metering installation.
Storing data in Amazon S3
Metering can use an existing Amazon S3 bucket or create a bucket for storage.
Metering does not manage or delete any S3 bucket data. You must manually clean up S3 buckets that are used to store metering data.
Procedure
Edit the
spec.storage
section in thes3-storage.yaml
file:Example
s3-storage.yaml
fileapiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
name: "operator-metering"
spec:
storage:
type: "hive"
hive:
type: "s3"
s3:
bucket: "bucketname/path/" (1)
region: "us-west-1" (2)
secretName: "my-aws-secret" (3)
# Set to false if you want to provide an existing bucket, instead of
# having metering create the bucket on your behalf.
createBucket: true (4)
1 Specify the name of the bucket where you would like to store your data. Optional: Specify the path within the bucket. 2 Specify the region of your bucket. 3 The name of a secret in the metering namespace containing the AWS credentials in the data.aws-access-key-id
anddata.aws-secret-access-key
fields. See the exampleSecret
object below for more details.4 Set this field to false
if you want to provide an existing S3 bucket, or if you do not want to provide IAM credentials that haveCreateBucket
permissions.Use the following
Secret
object as a template:Example AWS
Secret
objectapiVersion: v1
kind: Secret
metadata:
name: my-aws-secret
data:
aws-access-key-id: "dGVzdAo="
aws-secret-access-key: "c2VjcmV0Cg=="
The values of the
aws-access-key-id
andaws-secret-access-key
must be base64 encoded.Create the secret:
$ oc create secret -n openshift-metering generic my-aws-secret \
--from-literal=aws-access-key-id=my-access-key \
--from-literal=aws-secret-access-key=my-secret-key
This command automatically base64 encodes your aws-access-key-id and aws-secret-access-key values.
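If you populate the data fields manually instead of using oc, the values must be base64 encoded. A quick sketch for checking and producing these values; my-access-key is a placeholder credential:

```shell
# Decode the example values from the Secret template above to see what they
# contain. Both were produced with a plain `echo`, so each decodes to the
# string plus a trailing newline.
echo "dGVzdAo=" | base64 -d        # prints: test
echo "c2VjcmV0Cg==" | base64 -d    # prints: secret

# When encoding real credentials yourself, use `echo -n` (or printf) so a
# stray newline is not embedded in the credential value:
echo -n "my-access-key" | base64
```

An accidental trailing newline in a credential is a common cause of authentication failures, which is why `oc create secret --from-literal` is the safer path.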
The aws-access-key-id and aws-secret-access-key credentials must have read and write access to the bucket. The following aws/read-write.json file shows an IAM policy that grants the required permissions:

Example aws/read-write.json file
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:HeadBucket",
        "s3:ListBucket",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::operator-metering-data/*",
        "arn:aws:s3:::operator-metering-data"
      ]
    }
  ]
}
If spec.storage.hive.s3.createBucket is set to true or unset in your s3-storage.yaml file, then you should use the aws/read-write-create.json file that contains permissions for creating and deleting buckets:

Example aws/read-write-create.json file
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:HeadBucket",
        "s3:ListBucket",
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::operator-metering-data/*",
        "arn:aws:s3:::operator-metering-data"
      ]
    }
  ]
}
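A malformed policy is rejected at attach time, so it can help to check the file locally first. This sketch assumes python3 and grep are available on your workstation and uses the file name and example bucket name from the policy above:

```shell
# Write the createBucket policy (a copy of the example above) to a file.
cat > read-write-create.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:HeadBucket",
        "s3:ListBucket",
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::operator-metering-data/*",
        "arn:aws:s3:::operator-metering-data"
      ]
    }
  ]
}
EOF

# Confirm the policy parses as JSON; a syntax error makes this command fail.
python3 -m json.tool read-write-create.json > /dev/null && echo "valid JSON"

# Confirm the bucket-management actions are present.
grep -q '"s3:CreateBucket"' read-write-create.json && echo "CreateBucket present"
grep -q '"s3:DeleteBucket"' read-write-create.json && echo "DeleteBucket present"
```

Remember to replace operator-metering-data in the Resource ARNs with your own bucket name.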
Storing data in S3-compatible storage
You can use S3-compatible storage such as Noobaa.
Procedure
1. Edit the spec.storage section in the s3-compatible-storage.yaml file:

Example s3-compatible-storage.yaml file

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "s3Compatible"
      s3Compatible:
        bucket: "bucketname" (1)
        endpoint: "http://example:port-number" (2)
        secretName: "my-aws-secret" (3)

(1) Specify the name of your S3-compatible bucket.
(2) Specify the endpoint for your storage.
(3) The name of a secret in the metering namespace containing the AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example Secret object below for more details.

2. Use the following Secret object as a template:

Example S3-compatible Secret object

apiVersion: v1
kind: Secret
metadata:
  name: my-aws-secret
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="
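As with Amazon S3, you can either base64 encode the credentials yourself or let oc do it when creating the secret. The sketch below uses placeholder credentials (my-access-key, my-secret-key):

```shell
# Produce the base64 values for the data fields of the Secret object.
# "my-access-key" and "my-secret-key" are placeholders for your
# S3-compatible storage credentials.
echo -n "my-access-key" | base64    # value for aws-access-key-id
echo -n "my-secret-key" | base64    # value for aws-secret-access-key

# Or create the secret in one step; oc base64 encodes the literals for you:
# oc create secret -n openshift-metering generic my-aws-secret \
#   --from-literal=aws-access-key-id=my-access-key \
#   --from-literal=aws-secret-access-key=my-secret-key
```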
Storing data in Microsoft Azure
To store data in Azure blob storage, you must use an existing container.
Procedure
1. Edit the spec.storage section in the azure-blob-storage.yaml file:

Example azure-blob-storage.yaml file

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "azure"
      azure:
        container: "bucket1" (1)
        secretName: "my-azure-secret" (2)
        rootDirectory: "/testDir" (3)

(1) Specify the container name.
(2) Specify a secret in the metering namespace. See the example Secret object below for more details.
(3) Optional: Specify the directory where you would like to store your data.

2. Use the following Secret object as a template:

Example Azure Secret object

apiVersion: v1
kind: Secret
metadata:
  name: my-azure-secret
data:
  azure-storage-account-name: "dGVzdAo="
  azure-secret-access-key: "c2VjcmV0Cg=="

3. Create the secret:
$ oc create secret -n openshift-metering generic my-azure-secret \
--from-literal=azure-storage-account-name=my-storage-account-name \
--from-literal=azure-secret-access-key=my-secret-key
Storing data in Google Cloud Storage
To store your data in Google Cloud Storage, you must use an existing bucket.
Procedure
1. Edit the spec.storage section in the gcs-storage.yaml file:

Example gcs-storage.yaml file

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "gcs"
      gcs:
        bucket: "metering-gcs/test1" (1)
        secretName: "my-gcs-secret" (2)

(1) Specify the name of the bucket. You can optionally specify the directory within the bucket where you would like to store your data.
(2) Specify a secret in the metering namespace. See the example Secret object below for more details.

2. Use the following Secret object as a template:

Example Google Cloud Storage Secret object

apiVersion: v1
kind: Secret
metadata:
  name: my-gcs-secret
data:
  gcs-service-account.json: "c2VjcmV0Cg=="

3. Create the secret:
$ oc create secret -n openshift-metering generic my-gcs-secret \
--from-file gcs-service-account.json=/path/to/my/service-account-key.json
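You can sanity-check the key file before creating the secret. The sketch below writes a mock key whose field values are placeholders; real Google Cloud service account keys are JSON documents whose type field is service_account:

```shell
# Write a mock service account key for illustration; in practice, use the
# JSON key file that you downloaded from Google Cloud.
cat > service-account-key.json <<'EOF'
{
  "type": "service_account",
  "project_id": "my-project",
  "client_email": "metering@my-project.iam.gserviceaccount.com"
}
EOF

# A usable key parses as JSON and identifies itself as a service account key.
python3 -m json.tool service-account-key.json > /dev/null && echo "valid JSON"
grep -q '"type": "service_account"' service-account-key.json && echo "service account key"
```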
Storing data in shared volumes
Metering does not configure storage by default. However, you can use any ReadWriteMany persistent volume (PV) or any storage class that provisions a ReadWriteMany PV for metering storage.
Using NFS in production is not recommended. Using an NFS server on RHEL as a storage back end can fail to meet metering requirements and to provide the performance that is needed for the Metering Operator to work appropriately. Other NFS implementations on the marketplace, such as Parallel Network File System (pNFS), might not have these issues. pNFS is an NFS implementation with distributed and parallel capability. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against OKD core components.
Procedure
1. Modify the shared-storage.yaml file to use a ReadWriteMany persistent volume for storage:

Example shared-storage.yaml file

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "sharedPVC"
      sharedPVC:
        claimName: "metering-nfs" (1)
        # Uncomment the lines below to provision a new PVC using the specified storageClass. (2)
        # createPVC: true
        # storageClass: "my-nfs-storage-class"
        # size: 5Gi
Select one of the configuration options below:
(1) Set storage.hive.sharedPVC.claimName to the name of an existing ReadWriteMany persistent volume claim (PVC). This configuration is necessary if you do not have dynamic volume provisioning or want to have more control over how the persistent volume is created.
(2) Set storage.hive.sharedPVC.createPVC to true and set storage.hive.sharedPVC.storageClass to the name of a storage class with ReadWriteMany access mode. This configuration uses dynamic volume provisioning to create a volume automatically.

2. Create the following resource objects that are required to deploy an NFS server for metering. Use the oc create -f <file-name>.yaml command to create the object YAML files.

a. Configure a PersistentVolume resource object:

Example nfs_persistentvolume.yaml file

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
  labels:
    role: nfs-server
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-server (1)
  nfs:
    path: "/"
    server: REPLACEME
  persistentVolumeReclaimPolicy: Delete
(1) Must exactly match the [kind: StorageClass].metadata.name field value.

b. Configure a Pod resource object with the nfs-server role:

Example nfs_server.yaml file

apiVersion: v1
kind: Pod
metadata:
  name: nfs-server
  labels:
    role: nfs-server
spec:
  containers:
    - name: nfs-server
      image: <image_name> (1)
      imagePullPolicy: IfNotPresent
      ports:
        - name: nfs
          containerPort: 2049
      securityContext:
        privileged: true
      volumeMounts:
        - mountPath: "/mnt/data"
          name: local
  volumes:
    - name: local
      emptyDir: {}
(1) Specify your NFS server image.

c. Configure a Service resource object with the nfs-server role:

Example nfs_service.yaml file

apiVersion: v1
kind: Service
metadata:
  name: nfs-service
  labels:
    role: nfs-server
spec:
  ports:
    - name: 2049-tcp
      port: 2049
      protocol: TCP
      targetPort: 2049
  selector:
    role: nfs-server
  sessionAffinity: None
  type: ClusterIP
d. Configure a StorageClass resource object:

Example nfs_storageclass.yaml file

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-server (1)
provisioner: example.com/nfs
parameters:
  archiveOnDelete: "false"
reclaimPolicy: Delete
volumeBindingMode: Immediate
(1) Must exactly match the [kind: PersistentVolume].spec.storageClassName field value.
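If you use an existing PVC through storage.hive.sharedPVC.claimName instead of provisioning one dynamically, the claim might look like the following sketch. The namespace, storage class name, and size are illustrative assumptions; the claim only needs the ReadWriteMany access mode:

```shell
# Write an example ReadWriteMany PVC manifest that matches the
# claimName "metering-nfs" used in shared-storage.yaml. The storageClassName
# and size are assumptions; adjust them for your environment.
cat > metering-nfs-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: metering-nfs
  namespace: openshift-metering
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-server
  resources:
    requests:
      storage: 5Gi
EOF

# Review the manifest, then create the claim with:
# oc create -f metering-nfs-pvc.yaml
cat metering-nfs-pvc.yaml
```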
Configuration of your NFS storage, and any relevant resource objects, will vary depending on the NFS server image that you use for metering storage.