- Configuring the registry for vSphere
- Image registry removed during installation
- Changing the image registry’s management state
- Image registry storage configuration
- Configuring registry storage for VMware vSphere
- Configuring storage for the image registry in non-production clusters
- Configuring block registry storage for VMware vSphere
- Configuring the Image Registry Operator to use Ceph RGW storage with Red Hat OpenShift Data Foundation
- Configuring the Image Registry Operator to use Noobaa storage with Red Hat OpenShift Data Foundation
- Configuring the Image Registry Operator to use CephFS storage with Red Hat OpenShift Data Foundation
- Additional resources
Configuring the registry for vSphere
Image registry removed during installation
On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed. This allows openshift-installer to complete installations on these platform types.
After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed.
The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags, BuildConfigs and DeploymentConfigs which reference this registry will not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io."
Changing the image registry’s management state
To start the image registry, you must change the Image Registry Operator configuration’s managementState from Removed to Managed.
Procedure
Change managementState in the Image Registry Operator configuration from Removed to Managed. For example:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'
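To confirm the change, you can read the field back; for example:
$ oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.managementState}{"\n"}'
The command should return Managed.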
Image registry storage configuration
The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available.
Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.
Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.
Configuring registry storage for VMware vSphere
As a cluster administrator, you must configure your registry to use storage after installation.
Prerequisites
Cluster administrator permissions.
A cluster on VMware vSphere.
Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation.
OKD supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required.
Must have "100Gi" capacity.
Testing shows issues with using the NFS server on RHEL as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OKD core components.
Procedure
To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource.
When using shared storage, review your security settings to prevent outside access.
Verify that you do not have a registry pod:
$ oc get pod -n openshift-image-registry -l docker-registry=default
Example output
No resources found in openshift-image-registry namespace
If you do have a registry pod in your output, you do not need to continue with this procedure.
Check the registry configuration:
$ oc edit configs.imageregistry.operator.openshift.io
Example output
storage:
  pvc:
    claim: (1)
1 Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC.
Check the clusteroperator status:
$ oc get clusteroperator image-registry
Example output
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
image-registry 4.7 True False False 6h50m
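Optionally, you can also confirm that the Operator created and bound the default PVC (a supplementary check; exact output varies):
$ oc get pvc -n openshift-image-registry
The image-registry-storage claim should report a Bound status once a persistent volume is provisioned.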
Configuring storage for the image registry in non-production clusters
You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.
Procedure
To set the image registry storage to an empty directory:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
Configure this option for only non-production clusters.
If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:
Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found
Wait a few minutes and run the command again.
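Once the patch succeeds, you can read the storage configuration back to confirm that emptyDir is set; for example:
$ oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.storage}{"\n"}'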
Configuring block registry storage for VMware vSphere
As a cluster administrator, you can use the Recreate rollout strategy to allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades.
Block storage volumes are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.
Procedure
To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica:
$ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}'
Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.
Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: image-registry-storage (1)
  namespace: openshift-image-registry (2)
spec:
  accessModes:
  - ReadWriteOnce (3)
  resources:
    requests:
      storage: 100Gi (4)
1 A unique name that represents the PersistentVolumeClaim object.
2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry.
3 The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node.
4 The size of the persistent volume claim.
Create the PersistentVolumeClaim object from the file:
$ oc create -f pvc.yaml -n openshift-image-registry
Edit the registry configuration so that it references the correct PVC:
$ oc edit config.imageregistry.operator.openshift.io -o yaml
Example output
storage:
  pvc:
    claim: (1)
1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC.
For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere.
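Instead of leaving the claim field blank, you can also reference the PVC explicitly by name. The following is a minimal sketch that points the registry at the image-registry-storage PVC from the earlier example; substitute your own PVC name if it differs:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge -p '{"spec":{"storage":{"pvc":{"claim":"image-registry-storage"}}}}'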
Configuring the Image Registry Operator to use Ceph RGW storage with Red Hat OpenShift Data Foundation
Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the internal image registry:
Ceph, a shared and distributed file system and on-premises object storage
NooBaa, providing a Multicloud Object Gateway
This document outlines the procedure to configure the image registry to use Ceph RGW storage.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
You have access to the OKD web console.
You installed the oc CLI.
You installed the OpenShift Data Foundation Operator to provide object storage and Ceph RGW object storage.
Procedure
Create the object bucket claim using the ocs-storagecluster-ceph-rgw storage class. For example:
cat <<EOF | oc apply -f -
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: rgwtest
  namespace: openshift-storage
spec:
  storageClassName: ocs-storagecluster-ceph-rgw
  generateBucketName: rgwtest
EOF
Get the bucket name by entering the following command:
$ bucket_name=$(oc get obc -n openshift-storage rgwtest -o jsonpath='{.spec.bucketName}')
Get the AWS credentials by entering the following commands:
$ AWS_ACCESS_KEY_ID=$(oc get secret -n openshift-storage rgwtest -o yaml | grep -w "AWS_ACCESS_KEY_ID:" | head -n1 | awk '{print $2}' | base64 --decode)
$ AWS_SECRET_ACCESS_KEY=$(oc get secret -n openshift-storage rgwtest -o yaml | grep -w "AWS_SECRET_ACCESS_KEY:" | head -n1 | awk '{print $2}' | base64 --decode)
Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under the openshift-image-registry project by entering the following command:
$ oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=${AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=${AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry
Create a reencrypt route for Ceph RGW by entering the following command:
$ oc create route reencrypt <route_name> --service=rook-ceph-rgw-ocs-storagecluster-cephobjectstore --port=https -n openshift-storage
Get the route host by entering the following command:
$ route_host=$(oc get route <route_name> -n openshift-storage -o=jsonpath='{.spec.host}')
Create a config map that uses an ingress certificate by entering the following commands:
$ oc extract secret/router-certs-default -n openshift-ingress --confirm
$ oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config
Configure the image registry to use the Ceph RGW object storage by entering the following command:
$ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"${bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://${route_host}\"',"virtualHostedStyle":false,"encrypt":true,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge
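After the patch is applied, you can verify that the Operator reports the registry as available; the rollout can take a few minutes:
$ oc get clusteroperator image-registry
$ oc get pods -n openshift-image-registry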
Configuring the Image Registry Operator to use Noobaa storage with Red Hat OpenShift Data Foundation
Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the internal image registry:
Ceph, a shared and distributed file system and on-premises object storage
NooBaa, providing a Multicloud Object Gateway
This document outlines the procedure to configure the image registry to use Noobaa storage.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
You have access to the OKD web console.
You installed the oc CLI.
You installed the OpenShift Data Foundation Operator to provide object storage and Noobaa object storage.
Procedure
Create the object bucket claim using the openshift-storage.noobaa.io storage class. For example:
cat <<EOF | oc apply -f -
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: noobaatest
  namespace: openshift-storage
spec:
  storageClassName: openshift-storage.noobaa.io
  generateBucketName: noobaatest
EOF
Get the bucket name by entering the following command:
$ bucket_name=$(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')
Get the AWS credentials by entering the following commands:
$ AWS_ACCESS_KEY_ID=$(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_ACCESS_KEY_ID:" | head -n1 | awk '{print $2}' | base64 --decode)
$ AWS_SECRET_ACCESS_KEY=$(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_SECRET_ACCESS_KEY:" | head -n1 | awk '{print $2}' | base64 --decode)
Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under the openshift-image-registry project by entering the following command:
$ oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=${AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=${AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry
Get the route host by entering the following command:
$ route_host=$(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')
Create a config map that uses an ingress certificate by entering the following commands:
$ oc extract secret/router-certs-default -n openshift-ingress --confirm
$ oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config
Configure the image registry to use the Noobaa object storage by entering the following command:
$ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"${bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://${route_host}\"',"virtualHostedStyle":false,"encrypt":true,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge
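To confirm that the registry picked up the NooBaa bucket, you can, for example, read back the configured bucket name:
$ oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.storage.s3.bucket}{"\n"}'
The output should match the value of ${bucket_name} from the earlier step.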
Configuring the Image Registry Operator to use CephFS storage with Red Hat OpenShift Data Foundation
Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the internal image registry:
Ceph, a shared and distributed file system and on-premises object storage
NooBaa, providing a Multicloud Object Gateway
This document outlines the procedure to configure the image registry to use CephFS storage.
CephFS uses persistent volume claim (PVC) storage. It is not recommended to use PVCs for image registry storage if other options, such as Ceph RGW or Noobaa, are available.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
You have access to the OKD web console.
You installed the oc CLI.
You installed the OpenShift Data Foundation Operator to provide object storage and CephFS file storage.
Procedure
Create a PVC to use the cephfs storage class. For example:
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-storage-pvc
  namespace: openshift-image-registry
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
Configure the image registry to use the CephFS file system storage by entering the following command:
$ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","pvc":{"claim":"registry-storage-pvc"}}}}' --type=merge
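You can then confirm that the claim is bound and that the registry pods are running; for example:
$ oc get pvc registry-storage-pvc -n openshift-image-registry
$ oc get pods -n openshift-image-registry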