- Docker tasks
- Increasing container storage
- Managing container registry certificates
- Managing container registries
Docker tasks
OKD uses container engines (CRI-O or Docker) to run applications in pods that are composed of one or more containers.
As a cluster administrator, you sometimes need to apply extra configuration to the container engine so that it runs elements of the OKD installation efficiently.
Increasing container storage
Increasing the amount of storage available ensures continued deployment without any outages. To do so, a free partition must be made available that contains an appropriate amount of free capacity.
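For example, you can check how much free capacity is available before starting. The following is a minimal sketch, assuming an LVM-backed Docker storage setup; the mount path shown is an example:
$ sudo lsblk
$ sudo vgs
$ sudo df -h /var/lib/docker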
Evacuating the node
Procedure
From a master instance, or as a cluster administrator, allow the evacuation of any pod from the node and disable scheduling of other pods on that node:
$ NODE=ose-app-node01.example.com
$ oc adm manage-node ${NODE} --schedulable=false
NAME STATUS AGE VERSION
ose-app-node01.example.com Ready,SchedulingDisabled 20m v1.6.1+5115d708d7
$ oc adm drain ${NODE} --ignore-daemonsets
node "ose-app-node01.example.com" already cordoned
pod "perl-1-build" evicted
pod "perl-1-3lnsh" evicted
pod "perl-1-9jzd8" evicted
node "ose-app-node01.example.com" drained
If there are containers running with local volumes that will not migrate, run the following command:
$ oc adm drain ${NODE} --ignore-daemonsets --delete-local-data
List the pods on the node to verify that they have been removed:
$ oc adm manage-node ${NODE} --list-pods
Listing matched pods on node: ose-app-node01.example.com
NAME READY STATUS RESTARTS AGE
Repeat the previous two steps for each node.
For more information on evacuating and draining pods or nodes, see Node maintenance.
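If several application nodes need the same treatment, the two evacuation commands can be wrapped in a simple loop. The following is a sketch; the node names are examples:
$ for NODE in ose-app-node01.example.com ose-app-node02.example.com; do
    oc adm manage-node ${NODE} --schedulable=false
    oc adm drain ${NODE} --ignore-daemonsets
  done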
Increasing storage
You can increase Docker storage in two ways: attaching a new disk, or extending the existing disk.
Increasing storage with a new disk
Prerequisites
A new disk must be available to the existing instance that requires more storage. In the following steps, the original disk is labeled /dev/xvdb, and the new disk is labeled /dev/xvdd, as shown in the /etc/sysconfig/docker-storage-setup file:
# vi /etc/sysconfig/docker-storage-setup
DEVS="/dev/xvdb /dev/xvdd"
The process may differ depending on the underlying OKD infrastructure.
Procedure
Stop the docker service:
# systemctl stop docker
Stop the node service by removing the pod definition and rebooting the host:
# mkdir -p /etc/origin/node/pods-stopped
# mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
Run the docker-storage-setup command to extend the volume groups and logical volumes associated with container storage:
# docker-storage-setup
For thin pool setups, you should see the following output and can proceed to the next step:
INFO: Volume group backing root filesystem could not be determined
INFO: Device /dev/xvdb is already partitioned and is part of volume group docker_vol
INFO: Device node /dev/xvdd1 exists.
Physical volume "/dev/xvdd1" successfully created.
Volume group "docker_vol" successfully extended
For setups that use the overlay2 driver on an XFS file system, the increase shown in the previous output will not be visible.
You must perform the following steps to extend and grow the XFS storage:
Run the lvextend command to grow the logical volume to use all of the available space in the volume group:
# lvextend -r -l +100%FREE /dev/mapper/docker_vol-dockerlv
If you require less space, choose the FREE percentage accordingly.
Run the xfs_growfs command to grow the file system to use the available space:
# xfs_growfs /dev/mapper/docker_vol-dockerlv
XFS file systems cannot be shrunk.
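Optionally, confirm that the file system grew as expected. A quick check, assuming the logical volume is mounted at /var/lib/docker (adjust the path for your setup):
# df -h /var/lib/docker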
Run the docker-storage-setup command again:
# docker-storage-setup
You should now see the extended volume groups and logical volumes in the output.
INFO: Device /dev/vdb is already partitioned and is part of volume group docker_vg
INFO: Found an already configured thin pool /dev/mapper/docker_vg-docker--pool in /etc/sysconfig/docker-storage
INFO: Device node /dev/mapper/docker_vg-docker--pool exists.
Logical volume docker_vg/docker-pool changed.
Start the docker service:
# systemctl start docker
Verify the volume group:
# vgs
VG #PV #LV #SN Attr VSize VFree
docker_vol 2 1 0 wz--n- 64.99g <55.00g
Restart the node service:
# systemctl restart atomic-openshift-node.service
A benefit of adding a disk, compared to creating a new volume group and re-running docker-storage-setup, is that the images that were used on the system still exist after the new storage has been added:
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-registry.default.svc:5000/tet/perl latest 8b0b0106fb5e 13 minutes ago 627.4 MB
registry.redhat.io/rhscl/perl-524-rhel7 <none> 912b01ac7570 6 days ago 559.5 MB
registry.redhat.io/openshift3/ose-deployer v3.6.173.0.21 89fd398a337d 5 weeks ago 970.2 MB
registry.redhat.io/openshift3/ose-pod v3.6.173.0.21 63accd48a0d7 5 weeks ago 208.6 MB
With the increase in storage capacity, enable the node to be schedulable in order to accept new incoming pods.
As a cluster administrator, run the following from a master instance:
$ oc adm manage-node ${NODE} --schedulable=true
ose-master01.example.com Ready,SchedulingDisabled 24m v1.6.1+5115d708d7
ose-master02.example.com Ready,SchedulingDisabled 24m v1.6.1+5115d708d7
ose-master03.example.com Ready,SchedulingDisabled 24m v1.6.1+5115d708d7
ose-infra-node01.example.com Ready 24m v1.6.1+5115d708d7
ose-infra-node02.example.com Ready 24m v1.6.1+5115d708d7
ose-infra-node03.example.com Ready 24m v1.6.1+5115d708d7
ose-app-node01.example.com Ready 24m v1.6.1+5115d708d7
ose-app-node02.example.com Ready 24m v1.6.1+5115d708d7
Increasing storage by extending the existing disk
Evacuate the node following the previous steps.
Stop the docker service:
# systemctl stop docker
Stop the node service by removing the pod definition:
# mkdir -p /etc/origin/node/pods-stopped
# mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
Resize the existing disk as desired. This can depend on your environment:
- If you are using LVM (Logical Volume Manager), remove the logical volume, the Docker volume group, and the physical volume:
# lvremove /dev/docker_vg/docker/lv
# vgremove docker_vg
# pvremove /dev/<my_previous_disk_device>
- If you are using a cloud provider, you can detach the disk, destroy it, create a new, bigger disk, and attach it to the instance.
- For a non-cloud environment, the disk and file system can be resized. See the following solution for more information:
https://access.redhat.com/solutions/199573
Verify that the /etc/sysconfig/container-storage-setup file is correctly configured for the new disk by checking the device name, size, etc.
Run the docker-storage-setup command to reconfigure the new disk:
# docker-storage-setup
INFO: Volume group backing root filesystem could not be determined
INFO: Device /dev/xvdb is already partitioned and is part of volume group docker_vol
INFO: Device node /dev/xvdd1 exists.
Physical volume "/dev/xvdd1" successfully created.
Volume group "docker_vol" successfully extended
Start the docker service:
# systemctl start docker
Verify the volume group:
# vgs
VG #PV #LV #SN Attr VSize VFree
docker_vol 2 1 0 wz--n- 64.99g <55.00g
Restart the node service:
# systemctl restart atomic-openshift-node.service
Changing the storage backend
With the advancements of services and file systems, changes in a storage backend may be necessary to take advantage of new features. The following steps provide an example of changing a device mapper backend to an overlay2
storage backend. overlay2
offers increased speed and density over traditional device mapper.
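Before starting, you can confirm which storage driver the daemon currently uses. A simple check, assuming the docker daemon is still running on the node:
# docker info | grep -i 'storage driver'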
Evacuating the node
From a master instance, or as a cluster administrator, allow the evacuation of any pod from the node and disable scheduling of other pods on that node:
$ NODE=ose-app-node01.example.com
$ oc adm manage-node ${NODE} --schedulable=false
NAME STATUS AGE VERSION
ose-app-node01.example.com Ready,SchedulingDisabled 20m v1.6.1+5115d708d7
$ oc adm drain ${NODE} --ignore-daemonsets
node "ose-app-node01.example.com" already cordoned
pod "perl-1-build" evicted
pod "perl-1-3lnsh" evicted
pod "perl-1-9jzd8" evicted
node "ose-app-node01.example.com" drained
If there are containers running with local volumes that will not migrate, run the following command:
$ oc adm drain ${NODE} --ignore-daemonsets --delete-local-data
List the pods on the node to verify that they have been removed:
$ oc adm manage-node ${NODE} --list-pods
Listing matched pods on node: ose-app-node01.example.com
NAME READY STATUS RESTARTS AGE
For more information on evacuating and draining pods or nodes, see Node maintenance.
With no containers currently running on the instance, stop the docker service:
# systemctl stop docker
Stop the node service by removing the pod definition:
# mkdir -p /etc/origin/node/pods-stopped
# mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
Verify the name of the volume group, logical volume name, and physical volume name:
# vgs
VG #PV #LV #SN Attr VSize VFree
docker_vol 1 1 0 wz--n- <25.00g 15.00g
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
dockerlv docker_vol -wi-ao---- <10.00g
Remove the logical volume and the volume group:
# lvremove /dev/docker_vol/docker-pool -y
# vgremove docker_vol -y
Identify and remove the physical volume:
# pvs
PV VG Fmt Attr PSize PFree
/dev/xvdb1 docker_vol lvm2 a-- <25.00g 15.00g
# pvremove /dev/xvdb1 -y
Remove the current Docker storage:
# rm -Rf /var/lib/docker/*
# rm -f /etc/sysconfig/docker-storage
Modify the /etc/sysconfig/docker-storage-setup file to specify the STORAGE_DRIVER:
DEVS=/dev/xvdb
VG=docker_vol
DATA_SIZE=95%VG
STORAGE_DRIVER=overlay2
CONTAINER_ROOT_LV_NAME=dockerlv
CONTAINER_ROOT_LV_MOUNT_PATH=/var/lib/docker
CONTAINER_ROOT_LV_SIZE=100%FREE
When a system is upgraded from Red Hat Enterprise Linux version 7.3 to 7.4, the docker service attempts to use /var with a STORAGE_DRIVER of extfs. Using extfs as the STORAGE_DRIVER causes errors. See the associated bug report for more information about the error.
Set up the storage:
# docker-storage-setup
Start the docker service:
# systemctl start docker
Restart the node service:
# systemctl restart atomic-openshift-node.service
With the storage modified to use overlay2, enable the node to be schedulable in order to accept new incoming pods.
From a master instance, or as a cluster administrator:
$ oc adm manage-node ${NODE} --schedulable=true
ose-master01.example.com Ready,SchedulingDisabled 24m v1.6.1+5115d708d7
ose-master02.example.com Ready,SchedulingDisabled 24m v1.6.1+5115d708d7
ose-master03.example.com Ready,SchedulingDisabled 24m v1.6.1+5115d708d7
ose-infra-node01.example.com Ready 24m v1.6.1+5115d708d7
ose-infra-node02.example.com Ready 24m v1.6.1+5115d708d7
ose-infra-node03.example.com Ready 24m v1.6.1+5115d708d7
ose-app-node01.example.com Ready 24m v1.6.1+5115d708d7
ose-app-node02.example.com Ready 24m v1.6.1+5115d708d7
Managing container registry certificates
An OKD internal registry is created as a pod. However, containers may be pulled from external registries if desired. By default, registries listen on TCP port 5000. Registries provide the option of securing exposed images via TLS or running a registry without encrypting traffic.
Docker interprets .crt files as CA certificates and .cert files as client certificates.
Installing a certificate authority certificate for external registries
In order to use OKD with an external registry, the registry certificate authority (CA) certificate must be trusted by all the nodes that can pull images from the registry.
Depending on the Docker version, the process to trust a container image registry varies. In recent versions of Docker, the root certificate authorities are merged with the system defaults. For docker versions prior to 1.13, additional steps are required, as described later in this procedure.
Procedure
Copy the CA certificate to /etc/pki/ca-trust/source/anchors/:
$ sudo cp myregistry.example.com.crt /etc/pki/ca-trust/source/anchors/
Extract and add the CA certificate to the list of trusted certificate authorities:
$ sudo update-ca-trust extract
Verify the SSL certificate using the openssl command:
$ openssl verify myregistry.example.com.crt
myregistry.example.com.crt: OK
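You can also inspect the certificate chain that the registry actually presents over TLS. The following is an example check, assuming the registry listens on the default port 5000:
$ openssl s_client -connect myregistry.example.com:5000 -showcerts </dev/null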
Once the certificate is in place and the trust is updated, restart the docker service to ensure the new certificates are properly set:
$ sudo systemctl restart docker.service
For Docker versions prior to 1.13, perform the following additional steps to trust the certificate authorities:
On every node, create a new directory in /etc/docker/certs.d, where the name of the directory is the host name of the container image registry:
$ sudo mkdir -p /etc/docker/certs.d/myregistry.example.com
The port number is not required unless the container image registry cannot be accessed without it. In that case, include the port in the directory name when addressing the registry:
myregistry.example.com:port
Accessing the container image registry by IP address requires the creation of a new directory within /etc/docker/certs.d on every node, where the name of the directory is the IP address of the container image registry:
$ sudo mkdir -p /etc/docker/certs.d/10.10.10.10
Copy the CA certificate to the newly created Docker directories from the previous steps:
$ sudo cp myregistry.example.com.crt \
/etc/docker/certs.d/myregistry.example.com/ca.crt
$ sudo cp myregistry.example.com.crt /etc/docker/certs.d/10.10.10.10/ca.crt
Once the certificates have been copied, restart the docker service to ensure the new certificates are used:
$ sudo systemctl restart docker.service
Docker certificates backup
When performing a node host backup, be sure to include the certificates for any external registries.
Procedure
If using /etc/docker/certs.d, copy all the certificates included in the directory and store the files:
$ sudo tar -czvf docker-registry-certs-$(hostname)-$(date +%Y%m%d).tar.gz /etc/docker/certs.d/
If using a system trust, store the certificates prior to adding them to the system trust. Once the store is complete, extract the certificate for restoration using the trust command. Identify the system trust CAs and note the pkcs11 ID:
$ trust list
...[OUTPUT OMITTED]...
pkcs11:id=%a5%b3%e1%2b%2b%49%b6%d7%73%a1%aa%94%f5%01%e7%73%65%4c%ac%50;type=cert
type: certificate
label: MyCA
trust: anchor
category: authority
...[OUTPUT OMITTED]...
Extract the certificate in pem format and give it a name, for example, myca.crt:
$ trust extract --format=pem-bundle \
--filter="%a5%b3%e1%2b%2b%49%b6%d7%73%a1%aa%94%f5%01%e7%73%65%4c%ac%50;type=cert" myca.crt
Verify the certificate has been properly extracted via openssl:
$ openssl verify myca.crt
Repeat the procedure for all the required certificates and store the files in a remote location.
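Any copy mechanism can be used to store the files remotely. A minimal sketch using scp; the destination host and path are examples:
$ scp myca.crt docker-registry-certs-$(hostname)-$(date +%Y%m%d).tar.gz backup.example.com:/backup/registry-certs/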
Docker certificates restore
If the Docker certificates for the external registries are deleted or corrupted, restore them using the same steps as the installation method, using the files from the previously performed backups.
Managing container registries
You can configure OKD to use external docker registries to pull images. However, you can use configuration files to allow or deny certain images or registries.
If the external registry is exposed using certificates for the network traffic, it is considered a secure registry. Otherwise, traffic between the registry and host is plain text and not encrypted, meaning it is an insecure registry.
Docker search external registries
By default, the docker daemon has the ability to pull images from any registry, but the search operation is performed against docker.io/ and registry.redhat.io. The daemon can be configured to search images from other registries using the --add-registry option for the docker daemon.
Procedure
To allow users to search for images using docker search with other registries, add those registries to the /etc/containers/registries.conf file under the registries parameter:
registries:
- registry.redhat.io
- my.registry.example.com
Restart the docker daemon to allow my.registry.example.com to be used:
$ sudo systemctl restart docker.service
Restarting the docker daemon causes the docker containers to restart.
Using the Ansible installer, this can be configured using the openshift_docker_additional_registries variable in the Ansible hosts file:
openshift_docker_additional_registries=registry.redhat.io,my.registry.example.com
Docker external registries whitelist and blacklist
Docker can be configured to block operations from external registries by configuring the registries and block_registries flags for the docker daemon.
Procedure
Add the allowed registries to the /etc/containers/registries.conf file with the registries flag:
registries:
- registry.redhat.io
- my.registry.example.com
The docker.io registry can be added using the same method.
Block the rest of the registries:
block_registries:
- all
Block the rest of the registries in older versions:
BLOCK_REGISTRY='--block-registry=all'
Restart the docker daemon:
$ sudo systemctl restart docker.service
Restarting the docker daemon causes the docker containers to restart.
In this example, the docker.io registry has been blacklisted, so any operation regarding that registry fails:
$ sudo docker pull hello-world
Using default tag: latest
Trying to pull repository registry.redhat.io/hello-world ...
Trying to pull repository my.registry.example.com/hello-world ...
Trying to pull repository registry.redhat.io/hello-world ...
unknown: Not Found
$ sudo docker pull docker.io/hello-world
Using default tag: latest
Trying to pull repository docker.io/library/hello-world ...
All endpoints blocked.
Add docker.io back to the registries variable by modifying the file again and restarting the service:
registries:
- registry.redhat.io
- my.registry.example.com
- docker.io
block_registries:
- all
Or, in older versions:
ADD_REGISTRY="--add-registry=registry.redhat.io --add-registry=my.registry.example.com --add-registry=docker.io"
BLOCK_REGISTRY='--block-registry=all'
Restart the Docker service:
$ sudo systemctl restart docker
To verify that the image is now available to be pulled:
$ sudo docker pull docker.io/hello-world
Using default tag: latest
Trying to pull repository docker.io/library/hello-world ...
latest: Pulling from docker.io/library/hello-world
9a0669468bf7: Pull complete
Digest: sha256:0e06ef5e1945a718b02a8c319e15bae44f47039005530bc617a5d071190ed3fc
If using an external registry is required, modify the docker daemon configuration file on all the node hosts that need to use that registry, and create a blacklist on those nodes to prevent malicious containers from being executed.
Using the Ansible installer, this can be configured using the openshift_docker_additional_registries and openshift_docker_blocked_registries variables in the Ansible hosts file:
variables in the Ansible hosts file:openshift_docker_additional_registries=registry.redhat.io,my.registry.example.com
openshift_docker_blocked_registries=all
Secure registries
In order to be able to pull images from an external registry, the registry certificates must be trusted; otherwise, the image pull operation fails.
To do so, see the Installing a certificate authority certificate for external registries section.
If using a whitelist, the external registries should be added to the registries variable, as explained above.
Insecure registries
External registries that use untrusted certificates, or that have no certificates at all, should be avoided.
However, any insecure registries must be added using the --insecure-registry option to allow the docker daemon to pull images from the repository. This is the same as the --add-registry option, but the docker connection is not verified.
The registry should be added using both options to enable searching and, if there is a blacklist, to perform other operations, such as pulling images.
For testing purposes, the following example shows how to add a localhost insecure registry.
Procedure
Modify the /etc/containers/registries.conf configuration file to add the localhost insecure registry:
[registries.search]
registries = ['registry.redhat.io', 'my.registry.example.com', 'docker.io', 'localhost:5000' ]
[registries.insecure]
registries = ['localhost:5000']
[registries.block]
registries = ['all']
Add insecure registries to both the registries.search section and the registries.insecure section to ensure they are marked as insecure and whitelisted. Any registry added to the registries.block section is blocked unless it is also whitelisted by being added to the registries.search section.
Restart the docker daemon to use the registry:
$ sudo systemctl restart docker.service
Restarting the docker daemon causes the docker containers to be restarted.
Run a container image registry pod at localhost:
$ sudo docker run -p 5000:5000 registry:2
Pull an image:
$ sudo docker pull openshift/hello-openshift
Tag the image:
$ sudo docker tag docker.io/openshift/hello-openshift:latest localhost:5000/hello-openshift-local:latest
Push the image to the local registry:
$ sudo docker push localhost:5000/hello-openshift-local:latest
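To confirm that the image was pushed, you can query the Docker Registry v2 API of the local registry, for example:
$ curl http://localhost:5000/v2/_catalog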
Using the Ansible installer, this can be configured using the openshift_docker_additional_registries, openshift_docker_blocked_registries, and openshift_docker_insecure_registries variables in the Ansible hosts file:
openshift_docker_additional_registries=registry.redhat.io,my.registry.example.com,localhost:5000
openshift_docker_insecure_registries=localhost:5000
openshift_docker_blocked_registries=all
You can also set the openshift_docker_insecure_registries variable to the IP address of the host. 0.0.0.0/0 is not a valid setting.
Authenticated registries
Using authenticated registries with docker requires the docker daemon to log in to the registry. With OKD, a different set of steps must be performed, because users cannot run docker login commands on the host. Authenticated registries can be used to limit the images users can pull or who can access the external registries.
If an external docker registry requires authentication, create a special secret in the project that uses that registry and then use that secret to perform the docker operations.
Procedure
Create a dockercfg secret in the project where the user is going to log in to the docker registry:
$ oc project <my_project>
$ oc create secret docker-registry <my_registry> --docker-server=<my.registry.example.com> --docker-username=<username> --docker-password=<my_password> --docker-email=<me@example.com>
If a .dockercfg file exists, create the secret using the oc command:
$ oc create secret generic <my_registry> --from-file=.dockercfg=<path/to/.dockercfg> --type=kubernetes.io/dockercfg
If a $HOME/.docker/config.json file exists, create the secret from that file instead:
$ oc create secret generic <my_registry> --from-file=.dockerconfigjson=<path/to/.dockercfg> --type=kubernetes.io/dockerconfigjson
Use the dockercfg secret to pull images from the authenticated registry by linking the secret to the service account performing the pull operations. The default service account to pull images is named default:
$ oc secrets link default <my_registry> --for=pull
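To confirm the link, you can inspect the service account; the secret should be listed among its image pull secrets. For example:
$ oc describe serviceaccount default -n <my_project>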
For pushing images using the S2I feature, the dockercfg secret is mounted in the S2I pod, so it needs to be linked to the proper service account that performs the build. The default service account used to build images is named builder:
$ oc secrets link builder <my_registry>
In the buildconfig, the secret should be specified for push or pull operations:
"type": "Source",
"sourceStrategy": {
"from": {
"kind": "DockerImage",
"name": "*my.registry.example.com*/myproject/myimage:stable"
},
"pullSecret": {
"name": "*mydockerregistry*"
},
...[OUTPUT ABBREVIATED]...
"output": {
"to": {
"kind": "DockerImage",
"name": "*my.registry.example.com*/myproject/myimage:latest"
},
"pushSecret": {
"name": "*mydockerregistry*"
},
...[OUTPUT ABBREVIATED]...
If the external registry delegates authentication to external services, create both dockercfg secrets: one for the registry using the registry URL, and one for the external authentication system using its own URL. Both secrets should be added to the service accounts:
$ oc project <my_project>
$ oc create secret docker-registry <my_registry> --docker-server=<my.registry.example.com> --docker-username=<username> --docker-password=<my_password> --docker-email=<me@example.com>
$ oc create secret docker-registry <my_docker_registry_ext_auth> --docker-server=<my.authsystem.example.com> --docker-username=<username> --docker-password=<my_password> --docker-email=<me@example.com>
$ oc secrets link default <my_registry> --for=pull
$ oc secrets link default <my_docker_registry_ext_auth> --for=pull
$ oc secrets link builder <my_registry>
$ oc secrets link builder <my_docker_registry_ext_auth>
ImagePolicy admission plug-in
An admission control plug-in intercepts requests to the API and performs checks depending on the configured rules, allowing or denying certain actions based on those rules. OKD can limit the allowed images running in the environment using the ImagePolicy admission plug-in, which can control:
The source of images: which registries can be used to pull images
Image resolution: force pods to run with immutable digests to ensure the image does not change due to a re-tag
Container image label restrictions: force an image to have or not have particular labels
Image annotation restrictions: force an image in the integrated container registry to have or not have particular annotations
Procedure
If the ImagePolicy plug-in is enabled, it must be modified to allow the external registries to be used by editing the /etc/origin/master/master-config.yaml file on every master node:
admissionConfig:
  pluginConfig:
    openshift.io/ImagePolicy:
      configuration:
        kind: ImagePolicyConfig
        apiVersion: v1
        executionRules:
        - name: allow-images-from-other-registries
          onResources:
          - resource: pods
          - resource: builds
          matchRegistries:
          - docker.io
          - <my.registry.example.com>
          - registry.redhat.io
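Changes to /etc/origin/master/master-config.yaml take effect after the master services are restarted, using the same commands shown later in this guide for the registry import controller:
$ sudo master-restart api
$ sudo master-restart controllers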
Enabling ImagePolicy requires users to specify the registry when deploying an application, for example oc new-app docker.io/kubernetes/guestbook instead of oc new-app kubernetes/guestbook; otherwise, it fails.
, otherwise it fails.To enable the admission plug-ins at installation time, the
openshift_master_admission_plugin_config
variable can be used with ajson
formatted string including all thepluginConfig
configuration:openshift_master_admission_plugin_config={"openshift.io/ImagePolicy":{"configuration":{"kind":"ImagePolicyConfig","apiVersion":"v1","executionRules":[{"name":"allow-images-from-other-registries","onResources":[{"resource":"pods"},{"resource":"builds"}],"matchRegistries":["docker.io","*my.registry.example.com*","registry.redhat.io"]}]}}}
Import images from external registries
Application developers can import images to create imagestreams using the oc import-image command, and OKD can be configured to allow or deny image imports from external registries.
Procedure
To configure the allowed registries from which users can import images, add the following to the /etc/origin/master/master-config.yaml file:
imagePolicyConfig:
  allowedRegistriesForImport:
  - domainName: docker.io
  - domainName: '*.docker.io'
  - domainName: '*.redhat.com'
  - domainName: 'my.registry.example.com'
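With the registries allowed for import, a developer can then create an image stream from one of them. The following is an example with placeholder names:
$ oc import-image <my_image> --from=my.registry.example.com/<my_project>/<my_image>:latest --confirm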
To import images from an external authenticated registry, create a secret within the desired project.
Although not recommended, if the external authenticated registry is insecure or the certificates cannot be trusted, the oc import-image command can be used with the --insecure=true option.
If the external authenticated registry is secure, the registry certificate must be trusted on the master hosts, because they run the registry import controller:
Copy the certificate to /etc/pki/ca-trust/source/anchors/:
$ sudo cp <my.registry.example.com.crt> /etc/pki/ca-trust/source/anchors/<my.registry.example.com.crt>
Run the update-ca-trust command:
$ sudo update-ca-trust
Restart the master services on all the master hosts:
$ sudo master-restart api
$ sudo master-restart controllers
The certificate for the external registry should be trusted in the OKD registry:
$ for i in pem openssl java; do
oc create configmap ca-trust-extracted-${i} --from-file /etc/pki/ca-trust/extracted/${i}
oc set volume dc/docker-registry --add -m /etc/pki/ca-trust/extracted/${i} --configmap-name=ca-trust-extracted-${i} --name ca-trust-extracted-${i}
done
There is no official procedure currently for adding the certificate to the registry pod, but the above workaround can be used.
This workaround creates configmaps with all the trusted certificates from the system running those commands, so the recommendation is to run it from a clean system where just the required certificates are trusted.
Alternatively, modify the registry image so that it trusts the proper certificates by rebuilding it using a Dockerfile such as:
FROM registry.redhat.io/openshift3/ose-docker-registry:v3.6
ADD <my.registry.example.com.crt> /etc/pki/ca-trust/source/anchors/
USER 0
RUN update-ca-trust extract
USER 1001
Rebuild the image, push it to a docker registry, and use that image as spec.template.spec.containers["name":"registry"].image in the registry deploymentconfig:
$ oc patch dc docker-registry -p '{"spec":{"template":{"spec":{"containers":[{"name":"registry","image":"myregistry.example.com/openshift3/ose-docker-registry:latest"}]}}}}'
For more information about the ImagePolicy plug-in, see the ImagePolicy admission plug-in section.
OKD registry integration
You can install OKD as a stand-alone container image registry to provide only the registry capabilities, but with the advantages of running in an OKD platform.
For more information about the OKD registry, see Installing a Stand-alone Deployment of OpenShift Container Registry.
To integrate the OKD registry, all previous sections apply. From the OKD point of view, it is treated as an external registry, but there are some extra tasks that need to be performed. Because it is a multi-tenant registry and the authorization model from OKD applies, creating a new project in OKD does not create a matching project in the registry environment, as the registry is independent.
Connect the registry project with the cluster
As the registry is a full OKD environment with a registry pod and a web interface, the process to create a new project in the registry is performed using the oc new-project or oc create command line tools, or via the web interface.
Once the project has been created, the usual service accounts (builder, default, and deployer) are created automatically, and the project administrator user is granted permissions. Different users can be authorized to push and pull images, as can "anonymous" users.
There can be several use cases, such as allowing all users to pull images from this new project within the registry. However, if you want a 1:1 project relationship between OKD and the registry, where users can push and pull images from that specific project, some additional steps are required.
The registry web console shows a token that can be used for pull and push operations, but the token shown there is a session token, so it expires. Creating a service account with specific permissions allows the administrator to limit the permissions for that service account so that, for example, different service accounts can be used for push and pull operations. A user then does not have to deal with token expiration, secret recreation, and other such tasks, because service account tokens do not expire.
Procedure
Create a new project:
$ oc new-project <my_project>
Create a registry project:
$ oc new-project <registry_project>
Create a service account in the registry project:
$ oc create serviceaccount <my_serviceaccount> -n <registry_project>
Give permissions to push and pull images using the registry-editor role:
$ oc adm policy add-role-to-user registry-editor -z <my_serviceaccount> -n <registry_project>
If only pull permissions are required, the registry-viewer role can be used.
Get the service account token:
$ TOKEN=$(oc sa get-token <my_serviceaccount> -n <registry_project>)
Use the token as the password to create a dockercfg secret:
$ oc create secret docker-registry <my_registry> \
--docker-server=<myregistry.example.com> --docker-username=<notused> --docker-password=${TOKEN} --docker-email=<me@example.com>
Use the dockercfg secret to pull images from the registry by linking the secret to the service account performing the pull operations. The default service account to pull images is named default:
$ oc secrets link default <my_registry> --for=pull
For pushing images using the S2I feature, the dockercfg secret is mounted in the S2I pod, so it needs to be linked to the proper service account that performs the build. The default service account used to build images is named builder:
$ oc secrets link builder <my_registry>
In the buildconfig, the secret should be specified for push or pull operations:
"type": "Source",
"sourceStrategy": {
"from": {
"kind": "DockerImage",
"name": "<myregistry.example.com/registry_project/my_image:stable>"
},
"pullSecret": {
"name": "<my_registry>"
},
...[OUTPUT ABBREVIATED]...
"output": {
"to": {
"kind": "DockerImage",
"name": "<myregistry.example.com/registry_project/my_image:latest>"
},
"pushSecret": {
"name": "<my_registry>"
},
...[OUTPUT ABBREVIATED]...
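Outside of OKD builds and deployments, the same service account token can also be used to log in to the stand-alone registry directly, for example from a workstation with docker installed; the registry host name here is an example:
$ sudo docker login -u <any_user> -p ${TOKEN} myregistry.example.com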