- Configuring multi-architecture compute machines on an OKD cluster
- Verifying cluster compatibility
- Creating a cluster with multi-architecture compute machines on Azure
- Creating a cluster with multi-architecture compute machines on AWS
- Creating a cluster with multi-architecture compute machines on bare metal (Technology Preview)
- Importing manifest lists in image streams on your multi-architecture compute machines
Configuring multi-architecture compute machines on an OKD cluster
An OKD cluster with multi-architecture compute machines is a cluster that supports compute machines with different architectures. Clusters with multi-architecture compute machines are available only on AWS or Azure installer-provisioned infrastructures and bare metal user-provisioned infrastructures with x86_64 control plane machines.
When there are nodes with multiple architectures in your cluster, the architecture of your image must be consistent with the architecture of the node. You must ensure that the pod is assigned to a node whose architecture matches the image architecture. For more information on assigning pods to nodes, see Assigning pods to nodes; a minimal example follows this introduction.
The Cluster Samples Operator is not supported on clusters with multi-architecture compute machines. Your cluster can be created without this capability. For more information, see Enabling cluster capabilities.
For information on migrating your single-architecture cluster to a cluster that supports multi-architecture compute machines, see Migrating to a cluster with multi-architecture compute machines.
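For example, the following is a minimal sketch of a Deployment that pins its pods to arm64 nodes by using the standard kubernetes.io/arch node label. The deployment name and image are placeholders only:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: arch-pinned-example (1)
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: arch-pinned-example
  template:
    metadata:
      labels:
        app: arch-pinned-example
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64 (2)
      containers:
      - name: example
        image: <multi_arch_or_arm64_image> (3)
1 The deployment name is a placeholder.
2 The kubernetes.io/arch label is set automatically on every node; this selector restricts scheduling to arm64 nodes.
3 Replace with an image that provides an arm64 or multi-architecture manifest.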
Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
- You installed the OpenShift CLI (oc).
Procedure
You can check that your cluster uses the multi-architecture payload by running the following command:
$ oc adm release info -o json | jq .metadata.metadata
Verification
If you see the following output, then your cluster is using the multi-architecture payload:
$ "release.openshift.io/architecture": "multi"
You can then begin adding multi-arch compute nodes to your cluster.
If you see the following output, then your cluster is not using the multi-architecture payload:
null
To migrate your cluster to one that supports multi-architecture compute machines, follow the procedure in “Migrating to a cluster with multi-architecture compute machines”.
Additional resources
Creating a cluster with multi-architecture compute machines on Azure
To deploy an Azure cluster with multi-architecture compute machines, you must first create a single-architecture Azure installer-provisioned cluster that uses the multi-architecture installer binary. For more information on Azure installations, see Installing a cluster on Azure with customizations. You can then add an arm64 compute machine set to your cluster to create a cluster with multi-architecture compute machines.
The following procedures explain how to generate an arm64 boot image and create an Azure compute machine set that uses the arm64 boot image. This adds arm64 compute nodes to your cluster and deploys the number of arm64 virtual machines (VMs) that you need.
Creating an arm64 boot image using the Azure image gallery
The following procedure describes how to manually generate an arm64 boot image.
Prerequisites
- You installed the Azure CLI (az).
- You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary.
Procedure
Log in to your Azure account:
$ az login
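The commands in this procedure assume that shell variables such as STORAGE_ACCOUNT_NAME, RESOURCE_GROUP, CONTAINER_NAME, and GALLERY_NAME are already set. The following is a minimal sketch with placeholder values; substitute names that match your environment:
$ export RESOURCE_GROUP=<resource_group_name>
$ export STORAGE_ACCOUNT_NAME=<storage_account_name>
$ export CONTAINER_NAME=<container_name>
$ export GALLERY_NAME=<gallery_name>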
Create a storage account and upload the arm64 virtual hard disk (VHD) to your storage account. The OKD installation program creates a resource group; however, the boot image can also be uploaded to a custom-named resource group:
$ az storage account create -n ${STORAGE_ACCOUNT_NAME} -g ${RESOURCE_GROUP} -l westus --sku Standard_LRS (1)
1 The westus object is an example region.
Create a storage container using the storage account you generated:
$ az storage container create -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME}
You must use the OKD installation program JSON file to extract the URL and aarch64 VHD name:
Extract the URL field and set it to RHCOS_VHD_ORIGIN_URL as the file name by running the following command:
$ RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".url')
Extract the aarch64 VHD name and set it to BLOB_NAME as the file name by running the following command:
$ BLOB_NAME=rhcos-$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".release')-azure.aarch64.vhd
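Optionally, confirm that both values were extracted correctly before continuing; this quick check only prints the variables:
$ echo "${RHCOS_VHD_ORIGIN_URL}"
$ echo "${BLOB_NAME}"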
Generate a shared access signature (SAS) token. Use this token to upload the FCOS VHD to your storage container with the following commands:
$ end=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'`
$ sas=`az storage container generate-sas -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry $end -o tsv`
Copy the FCOS VHD into the storage container:
$ az storage blob copy start --account-name ${STORAGE_ACCOUNT_NAME} --sas-token "$sas" \
--source-uri "${RHCOS_VHD_ORIGIN_URL}" \
--destination-blob "${BLOB_NAME}" --destination-container ${CONTAINER_NAME}
You can check the status of the copying process with the following command:
$ az storage blob show -c ${CONTAINER_NAME} -n ${BLOB_NAME} --account-name ${STORAGE_ACCOUNT_NAME} | jq .properties.copy
Example output
{
"completionTime": null,
"destinationSnapshot": null,
"id": "1fd97630-03ca-489a-8c4e-cfe839c9627d",
"incrementalCopy": null,
"progress": "17179869696/17179869696",
"source": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd",
"status": "success", (1)
"statusDescription": null
}
1 If the status parameter displays the success object, the copying process is complete.
Create an image gallery using the following command:
$ az sig create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME}
Use the image gallery to create an image definition. In the following example command, rhcos-arm64 is the name of the image definition.
$ az sig image-definition create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2
To get the URL of the VHD and set it to RHCOS_VHD_URL as the file name, run the following command:
$ RHCOS_VHD_URL=$(az storage blob url --account-name ${STORAGE_ACCOUNT_NAME} -c ${CONTAINER_NAME} -n "${BLOB_NAME}" -o tsv)
Use the RHCOS_VHD_URL file, your storage account, resource group, and image gallery to create an image version. In the following example, 1.0.0 is the image version.
$ az sig image-version create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account ${STORAGE_ACCOUNT_NAME} --os-vhd-uri ${RHCOS_VHD_URL}
Your arm64 boot image is now generated. You can access the ID of your image with the following command:
$ az sig image-version show -r $GALLERY_NAME -g $RESOURCE_GROUP -i rhcos-arm64 -e 1.0.0
The following example image ID is used in the resourceID parameter of the compute machine set:
Example resourceID
/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0
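If you prefer to capture the full image ID directly into a shell variable for later substitution into the compute machine set YAML, the following sketch relies on the standard --query and -o tsv options of the Azure CLI:
$ RESOURCE_ID=$(az sig image-version show -r ${GALLERY_NAME} -g ${RESOURCE_GROUP} -i rhcos-arm64 -e 1.0.0 --query id -o tsv)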
Adding a multi-architecture compute machine set to your cluster using the arm64 boot image
To add arm64 compute nodes to your cluster, you must create an Azure compute machine set that uses the arm64 boot image. To create your own custom compute machine set on Azure, see “Creating a compute machine set on Azure”.
Prerequisites
- You installed the OpenShift CLI (oc).
Procedure
Create a compute machine set and modify the resourceID and vmSize parameters with the following command. This compute machine set will control the arm64 worker nodes in your cluster:
$ oc create -f arm64-machine-set-0.yaml
Sample YAML compute machine set with arm64 boot image
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id>
machine.openshift.io/cluster-api-machine-role: worker
machine.openshift.io/cluster-api-machine-type: worker
name: <infrastructure_id>-arm64-machine-set-0
namespace: openshift-machine-api
spec:
replicas: 2
selector:
matchLabels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id>
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0
template:
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id>
machine.openshift.io/cluster-api-machine-role: worker
machine.openshift.io/cluster-api-machine-type: worker
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0
spec:
lifecycleHooks: {}
metadata: {}
providerSpec:
value:
acceleratedNetworking: true
apiVersion: machine.openshift.io/v1beta1
credentialsSecret:
name: azure-cloud-credentials
namespace: openshift-machine-api
image:
offer: ""
publisher: ""
resourceID: /resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 (1)
sku: ""
version: ""
kind: AzureMachineProviderSpec
location: <region>
managedIdentity: <infrastructure_id>-identity
networkResourceGroup: <infrastructure_id>-rg
osDisk:
diskSettings: {}
diskSizeGB: 128
managedDisk:
storageAccountType: Premium_LRS
osType: Linux
publicIP: false
publicLoadBalancer: <infrastructure_id>
resourceGroup: <infrastructure_id>-rg
subnet: <infrastructure_id>-worker-subnet
userDataSecret:
name: worker-user-data
vmSize: Standard_D4ps_v5 (2)
vnet: <infrastructure_id>-vnet
zone: "<zone>"
1 Set the resourceID parameter to the arm64 boot image.
2 Set the vmSize parameter to the instance type used in your installation. Some example instance types are Standard_D4ps_v5 or D8ps.
Verification
Verify that the new ARM64 machines are running by entering the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE
<infrastructure_id>-arm64-machine-set-0 2 2 2 2 10m
You can check that the nodes are ready and schedulable with the following command:
$ oc get nodes
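As an additional check that is not part of the documented procedure, you can display the architecture label of each node to confirm that the arm64 nodes joined the cluster:
$ oc get nodes -L kubernetes.io/arch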
Additional resources
Creating a cluster with multi-architecture compute machines on AWS
To create an AWS cluster with multi-architecture compute machines, you must first create a single-architecture AWS installer-provisioned cluster with the multi-architecture installer binary. For more information on AWS installations, refer to Installing a cluster on AWS with customizations. You can then add an ARM64 compute machine set to your AWS cluster.
Adding an ARM64 compute machine set to your cluster
To configure a cluster with multi-architecture compute machines, you must create an AWS ARM64 compute machine set. This adds ARM64 compute nodes to your cluster so that your cluster has multi-architecture compute machines.
Prerequisites
- You installed the OpenShift CLI (oc).
- You used the installation program to create an AMD64 single-architecture AWS cluster with the multi-architecture installer binary.
Procedure
Create and modify a compute machine set; this compute machine set controls the ARM64 compute nodes in your cluster:
$ oc create -f aws-arm64-machine-set-0.yaml
Sample YAML compute machine set to deploy an ARM64 compute node
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
name: <infrastructure_id>-aws-arm64-machine-set-0 (1)
namespace: openshift-machine-api
spec:
replicas: 1
selector:
matchLabels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> (2)
template:
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id>
machine.openshift.io/cluster-api-machine-role: <role> (3)
machine.openshift.io/cluster-api-machine-type: <role> (3)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> (2)
spec:
metadata:
labels:
node-role.kubernetes.io/<role>: ""
providerSpec:
value:
ami:
id: ami-02a574449d4f4d280 (4)
apiVersion: awsproviderconfig.openshift.io/v1beta1
blockDevices:
- ebs:
iops: 0
volumeSize: 120
volumeType: gp2
credentialsSecret:
name: aws-cloud-credentials
deviceIndex: 0
iamInstanceProfile:
id: <infrastructure_id>-worker-profile (1)
instanceType: m6g.xlarge (5)
kind: AWSMachineProviderConfig
placement:
availabilityZone: us-east-1a (6)
region: <region> (7)
securityGroups:
- filters:
- name: tag:Name
values:
- <infrastructure_id>-worker-sg (1)
subnet:
filters:
- name: tag:Name
values:
- <infrastructure_id>-private-<zone>
tags:
- name: kubernetes.io/cluster/<infrastructure_id> (1)
value: owned
- name: <custom_tag_name>
value: <custom_tag_value>
userDataSecret:
name: worker-user-data
1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
2 Specify the infrastructure ID, role node label, and zone.
3 Specify the role node label to add.
4 Specify an ARM64 supported Fedora CoreOS (FCOS) Amazon Machine Image (AMI) for your AWS zone for your OKD nodes. You can look it up by running the following command; see the filled-in example after this list:
$ oc get configmap/coreos-bootimages \
  -n openshift-machine-config-operator \
  -o jsonpath='{.data.stream}' | jq \
  -r '.architectures.<arch>.images.aws.regions."<region>".image'
5 Specify an ARM64 supported machine type. For more information, refer to “Tested instance types for AWS 64-bit ARM”.
6 Specify the zone, for example us-east-1a. Ensure that the zone you select offers 64-bit ARM machines.
7 Specify the region, for example, us-east-1. Ensure that the region you select offers 64-bit ARM machines.
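For example, to look up the ARM64 boot image AMI for the us-east-1 region (the architecture and region values are examples only), the command from callout 4 can be filled in as follows:
$ oc get configmap/coreos-bootimages \
  -n openshift-machine-config-operator \
  -o jsonpath='{.data.stream}' | jq \
  -r '.architectures.aarch64.images.aws.regions."us-east-1".image'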
Verification
View the list of compute machine sets by entering the following command:
$ oc get machineset -n openshift-machine-api
You can then see your created ARM64 machine set.
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE
<infrastructure_id>-aws-arm64-machine-set-0 2 2 2 2 10m
You can check that the nodes are ready and schedulable with the following command:
$ oc get nodes
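If you later need additional ARM64 compute nodes, you can scale the machine set. The following is a generic sketch that uses the machine set name from this procedure:
$ oc scale machineset <infrastructure_id>-aws-arm64-machine-set-0 --replicas=3 -n openshift-machine-api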
Additional resources
Creating a cluster with multi-architecture compute machines on bare metal (Technology Preview)
To create a cluster with multi-architecture compute machines on bare metal, you must have an existing single-architecture bare metal cluster. For more information on bare metal installations, see Installing a user provisioned cluster on bare metal. You can then add 64-bit ARM compute machines to your OKD cluster on bare metal.
Before you can add 64-bit ARM nodes to your bare metal cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines.
The following procedures explain how to create a FCOS compute machine using an ISO image or network PXE booting. This will allow you to add ARM64 nodes to your bare metal cluster and deploy a cluster with multi-architecture compute machines.
Clusters with multi-architecture compute machines on bare metal user-provisioned installations is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Creating FCOS machines using an ISO image
You can create more Fedora CoreOS (FCOS) compute machines for your bare metal cluster by using an ISO image to create the machines.
Prerequisites
Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
You must have the OpenShift CLI (oc) installed.
Procedure
Extract the Ignition config file from the cluster by running the following command:
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
Upload the worker.ign Ignition config file that you exported from your cluster to your HTTP server. Note the URL of this file.
You can validate that the Ignition config file is available at the URL. The following example gets the Ignition config file for the compute node:
$ curl -k http://<HTTP_server>/worker.ign
You can access the ISO image for booting your new machine by running the following command:
$ RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')
Use the ISO file to install FCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster:
Burn the ISO image to a disk and boot it directly.
Use ISO redirection with a LOM interface.
Boot the FCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the FCOS live environment.
You can interrupt the FCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments.
Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to:
$ sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> (1) (2)
1 You must run the coreos-installer command by using sudo, because the core user does not have the required root privileges to perform the installation.
2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step; a sketch for computing it follows the example below.
If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer.
The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2:
$ sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b
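The SHA512 digest for the --ignition-hash option can be computed from the same Ignition config file that you uploaded. A minimal sketch, run on a machine that has the worker.ign file:
$ sha512sum worker.ign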
Monitor the progress of the FCOS installation on the console of the machine.
Ensure that the installation is successful on each node before commencing with the OKD installation. Observing the installation process can also help to determine the cause of FCOS installation issues that might arise.
Continue to create more compute machines for your cluster.
Creating FCOS machines by PXE or iPXE booting
You can create more Fedora CoreOS (FCOS) compute machines for your bare metal cluster by using PXE or iPXE booting.
Prerequisites
Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
Obtain the URLs of the FCOS ISO image, compressed metal BIOS, kernel, and initramfs files that you uploaded to your HTTP server during cluster installation.
You have access to the PXE booting infrastructure that you used to create the machines for your OKD cluster during installation. The machines must boot from their local disks after FCOS is installed on them.
If you use UEFI, you have access to the grub.conf file that you modified during OKD installation.
Procedure
Confirm that your PXE or iPXE installation for the FCOS images is correct.
For PXE:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> (1)
APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img (2)
1 Specify the location of the live kernel file that you uploaded to your HTTP server.
2 Specify locations of the FCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS.
This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux?.
For iPXE (x86_64 + aarch64):
kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign (1) (2)
initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img (3)
boot
1 Specify the locations of the FCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file.
2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1, set ip=eno1:dhcp.
3 Specify the location of the initramfs file that you uploaded to your HTTP server.
This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and “Enabling the serial console for PXE and ISO installation” in the “Advanced FCOS installation configuration” section.
To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE built with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE.
For PXE (with UEFI and GRUB as second stage) on aarch64:
menuentry 'Install CoreOS' {
linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign (1) (2)
initrd rhcos-<version>-live-initramfs.<architecture>.img (3)
}
1 Specify the locations of the FCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file on your HTTP server.
2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1, set ip=eno1:dhcp.
3 Specify the location of the initramfs file that you uploaded to your TFTP server.
Use the PXE or iPXE infrastructure to create the required compute machines for your cluster.
Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION
master-0 Ready master 63m v1.26.0
master-1 Ready master 63m v1.26.0
master-2 Ready master 64m v1.26.0
The output lists all of the machines that you created.
The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:
$ oc get csr
Example output
NAME AGE REQUESTOR CONDITION
csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending
csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending
...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:
Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.
For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> (1)
1 <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Some Operators might not become available until some CSRs are approved.
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr
Example output
NAME AGE REQUESTOR CONDITION
csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending
csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending
...
If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> (1)
1 <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION
master-0 Ready master 73m v1.26.0
master-1 Ready master 73m v1.26.0
master-2 Ready master 74m v1.26.0
worker-0 Ready worker 11m v1.26.0
worker-1 Ready worker 11m v1.26.0
It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
Importing manifest lists in image streams on your multi-architecture compute machines
On an OKD 4.13 cluster with multi-architecture compute machines, the image streams in the cluster do not import manifest lists automatically. You must manually change the default importMode option to the PreserveOriginal option in order to import the manifest list.
Prerequisites
- You installed the OKD CLI (oc).
Procedure
The following example command shows how to patch the ImageStream cli-artifacts so that the cli-artifacts:latest image stream tag is imported as a manifest list:
$ oc patch is/cli-artifacts -n openshift -p '{"spec":{"tags":[{"name":"latest","importPolicy":{"importMode":"PreserveOriginal"}}]}}'
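Alternatively, recent oc clients support an --import-mode flag on oc import-image, which triggers the import with the same behavior. This is a sketch only and assumes your client version provides the flag:
$ oc import-image cli-artifacts:latest -n openshift --import-mode=PreserveOriginal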
Verification
You can check that the manifest lists imported properly by inspecting the image stream tag. The following command will list the individual architecture manifests for a particular tag.
$ oc get istag cli-artifacts:latest -n openshift -oyaml
If the dockerImageManifests object is present, then the manifest list import was successful.
Example output of the dockerImageManifests object
dockerImageManifests:
- architecture: amd64
digest: sha256:16d4c96c52923a9968fbfa69425ec703aff711f1db822e4e9788bf5d2bee5d77
manifestSize: 1252
mediaType: application/vnd.docker.distribution.manifest.v2+json
os: linux
- architecture: arm64
digest: sha256:6ec8ad0d897bcdf727531f7d0b716931728999492709d19d8b09f0d90d57f626
manifestSize: 1252
mediaType: application/vnd.docker.distribution.manifest.v2+json
os: linux
- architecture: ppc64le
digest: sha256:65949e3a80349cdc42acd8c5b34cde6ebc3241eae8daaeea458498fedb359a6a
manifestSize: 1252
mediaType: application/vnd.docker.distribution.manifest.v2+json
os: linux
- architecture: s390x
digest: sha256:75f4fa21224b5d5d511bea8f92dfa8e1c00231e5c81ab95e83c3013d245d1719
manifestSize: 1252
mediaType: application/vnd.docker.distribution.manifest.v2+json
os: linux
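As a quicker check than reading the full YAML output, you can filter it for the architecture entries of the manifest list:
$ oc get istag cli-artifacts:latest -n openshift -oyaml | grep 'architecture:'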