- Working with nodes
- Understanding how to evacuate pods on nodes
- Understanding how to update labels on nodes
- Understanding how to mark nodes as unschedulable or schedulable
- Configuring control plane nodes as schedulable
- Deleting nodes
- Setting SELinux booleans
- Adding kernel arguments to nodes
- Enabling Linux control groups version 2 (cgroups v2)
- Enabling swap memory use on nodes
- Migrating control plane nodes from one RHOSP host to another
Working with nodes
As an administrator, you can perform a number of tasks to make your clusters more efficient.
Understanding how to evacuate pods on nodes
Evacuating pods allows you to migrate all or selected pods from a given node or nodes.
You can only evacuate pods backed by a replication controller. The replication controller creates new pods on other nodes and removes the existing pods from the specified node(s).
Bare pods, meaning those not backed by a replication controller, are unaffected by default. You can evacuate a subset of pods by specifying a pod-selector. Pod selectors are based on labels, so all the pods with the specified label will be evacuated.
Procedure
Mark the nodes unschedulable before performing the pod evacuation.
Mark the node as unschedulable:
$ oc adm cordon <node1>
Example output
node/<node1> cordoned
Check that the node status is NotReady,SchedulingDisabled:
$ oc get node <node1>
Example output
NAME STATUS ROLES AGE VERSION
<node1> NotReady,SchedulingDisabled worker 1d v1.24.0
Evacuate the pods using one of the following methods:
Evacuate all or selected pods on one or more nodes:
$ oc adm drain <node1> <node2> [--pod-selector=<pod_selector>]
Force the deletion of bare pods using the --force option. When set to true, deletion continues even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set:
$ oc adm drain <node1> <node2> --force=true
Set a period of time in seconds for each pod to terminate gracefully by using the --grace-period option. If negative, the default value specified in the pod will be used:
$ oc adm drain <node1> <node2> --grace-period=-1
Ignore pods managed by daemon sets using the --ignore-daemonsets flag set to true:
$ oc adm drain <node1> <node2> --ignore-daemonsets=true
Set the length of time to wait before giving up using the --timeout flag. A value of 0 sets an infinite length of time:
$ oc adm drain <node1> <node2> --timeout=5s
Delete pods even if there are pods using emptyDir volumes by setting the --delete-emptydir-data flag to true. Local data is deleted when the node is drained:
$ oc adm drain <node1> <node2> --delete-emptydir-data=true
List objects that will be migrated without actually performing the evacuation by using the --dry-run option set to true:
$ oc adm drain <node1> <node2> --dry-run=true
Instead of specifying specific node names (for example, <node1> <node2>), you can use the --selector=<node_selector> option to evacuate pods on selected nodes, as shown in the example below.
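For example, the following commands drain only the pods that carry a hypothetical app=frontend label from a node, and drain all nodes that carry an assumed node-role.kubernetes.io/infra label; both labels are placeholders, not values required by OKD:
$ oc adm drain <node1> --pod-selector=app=frontend
$ oc adm drain --selector=node-role.kubernetes.io/infra= --ignore-daemonsets=true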
Mark the node as schedulable when done.
$ oc adm uncordon <node1>
Understanding how to update labels on nodes
You can update any label on a node.
Node labels are not persisted after a node is deleted even if the node is backed up by a Machine.
Any change to a MachineSet object is not applied to existing machines owned by the machine set. For example, labels edited or added to an existing MachineSet object are not propagated to existing machines and nodes associated with the machine set.
The following command adds or updates labels on a node:
$ oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n>
For example:
$ oc label nodes webconsole-7f7f6 unhealthy=true
You can alternatively apply the following YAML to apply the label:
kind: Node
apiVersion: v1
metadata:
  name: webconsole-7f7f6
  labels:
    unhealthy: 'true'
The following command updates all pods in the namespace:
$ oc label pods --all <key_1>=<value_1>
For example:
$ oc label pods --all status=unhealthy
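Although not shown above, you can also remove a label by appending a dash to the key, which is standard oc label syntax. For example, the following commands remove the unhealthy label added earlier and then list the node labels to confirm the change:
$ oc label nodes webconsole-7f7f6 unhealthy-
$ oc get node webconsole-7f7f6 --show-labels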
Understanding how to mark nodes as unschedulable or schedulable
By default, healthy nodes with a Ready status are marked as schedulable, which means that you can place new pods on the node. Manually marking a node as unschedulable blocks any new pods from being scheduled on the node. Existing pods on the node are not affected.
The following command marks a node or nodes as unschedulable:
$ oc adm cordon <node>
For example:
$ oc adm cordon node1.example.com
Example output
node/node1.example.com cordoned
NAME LABELS STATUS
node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled
The following command marks a currently unschedulable node or nodes as schedulable:
$ oc adm uncordon <node1>
Alternatively, instead of specifying specific node names (for example, <node>), you can use the --selector=<node_selector> option to mark selected nodes as schedulable or unschedulable, as in the example below.
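For example, the following commands should cordon and then uncordon every node that carries an assumed node-role.kubernetes.io/infra label; the label is a placeholder for whatever node selector applies to your cluster:
$ oc adm cordon --selector=node-role.kubernetes.io/infra=
$ oc adm uncordon --selector=node-role.kubernetes.io/infra=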
Configuring control plane nodes as schedulable
You can configure control plane nodes to be schedulable, meaning that new pods are allowed for placement on the control plane nodes. By default, control plane nodes are not schedulable.
You can set the control plane nodes to be schedulable, but you must retain the worker nodes.
You can deploy OKD with no worker nodes on a bare metal cluster. In this case, the control plane nodes are marked schedulable by default.
You can allow or disallow control plane nodes to be schedulable by configuring the mastersSchedulable field.
When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes.
Procedure
Edit the schedulers.config.openshift.io resource:
$ oc edit schedulers.config.openshift.io cluster
Configure the mastersSchedulable field:
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: "2019-09-10T03:04:05Z"
  generation: 1
  name: cluster
  resourceVersion: "433"
  selfLink: /apis/config.openshift.io/v1/schedulers/cluster
  uid: a636d30a-d377-11e9-88d4-0a60097bee62
spec:
  mastersSchedulable: false (1)
status: {}
1 Set to true to allow control plane nodes to be schedulable, or false to disallow control plane nodes to be schedulable.
Save the file to apply the changes.
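Alternatively, if you prefer not to open an editor, a single oc patch command can toggle the same field. This is a sketch, not part of the documented procedure:
$ oc patch schedulers.config.openshift.io cluster --type merge -p '{"spec":{"mastersSchedulable":true}}'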
Deleting nodes
Deleting nodes from a cluster
When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OKD. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods.
Procedure
To delete a node from the OKD cluster, edit the appropriate MachineSet object:
If you are running a cluster on bare metal, you cannot delete a node by editing MachineSet objects. Instead, follow the steps in Deleting nodes from a bare metal cluster.
View the machine sets that are in the cluster:
$ oc get machinesets -n openshift-machine-api
The machine sets are listed in the form of <clusterid>-worker-<aws-region-az>.
Scale the machine set:
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
Or:
$ oc edit machineset <machineset> -n openshift-machine-api
You can alternatively apply the following YAML to scale the machine set:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset>
  namespace: openshift-machine-api
spec:
  replicas: 2
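After scaling down, you can optionally watch the corresponding machines being removed and confirm that the node is gone. This is a suggested check, not part of the original procedure:
$ oc get machines -n openshift-machine-api -w
$ oc get nodes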
Additional resources
- For more information on scaling your cluster using a MachineSet, see Manually scaling a MachineSet.
Deleting nodes from a bare metal cluster
When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OKD. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods.
Procedure
Delete a node from an OKD cluster running on bare metal by completing the following steps:
Mark the node as unschedulable:
$ oc adm cordon <node_name>
Drain all pods on the node:
$ oc adm drain <node_name> --force=true
This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed.
Delete the node from the cluster:
$ oc delete node <node_name>
Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node.
If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster.
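Optionally, confirm that the node object is no longer listed. Keep in mind that, as noted above, the node can reappear if the machine rejoins the cluster:
$ oc get nodes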
Setting SELinux booleans
OKD allows you to enable and disable an SELinux boolean on a Fedora CoreOS (FCOS) node. The following procedure explains how to modify SELinux booleans on nodes using the Machine Config Operator (MCO). This procedure uses container_manage_cgroup as the example boolean. You can modify this value to whichever boolean you need.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Create a new YAML file with a MachineConfig object, as displayed in the following example:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-setsebool
spec:
  config:
    ignition:
      version: 2.2.0
    systemd:
      units:
      - contents: |
          [Unit]
          Description=Set SELinux booleans
          Before=kubelet.service
          [Service]
          Type=oneshot
          ExecStart=/sbin/setsebool container_manage_cgroup=on
          RemainAfterExit=true
          [Install]
          WantedBy=multi-user.target graphical.target
        enabled: true
        name: setsebool.service
Create the new MachineConfig object by running the following command:
$ oc create -f 99-worker-setsebool.yaml
Applying any changes to the MachineConfig object causes all affected nodes to gracefully reboot.
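After the nodes reboot, you can optionally confirm that the boolean is set by running the corresponding SELinux utility through a debug pod. This assumes getsebool is available on the host, which is typically the case on FCOS:
$ oc debug node/<node_name> -- chroot /host getsebool container_manage_cgroup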
Adding kernel arguments to nodes
In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and clear understanding of the implications of the arguments you set.
Improper use of kernel arguments can result in your systems becoming unbootable.
Examples of kernel arguments you could set include:
enforcing=0: Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging.
nosmt: Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance.
systemd.unified_cgroup_hierarchy: Enables Linux control groups version 2 (cgroups v2). Cgroups v2 is the next version of the kernel control groups and offers multiple improvements.
See Kernel.org kernel parameters for a list and descriptions of kernel arguments.
In the following procedure, you create a MachineConfig object that identifies:
A set of machines to which you want to add the kernel argument. In this case, machines with a worker role.
Kernel arguments that are appended to the end of the existing kernel arguments.
A label that indicates where in the list of machine configs the change is applied.
Prerequisites
- Have administrative privilege to a working OKD cluster.
Procedure
List existing MachineConfig objects for your OKD cluster to determine how to label your machine config:
$ oc get MachineConfig
Example output
NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE
00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
99-master-ssh 3.2.0 40m
99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
99-worker-ssh 3.2.0 40m
rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker (1)
  name: 05-worker-kernelarg-selinuxpermissive (2)
spec:
  config:
    ignition:
      version: 3.2.0
  kernelArguments:
    - enforcing=0 (3)
1 Applies the new kernel argument only to worker nodes.
2 Named to identify where it fits among the machine configs (05) and what it does (adds a kernel argument to configure SELinux permissive mode).
3 Identifies the exact kernel argument as enforcing=0.
Create the new machine config:
$ oc create -f 05-worker-kernelarg-selinuxpermissive.yaml
Check the machine configs to see that the new one was added:
$ oc get MachineConfig
Example output
NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE
00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
05-worker-kernelarg-selinuxpermissive 3.2.0 105s
99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
99-master-ssh 3.2.0 40m
99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
99-worker-ssh 3.2.0 40m
rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
Check the nodes:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION
ip-10-0-136-161.ec2.internal Ready worker 28m v1.24.0
ip-10-0-136-243.ec2.internal Ready master 34m v1.24.0
ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.24.0
ip-10-0-142-249.ec2.internal Ready master 34m v1.24.0
ip-10-0-153-11.ec2.internal Ready worker 28m v1.24.0
ip-10-0-153-150.ec2.internal Ready master 34m v1.24.0
You can see that scheduling on each worker node is disabled as the change is being applied.
Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host):
$ oc debug node/ip-10-0-141-105.ec2.internal
Example output
Starting pod/ip-10-0-141-105ec2internal-debug ...
To use host binaries, run `chroot /host`
sh-4.2# cat /host/proc/cmdline
BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8
rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16...
coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0
sh-4.2# exit
You should see the enforcing=0 argument added to the other kernel arguments.
Enabling Linux control groups version 2 (cgroups v2)
You can enable Linux control groups version 2 (cgroups v2) on specific nodes in your cluster by using a machine config. The OKD process for enabling cgroups v2 disables all cgroups version 1 controllers and hierarchies.
The OKD cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time.
Prerequisites
You have a running OKD cluster that uses version 4.10 or later.
You are logged in to the cluster as a user with administrative privileges.
You have the node-role.kubernetes.io value for the nodes that you want to configure. You can find the value by describing the node:
$ oc describe node <node-name>
Example output
Name: ci-ln-v05w5m2-72292-5s9ht-worker-a-r6fpg
Roles: worker
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=n1-standard-4
beta.kubernetes.io/os=linux
failure-domain.beta.kubernetes.io/region=us-central1
failure-domain.beta.kubernetes.io/zone=us-central1-a
kubernetes.io/arch=amd64
kubernetes.io/hostname=ci-ln-v05w5m2-72292-5s9ht-worker-a-r6fpg
kubernetes.io/os=linux
node-role.kubernetes.io/worker= (1)
#...
1 This value is the node role you need.
Procedure
Enable cgroups v2 on nodes:
Create a machine config file YAML, such as worker-enable-cgroups-v2.yaml:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "worker" (1)
  name: worker-enable-cgroups-v2
spec:
  kernelArguments:
    - systemd.unified_cgroup_hierarchy=1 (2)
    - cgroup_no_v1="all" (3)
1 Specifies the node-role.kubernetes.io value for the nodes you want to configure, such as master, worker, or infra.
2 Enables cgroups v2 in systemd.
3 Disables cgroups v1.
Create the new machine config:
$ oc create -f worker-enable-cgroups-v2.yaml
Check the machine configs to see that the new one was added:
$ oc get MachineConfig
Example output
NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE
00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
99-master-ssh 3.2.0 40m
99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
99-worker-ssh 3.2.0 40m
rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m
worker-enable-cgroups-v2 3.2.0 10s
Check the nodes to see that scheduling on each affected node is disabled. This indicates that the change is being applied:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION
ci-ln-fm1qnwt-72292-99kt6-master-0 Ready master 58m v1.24.0
ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.24.0
ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.24.0
ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.24.0
ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.24.0
ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.24.0
After a node returns to the Ready state, you can verify that cgroups v2 is enabled by checking that the /sys/fs/cgroup/cgroup.controllers file is present on the node. This file is created by cgroups v2.
Start a debug session for that node:
$ oc debug node/<node_name>
Locate the /sys/fs/cgroup/cgroup.controllers file. If this file is present, cgroups v2 is enabled on that node.
Example output
cgroup.controllers cgroup.stat cpuset.cpus.effective io.stat pids
cgroup.max.depth cgroup.subtree_control cpuset.mems.effective kubepods.slice system.slice
cgroup.max.descendants cgroup.threads init.scope memory.pressure user.slice
cgroup.procs cpu.pressure io.pressure memory.stat
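A listing like the one above can also be produced non-interactively, for example with the following command; the node name is a placeholder:
$ oc debug node/<node_name> -- chroot /host ls /sys/fs/cgroup/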
Additional resources
- For information about enabling cgroups v2 during installation, see the Optional parameters table in the Installation configuration parameters section of your installation process.
Enabling swap memory use on nodes
Enabling swap memory use on nodes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
You can enable swap memory use for OKD workloads on a per-node basis.
Enabling swap memory can negatively impact workload performance and out-of-resource handling. Do not enable swap memory on control plane nodes.
To enable swap memory, create a KubeletConfig custom resource (CR) to set the swapBehavior parameter. You can set limited or unlimited swap memory:
Limited: Use the LimitedSwap value to limit how much swap memory workloads can use. Any workloads on the node that are not managed by OKD can still use swap memory. The LimitedSwap behavior depends on whether the node is running with Linux control groups version 1 (cgroups v1) or version 2 (cgroups v2):
cgroups v1: OKD workloads can use any combination of memory and swap, up to the pod's memory limit, if set.
cgroups v2: OKD workloads cannot use swap memory.
Unlimited: Use the UnlimitedSwap value to allow workloads to use as much swap memory as they request, up to the system limit.
Because the kubelet will not start in the presence of swap memory without this configuration, you must enable swap memory in OKD before enabling swap memory on the nodes. If there is no swap memory present on a node, enabling swap memory in OKD has no effect.
Prerequisites
You have a running OKD cluster that uses version 4.10 or later.
You are logged in to the cluster as a user with administrative privileges.
You have enabled the TechPreviewNoUpgrade feature set on the cluster (see Nodes → Enabling features using feature gates).
Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.
If cgroups v2 is enabled on a node, you must enable swap accounting on the node by setting the swapaccount=1 kernel argument, as shown in the sketch after this list.
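A minimal sketch of a machine config that sets the swapaccount=1 kernel argument, following the kernel-argument pattern shown earlier in this document; the object name and role label are assumptions, not values from the original procedure:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-swapaccount
spec:
  kernelArguments:
    - swapaccount=1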
Procedure
Apply a custom label to the machine config pool where you want to allow swap memory.
$ oc label machineconfigpool worker kubelet-swap=enabled
Create a custom resource (CR) to enable and configure swap settings.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: swap-config
spec:
  machineConfigPoolSelector:
    matchLabels:
      kubelet-swap: enabled
  kubeletConfig:
    failSwapOn: false (1)
    memorySwap:
      swapBehavior: LimitedSwap (2)
1 Set to false to enable swap memory use on the associated nodes. Set to true to disable swap memory use.
2 Specify the swap memory behavior. If unspecified, the default is LimitedSwap.
Enable swap memory on the machines.
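Once swap is present and the kubelet configuration has been applied, you can optionally confirm that swap is active on a node. This is a suggested check, assuming the standard swapon utility is available on the host:
$ oc debug node/<node_name> -- chroot /host swapon --show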
Migrating control plane nodes from one RHOSP host to another
You can run a script that moves a control plane node from one OpenStack node to another.
Prerequisites
The environment variable OS_CLOUD refers to a clouds entry that has administrative credentials in a clouds.yaml file.
The environment variable KUBECONFIG refers to a configuration that contains administrative OKD credentials.
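For example, assuming placeholder values for the cloud entry and the kubeconfig path:
$ export OS_CLOUD=<cloud_entry_name>
$ export KUBECONFIG=<path_to_kubeconfig>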
Procedure
- From a command line, run the following script:
#!/usr/bin/env bash
set -Eeuo pipefail
if [ $# -lt 1 ]; then
echo "Usage: '$0 node_name'"
exit 64
fi
# Check for admin OpenStack credentials
openstack server list --all-projects >/dev/null || { >&2 echo "The script needs OpenStack admin credentials. Exiting"; exit 77; }
# Check for admin OpenShift credentials
oc adm top node >/dev/null || { >&2 echo "The script needs OpenShift admin credentials. Exiting"; exit 77; }
set -x
declare -r node_name="$1"
declare server_id
server_id="$(openstack server list --all-projects -f value -c ID -c Name | grep "$node_name" | cut -d' ' -f1)"
readonly server_id
# Drain the node
oc adm cordon "$node_name"
oc adm drain "$node_name" --delete-emptydir-data --ignore-daemonsets --force
# Power off the server
oc debug "node/${node_name}" -- chroot /host shutdown -h 1
# Verify the server is shut off
until openstack server show "$server_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done
# Migrate the node
openstack server migrate --wait "$server_id"
# Resize the VM
openstack server resize confirm "$server_id"
# Wait for the resize confirm to finish
until openstack server show "$server_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done
# Restart the VM
openstack server start "$server_id"
# Wait for the node to show up as Ready:
until oc get node "$node_name" | grep -q "^${node_name}[[:space:]]\+Ready"; do sleep 5; done
# Uncordon the node
oc adm uncordon "$node_name"
# Wait for cluster operators to stabilize
until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type "Degraded" }}{{ if ne .status "False" }}DEGRADED{{ end }}{{ else if eq .type "Progressing"}}{{ if ne .status "False" }}PROGRESSING{{ end }}{{ else if eq .type "Available"}}{{ if ne .status "True" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\(DEGRADED\|PROGRESSING\|NOTAVAILABLE\)'; do sleep 5; done
If the script completes, the control plane machine is migrated to a new OpenStack node.
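For example, if you saved the script as migrate-control-plane.sh (a hypothetical file name) and made it executable, you could run it against a control plane node as follows:
$ ./migrate-control-plane.sh <control_plane_node_name>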