Understanding OKD update duration
OKD update duration varies based on the deployment topology. This page helps you understand the factors that affect update duration and estimate how long the cluster update takes in your environment.
Prerequisites
- You are familiar with OpenShift Container Platform architecture and OpenShift Container Platform updates.
Factors affecting update duration
The following factors can affect your cluster update duration:
- The reboot of compute nodes to the new machine configuration by the Machine Config Operator (MCO)
- The value of maxUnavailable in the machine config pool
- The minimum number or percentage of replicas set in the pod disruption budget (PDB)
- The number of nodes in the cluster
- The health of the cluster nodes
You can inspect several of these values before you start an update, as shown in the example after this list.
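For example, you can list the machine config pools and the pod disruption budgets that are defined in the cluster by running the following commands:
$ oc get machineconfigpool
$ oc get poddisruptionbudget --all-namespaces
If maxUnavailable is explicitly set for a pool, it appears in the pool spec, which you can inspect with a command such as oc get machineconfigpool worker -o yaml.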
Cluster update phases
In OKD, the cluster update happens in two phases:
Cluster Version Operator (CVO) target update payload deployment
Machine Config Operator (MCO) node updates
Cluster Version Operator target update payload deployment
The Cluster Version Operator (CVO) retrieves the target update release image and applies it to the cluster. All components that run as pods are updated during this phase, whereas the host components are updated by the Machine Config Operator (MCO). This process might take 60 to 120 minutes.
The CVO phase of the update does not restart the nodes.
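You can monitor the CVO phase from the terminal. For example, the following commands report the overall cluster version status and the state of each cluster Operator while the update payload is being applied:
$ oc adm upgrade
$ oc get clusterversion
$ oc get clusteroperators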
Machine Config Operator node updates
The Machine Config Operator (MCO) applies a new machine configuration to each control plane and compute node. During this process, the MCO performs the following sequential actions on each node of the cluster:
- Cordon and drain the node
- Update the operating system (OS)
- Reboot the node
- Uncordon the node and schedule workloads on the node
When a node is cordoned, workloads cannot be scheduled to it.
The time to complete this process depends on several factors, including the node and infrastructure configuration. This process might take 5 or more minutes to complete per node.
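You can follow this phase from the terminal. For example, watching the nodes during the MCO rollout shows each node move to SchedulingDisabled and back to Ready as it is cordoned, updated, rebooted, and uncordoned:
$ oc get nodes --watch
The UPDATED, UPDATING, and UPDATEDMACHINECOUNT columns in the output of oc get machineconfigpool report the progress of each machine config pool as a whole.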
In addition to the MCO, consider the impact of the following parameters:
- The control plane node update duration is predictable and often shorter than that of the compute nodes, because the control plane workloads are tuned for graceful updates and quick drains.
- You can update the compute nodes in parallel by setting the maxUnavailable field to a value greater than 1 in the machine config pool (MCP). The MCO cordons the number of nodes specified in maxUnavailable and marks them unavailable for update. An example patch command is shown after this list.
- When you increase maxUnavailable on the MCP, it can help the pool to update more quickly. However, if maxUnavailable is set too high and several nodes are cordoned simultaneously, workloads guarded by a pod disruption budget (PDB) could fail to drain because no schedulable node can be found to run the replicas. If you increase maxUnavailable for the MCP, ensure that you still have sufficient schedulable nodes to allow PDB-guarded workloads to drain.
- Before you begin the update, you must ensure that all the nodes are available. Any unavailable nodes can significantly impact the update duration because node unavailability affects the maxUnavailable value and pod disruption budgets. To check the status of nodes from the terminal, run the following command:
$ oc get node
Example Output
NAME STATUS ROLES AGE VERSION
ip-10-0-137-31.us-east-2.compute.internal Ready,SchedulingDisabled worker 12d v1.23.5+3afdacb
ip-10-0-151-208.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb
ip-10-0-176-138.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb
ip-10-0-183-194.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb
ip-10-0-204-102.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb
ip-10-0-207-224.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb
If the status of the node is NotReady or SchedulingDisabled, then the node is not available and this impacts the update duration.
You can also check the status of nodes from the Administrator perspective in the web console by expanding Compute → Nodes.
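As mentioned in the list above, you can change how many compute nodes update in parallel by setting maxUnavailable on the compute machine config pool. The following is a minimal example that assumes the default pool name worker and a target value of 2:
$ oc patch machineconfigpool/worker --type merge --patch '{"spec":{"maxUnavailable":2}}'
After the update completes, consider setting the value back to 1 if you do not want the larger disruption window to persist.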
Estimating cluster update time
Historical update duration of similar clusters provides you with the best estimate for future cluster updates. However, if historical data is not available, you can use the following convention to estimate your cluster update time:
Cluster update time = CVO target update payload deployment time + (# node update iterations x MCO node update time)
A node update iteration consists of one or more nodes updated in parallel. The control plane nodes are always updated in parallel with the compute nodes. In addition, one or more compute nodes can be updated in parallel based on the maxUnavailable value.
For example, to estimate the update time, consider an OKD cluster with three control plane nodes and six compute nodes and each host takes about 5 minutes to reboot.
The time it takes to reboot a particular node varies significantly. In cloud instances, the reboot might take about 1 to 2 minutes, whereas in physical bare metal hosts the reboot might take more than 15 minutes.
Scenario-1
When you set maxUnavailable to 1 for both the control plane and compute node machine config pools (MCPs), the six compute nodes update one after another, one node per iteration:
Cluster update time = 60 + (6 x 5) = 90 minutes
Scenario-2
When you set maxUnavailable to 2 for the compute node MCP, two compute nodes update in parallel in each iteration. Therefore, it takes a total of three iterations to update all the compute nodes.
Cluster update time = 60 + (3 x 5) = 75 minutes
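You can reproduce this arithmetic with a short shell calculation. The values below are the hypothetical figures from Scenario-2; substitute measurements from your own environment:
$ CVO_MINUTES=60; NODE_MINUTES=5; COMPUTE_NODES=6; MAX_UNAVAILABLE=2
$ ITERATIONS=$(( (COMPUTE_NODES + MAX_UNAVAILABLE - 1) / MAX_UNAVAILABLE ))  # ceiling division
$ echo "Estimated update time: $(( CVO_MINUTES + ITERATIONS * NODE_MINUTES )) minutes"
Estimated update time: 75 minutes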
The default setting for maxUnavailable is 1 for all the machine config pools in OKD.
Fedora compute nodes
Fedora compute nodes require an additional run of openshift-ansible to update the node binary components. The actual time spent updating Fedora compute nodes should not be significantly different from that of Fedora CoreOS (FCOS) compute nodes.
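A typical invocation uses the upgrade playbook that ships with openshift-ansible. The following is a sketch only; the installation path and inventory location are assumptions and depend on how openshift-ansible is installed in your environment, so consult the openshift-ansible documentation for your release:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml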