Post-installation cluster tasks
- Available cluster customizations
- Updating the global cluster pull secret
- Adjust worker nodes
- Creating infrastructure machine sets for production environments
- Assigning machine set resources to infrastructure nodes
- Moving resources to infrastructure machine sets
- About the cluster autoscaler
- About the machine autoscaler
- Enabling Technology Preview features using FeatureGates
- etcd tasks
- Pod disruption budgets
- Rotating or removing cloud provider credentials
- Configuring image streams for a disconnected cluster
After installing OKD, you can further expand and customize your cluster to your requirements.
Available cluster customizations
You complete most of the cluster configuration and customization after you deploy your OKD cluster. A number of configuration resources are available.
If you install your cluster on IBM Z, not all features and functions are available.
You modify the configuration resources to configure the major features of the cluster, such as the image registry, networking configuration, image build behavior, and the identity provider.
For current documentation of the settings that you control by using these resources, use the oc explain
command, for example oc explain builds --api-version=config.openshift.io/v1
Cluster configuration resources
All cluster configuration resources are globally scoped (not namespaced) and named cluster.

Resource name | Description |
---|---|
apiserver.config.openshift.io | Provides API server configuration such as certificates and certificate authorities. |
authentication.config.openshift.io | Controls the identity provider and authentication configuration for the cluster. |
build.config.openshift.io | Controls default and enforced configuration for all builds on the cluster. |
console.config.openshift.io | Configures the behavior of the web console interface, including the logout behavior. |
featuregate.config.openshift.io | Enables FeatureGates so that you can use Tech Preview features. |
image.config.openshift.io | Configures how specific image registries should be treated (allowed, disallowed, insecure, CA details). |
ingress.config.openshift.io | Configuration details related to routing such as the default domain for routes. |
oauth.config.openshift.io | Configures identity providers and other behavior related to internal OAuth server flows. |
project.config.openshift.io | Configures how projects are created including the project template. |
proxy.config.openshift.io | Defines proxies to be used by components needing external network access. Note: not all components currently consume this value. |
scheduler.config.openshift.io | Configures scheduler behavior such as policies and default node selectors. |
Operator configuration resources
These configuration resources are cluster-scoped instances, named cluster, which control the behavior of a specific component as owned by a particular Operator.

Resource name | Description |
---|---|
console.operator.openshift.io | Controls console appearance such as branding customizations. |
config.imageregistry.operator.openshift.io | Configures internal image registry settings such as public routing, log levels, proxy settings, resource constraints, replica counts, and storage type. |
config.samples.operator.openshift.io | Configures the Samples Operator to control which example image streams and templates are installed on the cluster. |
Additional configuration resources
These configuration resources represent a single instance of a particular component. In some cases, you can request multiple instances by creating multiple instances of the resource. In other cases, the Operator can use only a specific resource instance name in a specific namespace. Reference the component-specific documentation for details on how and when you can create additional resource instances.
Resource name | Instance name | Namespace | Description |
---|---|---|---|
alertmanager.monitoring.coreos.com | main | openshift-monitoring | Controls the Alertmanager deployment parameters. |
ingresscontroller.operator.openshift.io | default | openshift-ingress-operator | Configures Ingress Operator behavior such as domain, number of replicas, certificates, and controller placement. |
Informational resources
You use these resources to retrieve information about the cluster. Some configurations might require you to edit these resources directly.

Resource name | Instance name | Description |
---|---|---|
clusterversion.config.openshift.io | version | In OKD 4.9, you must not customize the ClusterVersion resource. Instead, follow the process to update a cluster. |
dns.config.openshift.io | cluster | You cannot modify the DNS settings for your cluster. You can view the DNS Operator status. |
infrastructure.config.openshift.io | cluster | Configuration details allowing the cluster to interact with its cloud provider. |
network.config.openshift.io | cluster | You cannot modify your cluster networking after installation. To customize your network, follow the process to customize networking during installation. |
Updating the global cluster pull secret
You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret.
To transfer your cluster to another owner, you must first initiate the transfer in Red Hat OpenShift Cluster Manager, and then update the pull secret on the cluster. Updating a cluster’s pull secret without initiating the transfer in OpenShift Cluster Manager causes the cluster to stop reporting Telemetry metrics in OpenShift Cluster Manager. For more information about transferring cluster ownership, see “Transferring cluster ownership” in the Red Hat OpenShift Cluster Manager documentation.
Cluster resources must adjust to the new pull secret, which can temporarily limit the usability of the cluster.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Optional: To append a new pull secret to the existing pull secret, complete the following steps:
Enter the following command to download the pull secret:
$ oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > <pull_secret_location> (1)
1 Provide the path to the pull secret file.
Enter the following command to add the new pull secret:
$ oc registry login --registry="<registry>" \ (1)
--auth-basic="<username>:<password>" \ (2)
--to=<pull_secret_location> (3)
1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>". 2 Provide the credentials of the new registry. 3 Provide the path to the pull secret file.
Alternatively, you can perform a manual update to the pull secret file.
Enter the following command to update the global pull secret for your cluster:
$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> (1)
1 Provide the path to the new pull secret file.
This update is rolled out to all nodes, which can take some time depending on the size of your cluster.
As of OKD 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot.
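If you choose the manual update mentioned above, the file you edit is an ordinary .dockerconfigjson document. The following is a minimal sketch of its structure; the registry host, base64-encoded credentials, and email are placeholders, not values from your cluster:
{
  "auths": {
    "registry.example.com": {
      "auth": "<base64_encoded_username:password>",
      "email": "you@example.com"
    }
  }
}
After editing the file, pass its path to the oc set data secret/pull-secret command shown above.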
Adjust worker nodes
If you incorrectly sized the worker nodes during deployment, adjust them by creating one or more new machine sets, scaling the new machine sets up, and then scaling the original machine sets down before removing them.
Understanding the difference between machine sets and the machine config pool
MachineSet objects describe OKD nodes with respect to the cloud or machine provider.
The MachineConfigPool object allows MachineConfigController components to define and provide the status of machines in the context of upgrades.
The MachineConfigPool object allows users to configure how upgrades are rolled out to the OKD nodes in the machine config pool.
The NodeSelector object can be replaced with a reference to the MachineSet object.
Scaling a machine set manually
To add or remove an instance of a machine in a machine set, you can manually scale the machine set.
This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have machine sets.
Prerequisites
- Install an OKD cluster and the oc command line.
- Log in to oc as a user with cluster-admin permission.
Procedure
View the machine sets that are in the cluster:
$ oc get machinesets -n openshift-machine-api
The machine sets are listed in the form of <clusterid>-worker-<aws-region-az>.
Scale the machine set:
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
Or:
$ oc edit machineset <machineset> -n openshift-machine-api
You can scale the machine set up or down. It takes several minutes for the new machines to be available.
The machine set deletion policy
Random, Newest, and Oldest are the three supported deletion options. The default is Random, meaning that random machines are chosen and deleted when scaling machine sets down. The deletion policy can be set according to the use case by modifying the particular machine set:
spec:
deletePolicy: <delete_policy>
replicas: <desired_replica_count>
Specific machines can also be prioritized for deletion by adding the annotation machine.openshift.io/cluster-api-delete-machine to the machine of interest, regardless of the deletion policy.
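For example, to flag one machine for removal during the next scale-down, you can set this annotation directly on the Machine resource. The machine name is a placeholder, and the "true" value is the commonly used form:
$ oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine="true"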
By default, the OKD router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker machine set to 0 unless you first relocate the router pods.
Custom machine sets can be used for use cases requiring that services run on specific nodes and that those services are ignored by the controller when the worker machine sets are scaling down. This prevents service disruption.
Creating default cluster-wide node selectors
You can use default cluster-wide node selectors on pods together with labels on nodes to constrain all pods created in a cluster to specific nodes.
With cluster-wide node selectors, when you create a pod in that cluster, OKD adds the default node selectors to the pod and schedules the pod on nodes with matching labels.
You configure cluster-wide node selectors by editing the Scheduler Operator custom resource (CR). You add labels to a node, a machine set, or a machine config. Adding the label to the machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down.
You can add additional key/value pairs to a pod, but you cannot add a different value for a default key.
Procedure
To add a default cluster-wide node selector:
Edit the Scheduler Operator CR to add the default cluster-wide node selectors:
$ oc edit scheduler cluster
Example Scheduler Operator CR with a node selector
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
name: cluster
...
spec:
defaultNodeSelector: type=user-node,region=east (1)
mastersSchedulable: false
policy:
name: ""
1 Add a node selector with the appropriate <key>:<value> pairs.
After making this change, wait for the pods in the openshift-kube-apiserver project to redeploy. This can take several minutes. The default cluster-wide node selector does not take effect until the pods redeploy.
Add labels to a node by using a machine set or editing the node directly:
Use a machine set to add labels to nodes managed by the machine set when a node is created:
Run the following command to add labels to a MachineSet object:
$ oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>":"<value>","<key>":"<value>"}}]' -n openshift-machine-api (1)
1 Add a <key>/<value> pair for each label.
For example:
$ oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api
Verify that the labels are added to the MachineSet object by using the oc edit command:
For example:
$ oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api
Example output
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
...
spec:
...
template:
metadata:
...
spec:
metadata:
labels:
region: east
type: user-node
Redeploy the nodes associated with that machine set by scaling down to 0 and scaling up the nodes:
For example:
$ oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api
$ oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api
When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command:
$ oc get nodes -l <key>=<value>
For example:
$ oc get nodes -l type=user-node
Example output
NAME STATUS ROLES AGE VERSION
ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.22.1
Add labels directly to a node:
Edit the Node object for the node:
$ oc label nodes <name> <key>=<value>
For example, to label a node:
$ oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east
Verify that the labels are added to the node using the oc get command:
$ oc get nodes -l <key>=<value>,<key>=<value>
For example:
$ oc get nodes -l type=user-node,region=east
Example output
NAME STATUS ROLES AGE VERSION
ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.22.1
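After the Scheduler CR and the node labels are in place, you can also spot-check that newly created pods receive the default selector, because it is injected at admission. This is a quick sketch; the pod name and project are placeholders, and pods created before the change are not updated:
$ oc get pod <pod_name> -n <project> -o jsonpath='{.spec.nodeSelector}'
The command prints the key/value pairs that were injected into the pod specification.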
Creating infrastructure machine sets for production environments
You can create a machine set to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.
In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different machine sets, one for each availability zone.
For information on infrastructure nodes and which components can run on infrastructure nodes, see Creating infrastructure machine sets.
For sample machine sets that you can use with these procedures, see Creating machine sets for different clouds.
Creating a machine set
In addition to the ones created by the installation program, you can create your own machine sets to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OKD cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml.
Ensure that you set the <clusterID> and <role> parameter values. An illustrative sample machine set is shown at the end of this procedure.
If you are not sure which value to set for a specific field, you can check an existing machine set from your cluster:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE
agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1d 0 0 55m
agl030519-vplxk-worker-us-east-1e 0 0 55m
agl030519-vplxk-worker-us-east-1f 0 0 55m
Check values of a specific machine set:
$ oc get machineset <machineset_name> -n \
openshift-machine-api -o yaml
Example output
...
template:
metadata:
labels:
machine.openshift.io/cluster-api-cluster: agl030519-vplxk (1)
machine.openshift.io/cluster-api-machine-role: worker (2)
machine.openshift.io/cluster-api-machine-type: worker
machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a
1 The cluster ID. 2 A default node label.
Create the new MachineSet CR:
$ oc create -f <file_name>.yaml
View the list of machine sets:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE
agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m
agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1d 0 0 55m
agl030519-vplxk-worker-us-east-1e 0 0 55m
agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again.
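For reference, the following is a hedged sketch of what an infra machine set might look like on AWS, reusing the cluster ID from the example output above. The AMI ID, instance type, and the omitted provider fields are placeholders; copy the real providerSpec values from an existing worker machine set in your cluster or from the linked cloud-specific samples.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: agl030519-vplxk-infra-us-east-1a
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: agl030519-vplxk
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: agl030519-vplxk
      machine.openshift.io/cluster-api-machineset: agl030519-vplxk-infra-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: agl030519-vplxk
        machine.openshift.io/cluster-api-machine-role: infra
        machine.openshift.io/cluster-api-machine-type: infra
        machine.openshift.io/cluster-api-machineset: agl030519-vplxk-infra-us-east-1a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          kind: AWSMachineProviderConfig
          ami:
            id: <ami_id>
          instanceType: m5.xlarge
          placement:
            availabilityZone: us-east-1a
            region: us-east-1
          # Subnet, security groups, IAM instance profile, credentials secret,
          # and user data secret are omitted here; copy them from an existing
          # worker machine set in your cluster.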
Creating an infrastructure node
See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API.
Requirements of the cluster dictate that infrastructure, also called infra, nodes be provisioned. The installer provisions only control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called app, nodes through labeling.
Procedure
Add a label to the worker node that you want to act as application node:
$ oc label node <node-name> node-role.kubernetes.io/app=""
Add a label to the worker nodes that you want to act as infrastructure nodes:
$ oc label node <node-name> node-role.kubernetes.io/infra=""
Check to see if applicable nodes now have the infra and app roles:
$ oc get nodes
Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod’s selector.
If the default node selector key conflicts with the key of a pod’s label, then the default node selector is not applied.
However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="", when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="", can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles.
You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts.
Edit the Scheduler object:
$ oc edit scheduler cluster
Add the defaultNodeSelector field with the appropriate node selector:
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
name: cluster
...
spec:
defaultNodeSelector: topology.kubernetes.io/region=us-east-1 (1)
...
1 This example node selector deploys pods on nodes in the us-east-1 region by default.
Save the file to apply the changes.
Move infrastructure resources to the newly labeled infra nodes.
Additional resources
- For information on how to configure project node selectors to avoid cluster-wide node selector key conflicts, see Project node selectors.
Creating a machine config pool for infrastructure machines
If you need infrastructure machines to have dedicated configurations, you must create an infra pool.
Procedure
Add a label to the node you want to assign as the infra node with a specific label:
$ oc label node <node_name> <label>
$ oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=
Create a machine config pool that contains both the worker role and your custom role as machine config selector:
$ cat infra.mcp.yaml
Example output
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
name: infra
spec:
machineConfigSelector:
matchExpressions:
- {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} (1)
nodeSelector:
matchLabels:
node-role.kubernetes.io/infra: "" (2)
1 Add the worker role and your custom role. 2 Add the label you added to the node as a nodeSelector.
Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool.
After you have the YAML file, you can create the machine config pool:
$ oc create -f infra.mcp.yaml
Check the machine configs to ensure that the infrastructure configuration rendered successfully:
$ oc get machineconfig
Example output
NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED
00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
99-master-ssh 3.2.0 31d
99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
99-worker-ssh 3.2.0 31d
rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m
rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d
rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d
rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d
rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d
rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h
rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d
rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d
rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d
rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h
rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d
rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d
rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d
You should see a new machine config, with the rendered-infra-* prefix.
Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra. Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes.
After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration.
Create a machine config:
$ cat infra.mc.yaml
Example output
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 51-infra
labels:
machineconfiguration.openshift.io/role: infra (1)
spec:
config:
ignition:
version: 3.2.0
storage:
files:
- path: /etc/infratest
mode: 0644
contents:
source: data:,infra
1 Add the custom role label that matches the machineConfigSelector of the machine config pool.
Apply the machine config to the infra-labeled nodes:
$ oc create -f infra.mc.yaml
Confirm that your new machine config pool is available:
$ oc get mcp
Example output
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s
master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m
worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m
In this example, a worker node was changed to an infra node.
Additional resources
- See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool.
Assigning machine set resources to infrastructure nodes
After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied.
However, when an infra node is assigned the worker role, there is a chance that user workloads can get assigned inadvertently to the infra node. To avoid this, you can apply a taint to the infra node and tolerations for the pods that you want to control.
Binding infrastructure node workloads using taints and tolerations
If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it.
It is recommended that you preserve the dual infra and worker labels on the node and use taints and tolerations to control where user workloads are scheduled.
Prerequisites
- Configure additional MachineSet objects in your OKD cluster.
Procedure
Add a taint to the infra node to prevent scheduling user workloads on it:
Determine if the node has the taint:
$ oc describe nodes <node_name>
Sample output
oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
Roles: worker
...
Taints: node-role.kubernetes.io/infra:NoSchedule
...
This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step.
If you have not configured a taint, add one to prevent scheduling user workloads on the node:
$ oc adm taint nodes <node_name> <key>:<effect>
For example:
$ oc adm taint nodes node1 node-role.kubernetes.io/infra:NoSchedule
This example places a taint on node1 that has key node-role.kubernetes.io/infra and taint effect NoSchedule. Nodes with the NoSchedule effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node.
If a descheduler is used, pods violating node taints could be evicted from the cluster.
Add tolerations for the pod configurations you want to schedule on the infra node, like router, registry, and monitoring workloads. Add the following code to the Pod object specification:
tolerations:
- effect: NoSchedule (1)
key: node-role.kubernetes.io/infra (2)
operator: Exists (3)
1 Specify the effect that you added to the node. 2 Specify the key that you added to the node. 3 Specify the Exists Operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node.
This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node.
Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator.
Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details.
Additional resources
- See Controlling pod placement using the scheduler for general information on scheduling a pod to a node.
Moving resources to infrastructure machine sets
Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created.
Moving the router
You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node.
Prerequisites
- Configure additional machine sets in your OKD cluster.
Procedure
View the IngressController custom resource for the router Operator:
$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml
The command output resembles the following text:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
creationTimestamp: 2019-04-18T12:35:39Z
finalizers:
- ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
generation: 1
name: default
namespace: openshift-ingress-operator
resourceVersion: "11341"
selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
uid: 79509e05-61d6-11e9-bc55-02ce4781844a
spec: {}
status:
availableReplicas: 2
conditions:
- lastTransitionTime: 2019-04-18T12:36:15Z
status: "True"
type: Available
domain: apps.<cluster>.example.com
endpointPublishingStrategy:
type: LoadBalancerService
selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
Edit the ingresscontroller resource and change the nodeSelector to use the infra label:
$ oc edit ingresscontroller default -n openshift-ingress-operator
Add the nodeSelector stanza that references the infra label to the spec section, as shown:
spec:
nodePlacement:
nodeSelector:
matchLabels:
node-role.kubernetes.io/infra: ""
Confirm that the router pod is running on the infra node.
View the list of router pods and note the node name of the running pod:
$ oc get pod -n openshift-ingress -o wide
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none>
router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>
In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.
View the node status of the running pod:
$ oc get node <node_name> (1)
1 Specify the <node_name> that you obtained from the pod list.
Example output
NAME STATUS ROLES AGE VERSION
ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.22.1
Because the role list includes infra, the pod is running on the correct node.
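If you prefer a non-interactive change to oc edit, the same nodePlacement stanza can be applied with a merge patch. This is a sketch of an equivalent command:
$ oc patch ingresscontroller/default -n openshift-ingress-operator --type=merge -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}}}}}'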
Moving the default registry
You configure the registry Operator to deploy its pods to different nodes.
Prerequisites
- Configure additional machine sets in your OKD cluster.
Procedure
View the config/instance object:
$ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml
Example output
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
creationTimestamp: 2019-02-05T13:52:05Z
finalizers:
- imageregistry.operator.openshift.io/finalizer
generation: 1
name: cluster
resourceVersion: "56174"
selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
uid: 36fd3724-294d-11e9-a524-12ffeee2931b
spec:
httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623
logging: 2
managementState: Managed
proxy: {}
replicas: 1
requests:
read: {}
write: {}
storage:
s3:
bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c
region: us-east-1
status:
...
Edit the config/instance object:
$ oc edit configs.imageregistry.operator.openshift.io/cluster
Modify the spec section of the object to resemble the following YAML:
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
namespaces:
- openshift-image-registry
topologyKey: kubernetes.io/hostname
weight: 100
logLevel: Normal
managementState: Managed
nodeSelector:
node-role.kubernetes.io/infra: ""
Verify the registry pod has been moved to the infrastructure node.
Run the following command to identify the node where the registry pod is located:
$ oc get pods -o wide -n openshift-image-registry
Confirm the node has the label you specified:
$ oc describe node <node_name>
Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list.
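As with the ingress controller, the registry node selector can also be applied non-interactively. A sketch of an equivalent merge patch:
$ oc patch configs.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"nodeSelector":{"node-role.kubernetes.io/infra":""}}}'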
Moving the monitoring solution
By default, the Prometheus Cluster Monitoring stack, which contains Prometheus, Grafana, and AlertManager, is deployed to provide cluster monitoring. It is managed by the Cluster Monitoring Operator. To move its components to different machines, you create and apply a custom config map.
Procedure
Save the following ConfigMap definition as the cluster-monitoring-configmap.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
name: cluster-monitoring-config
namespace: openshift-monitoring
data:
config.yaml: |+
alertmanagerMain:
nodeSelector:
node-role.kubernetes.io/infra: ""
prometheusK8s:
nodeSelector:
node-role.kubernetes.io/infra: ""
prometheusOperator:
nodeSelector:
node-role.kubernetes.io/infra: ""
grafana:
nodeSelector:
node-role.kubernetes.io/infra: ""
k8sPrometheusAdapter:
nodeSelector:
node-role.kubernetes.io/infra: ""
kubeStateMetrics:
nodeSelector:
node-role.kubernetes.io/infra: ""
telemeterClient:
nodeSelector:
node-role.kubernetes.io/infra: ""
openshiftStateMetrics:
nodeSelector:
node-role.kubernetes.io/infra: ""
thanosQuerier:
nodeSelector:
node-role.kubernetes.io/infra: ""
Running this config map forces the components of the monitoring stack to redeploy to infrastructure nodes.
Apply the new config map:
$ oc create -f cluster-monitoring-configmap.yaml
Watch the monitoring pods move to the new machines:
$ watch 'oc get pod -n openshift-monitoring -o wide'
If a component has not moved to the infra node, delete the pod with this component:
$ oc delete pod -n openshift-monitoring <pod>
The component from the deleted pod is re-created on the infra node.
Moving OpenShift Logging resources
You can configure the Cluster Logging Operator to deploy the pods for OpenShift Logging components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.
For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.
Prerequisites
- OpenShift Logging and Elasticsearch must be installed. These features are not installed by default.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:
$ oc edit ClusterLogging instance
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
...
spec:
collection:
logs:
fluentd:
resources: null
type: fluentd
logStore:
elasticsearch:
nodeCount: 3
nodeSelector: (1)
node-role.kubernetes.io/infra: ''
redundancyPolicy: SingleRedundancy
resources:
limits:
cpu: 500m
memory: 16Gi
requests:
cpu: 500m
memory: 16Gi
storage: {}
type: elasticsearch
managementState: Managed
visualization:
kibana:
nodeSelector: (1)
node-role.kubernetes.io/infra: ''
proxy:
resources: null
replicas: 1
resources: null
type: kibana
...
1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node.
Verification
To verify that a component has moved, you can use the oc get pod -o wide command.
For example:
You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:
$ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>
You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION
ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.22.1
ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.22.1
ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.22.1
ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.22.1
ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.22.1
ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.22.1
ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.22.1
Note that the node has a node-role.kubernetes.io/infra: '' label:
$ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml
Example output
kind: Node
apiVersion: v1
metadata:
name: ip-10-0-139-48.us-east-2.compute.internal
selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
resourceVersion: '39083'
creationTimestamp: '2020-04-13T19:07:55Z'
labels:
node-role.kubernetes.io/infra: ''
...
To move the Kibana pod, edit the ClusterLogging CR to add a node selector:
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
...
spec:
...
visualization:
kibana:
nodeSelector: (1)
node-role.kubernetes.io/infra: ''
proxy:
resources: null
replicas: 1
resources: null
type: kibana
1 Add a node selector to match the label in the node specification.
After you save the CR, the current Kibana pod is terminated and a new pod is deployed:
$ oc get pods
Example output
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m
fluentd-42dzz 1/1 Running 0 28m
fluentd-d74rq 1/1 Running 0 28m
fluentd-m5vr9 1/1 Running 0 28m
fluentd-nkxl7 1/1 Running 0 28m
fluentd-pdvqb 1/1 Running 0 28m
fluentd-tflh6 1/1 Running 0 28m
kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s
kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s
The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:
$ oc get pod kibana-7d85dcffc8-bfpfp -o wide
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>
After a few moments, the original Kibana pod is removed.
$ oc get pods
Example output
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m
fluentd-42dzz 1/1 Running 0 29m
fluentd-d74rq 1/1 Running 0 29m
fluentd-m5vr9 1/1 Running 0 29m
fluentd-nkxl7 1/1 Running 0 29m
fluentd-pdvqb 1/1 Running 0 29m
fluentd-tflh6 1/1 Running 0 29m
kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s
About the cluster autoscaler
The cluster autoscaler adjusts the size of an OKD cluster to meet its current deployment needs. It uses declarative, Kubernetes-style arguments to provide infrastructure management that does not rely on objects of a specific cloud provider. The cluster autoscaler has a cluster scope, and is not associated with a particular namespace.
The cluster autoscaler increases the size of the cluster when there are pods that fail to schedule on any of the current worker nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify.
The cluster autoscaler computes the total memory, CPU, and GPU on all nodes in the cluster, even though it does not manage the control plane nodes. These values are not single-machine oriented. They are an aggregation of all the resources in the entire cluster. For example, if you set the maximum memory resource limit, the cluster autoscaler includes all the nodes in the cluster when calculating the current memory usage. That calculation is then used to determine if the cluster autoscaler has the capacity to add more worker resources.
Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition that you create is large enough to account for the total possible number of machines in your cluster.
Every 10 seconds, the cluster autoscaler checks which nodes are unnecessary in the cluster and removes them. The cluster autoscaler considers a node for removal if the following conditions apply:
The sum of CPU and memory requests of all pods running on the node is less than 50% of the allocated resources on the node.
The cluster autoscaler can move all pods running on the node to the other nodes.
The node does not have the cluster autoscaler scale-down disabled annotation.
If the following types of pods are present on a node, the cluster autoscaler will not remove the node:
Pods with restrictive pod disruption budgets (PDBs).
Kube-system pods that do not run on the node by default.
Kube-system pods that do not have a PDB or have a PDB that is too restrictive.
Pods that are not backed by a controller object such as a deployment, replica set, or stateful set.
Pods with local storage.
Pods that cannot be moved elsewhere because of a lack of resources, incompatible node selectors or affinity, matching anti-affinity, and so on.
Pods that have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation, unless they also have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "true" annotation. (An example of setting this annotation follows this list.)
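For example, to mark a pod as safe for the cluster autoscaler to evict, you can set the annotation directly; the pod name and namespace are placeholders:
$ oc annotate pod <pod_name> -n <namespace> cluster-autoscaler.kubernetes.io/safe-to-evict="true"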
For example, you set the maximum CPU limit to 64 cores and configure the cluster autoscaler to only create machines that have 8 cores each. If your cluster starts with 30 cores, the cluster autoscaler can add up to 4 more nodes with 32 cores, for a total of 62.
If you configure the cluster autoscaler, additional usage restrictions apply:
Do not modify the nodes that are in autoscaled node groups directly. All nodes within the same node group have the same capacity and labels and run the same system pods.
Specify requests for your pods.
If you have to prevent pods from being deleted too quickly, configure appropriate PDBs.
Confirm that your cloud provider quota is large enough to support the maximum node pools that you configure.
Do not run additional node group autoscalers, especially the ones offered by your cloud provider.
The horizontal pod autoscaler (HPA) and the cluster autoscaler modify cluster resources in different ways. The HPA changes the deployment’s or replica set’s number of replicas based on the current CPU load. If the load increases, the HPA creates new replicas, regardless of the amount of resources available to the cluster. If there are not enough resources, the cluster autoscaler adds resources so that the HPA-created pods can run. If the load decreases, the HPA stops some replicas. If this action causes some nodes to be underutilized or completely empty, the cluster autoscaler deletes the unnecessary nodes.
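As an illustration of that interplay, the following is a minimal sketch of a horizontal pod autoscaler that scales a hypothetical frontend deployment on CPU utilization. The deployment name, namespace, and thresholds are placeholders; the cluster autoscaler becomes involved only when the replicas that the HPA requests cannot be scheduled on existing nodes:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
  namespace: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75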
The cluster autoscaler takes pod priorities into account. The Pod Priority and Preemption feature enables scheduling pods based on priorities if the cluster does not have enough resources, but the cluster autoscaler ensures that the cluster has resources to run all pods. To honor the intention of both features, the cluster autoscaler includes a priority cutoff function. You can use this cutoff to schedule “best-effort” pods, which do not cause the cluster autoscaler to increase resources but instead run only when spare resources are available.
Pods with priority lower than the cutoff value do not cause the cluster to scale up or prevent the cluster from scaling down. No new nodes are added to run the pods, and nodes running these pods might be deleted to free resources.
ClusterAutoscaler resource definition
This ClusterAutoscaler resource definition shows the parameters and sample values for the cluster autoscaler.
apiVersion: "autoscaling.openshift.io/v1"
kind: "ClusterAutoscaler"
metadata:
name: "default"
spec:
podPriorityThreshold: -10 (1)
resourceLimits:
maxNodesTotal: 24 (2)
cores:
min: 8 (3)
max: 128 (4)
memory:
min: 4 (5)
max: 256 (6)
gpus:
- type: nvidia.com/gpu (7)
min: 0 (8)
max: 16 (9)
- type: amd.com/gpu (7)
min: 0 (8)
max: 4 (9)
scaleDown: (10)
enabled: true (11)
delayAfterAdd: 10m (12)
delayAfterDelete: 5m (13)
delayAfterFailure: 30s (14)
unneededTime: 5m (15)
1 | Specify the priority that a pod must exceed to cause the cluster autoscaler to deploy additional nodes. Enter a 32-bit integer value. The podPriorityThreshold value is compared to the value of the PriorityClass that you assign to each pod. |
2 | Specify the maximum number of nodes to deploy. This value is the total number of machines that are deployed in your cluster, not just the ones that the autoscaler controls. Ensure that this value is large enough to account for all of your control plane and compute machines and the total number of replicas that you specify in your MachineAutoscaler resources. |
3 | Specify the minimum number of cores to deploy in the cluster. |
4 | Specify the maximum number of cores to deploy in the cluster. |
5 | Specify the minimum amount of memory, in GiB, in the cluster. |
6 | Specify the maximum amount of memory, in GiB, in the cluster. |
7 | Optionally, specify the type of GPU node to deploy. Only nvidia.com/gpu and amd.com/gpu are valid types. |
8 | Specify the minimum number of GPUs to deploy in the cluster. |
9 | Specify the maximum number of GPUs to deploy in the cluster. |
10 | In this section, you can specify the period to wait for each action by using any valid ParseDuration interval, including ns , us , ms , s , m , and h . |
11 | Specify whether the cluster autoscaler can remove unnecessary nodes. |
12 | Optionally, specify the period to wait before deleting a node after a node has recently been added. If you do not specify a value, the default value of 10m is used. |
13 | Specify the period to wait before deleting a node after a node has recently been deleted. If you do not specify a value, the default value of 10s is used. |
14 | Specify the period to wait before deleting a node after a scale down failure occurred. If you do not specify a value, the default value of 3m is used. |
15 | Specify the period before an unnecessary node is eligible for deletion. If you do not specify a value, the default value of 10m is used. |
When performing a scaling operation, the cluster autoscaler remains within the ranges set in the ClusterAutoscaler resource definition. The minimum and maximum CPUs, memory, and GPU values are determined by calculating those resources on all nodes in the cluster, even if the cluster autoscaler does not manage the nodes. For example, the control plane nodes are considered in the total memory in the cluster, even though the cluster autoscaler does not manage the control plane nodes.
Deploying the cluster autoscaler
To deploy the cluster autoscaler, you create an instance of the ClusterAutoscaler resource.
Procedure
Create a YAML file for the ClusterAutoscaler resource that contains the customized resource definition.
Create the resource in the cluster:
$ oc create -f <filename>.yaml (1)
1 <filename> is the name of the resource file that you customized.
About the machine autoscaler
The machine autoscaler adjusts the number of Machines in the machine sets that you deploy in an OKD cluster. You can scale both the default worker machine set and any other machine sets that you create. The machine autoscaler makes more Machines when the cluster runs out of resources to support more deployments. Any changes to the values in MachineAutoscaler resources, such as the minimum or maximum number of instances, are immediately applied to the machine set they target.
You must deploy a machine autoscaler for the cluster autoscaler to scale your machines. The cluster autoscaler uses the annotations on machine sets that the machine autoscaler sets to determine the resources that it can scale. If you define a cluster autoscaler without also defining machine autoscalers, the cluster autoscaler will never scale your cluster.
MachineAutoscaler resource definition
This MachineAutoscaler resource definition shows the parameters and sample values for the machine autoscaler.
apiVersion: "autoscaling.openshift.io/v1beta1"
kind: "MachineAutoscaler"
metadata:
name: "worker-us-east-1a" (1)
namespace: "openshift-machine-api"
spec:
minReplicas: 1 (2)
maxReplicas: 12 (3)
scaleTargetRef: (4)
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet (5)
name: worker-us-east-1a (6)
1 | Specify the machine autoscaler name. To make it easier to identify which machine set this machine autoscaler scales, specify or include the name of the machine set to scale. The machine set name takes the following form: <clusterid>-<machineset>-<aws-region-az>. |
2 | Specify the minimum number of machines of the specified type that must remain in the specified zone after the cluster autoscaler initiates cluster scaling. If running in AWS, GCP, Azure, RHOSP, or vSphere, this value can be set to 0. For other providers, do not set this value to 0. On the providers that support it, you can save on costs by setting this value to 0. |
3 | Specify the maximum number of machines of the specified type that the cluster autoscaler can deploy in the specified AWS zone after it initiates cluster scaling. Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition is large enough to allow the machine autoscaler to deploy this number of machines. |
4 | In this section, provide values that describe the existing machine set to scale. |
5 | The kind parameter value is always MachineSet. |
6 | The name value must match the name of an existing machine set, as shown in the metadata.name parameter value. |
Deploying the machine autoscaler
To deploy the machine autoscaler, you create an instance of the MachineAutoscaler resource.
Procedure
Create a YAML file for the MachineAutoscaler resource that contains the customized resource definition.
Create the resource in the cluster:
$ oc create -f <filename>.yaml (1)
1 <filename> is the name of the resource file that you customized.
Enabling Technology Preview features using FeatureGates
You can turn on a subset of the current Technology Preview features for all nodes in the cluster by editing the FeatureGate custom resource (CR).
Understanding feature gates
You can use the FeatureGate custom resource (CR) to enable specific feature sets in your cluster. A feature set is a collection of OKD features that are not enabled by default.
For example, the TechPreviewNoUpgrade feature set allows you to enable a subset of the current Technology Preview features on test clusters, where you can fully test them, while leaving the features disabled on production clusters.
You can activate any of the following feature sets by using the FeatureGate CR:
Feature Set | Description |
---|---|
TechPreviewNoUpgrade | Enables Technology Preview features that are not part of the default features. Enabling this feature set cannot be undone and prevents upgrades. This feature set is not recommended on production clusters. |
Enabling feature sets using the CLI
You can use the OpenShift CLI (oc) to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR).
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
To enable feature sets:
Edit the FeatureGate CR named cluster:
$ oc edit featuregate cluster
Sample FeatureGate custom resource
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
name: cluster (1)
spec:
featureSet: TechPreviewNoUpgrade (2)
1 The name of the FeatureGate CR must be cluster. 2 Add the feature sets that you want to enable in a comma-separated list: TechPreviewNoUpgrade enables specific Technology Preview features.
Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents upgrades. These feature sets are not recommended on production clusters.
etcd tasks
Back up etcd, enable or disable etcd encryption, or defragment etcd data.
About etcd encryption
By default, etcd data is not encrypted in OKD. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect against the loss of sensitive data if an etcd backup is exposed to the incorrect parties.
When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted:
Secrets
Config maps
Routes
OAuth access tokens
OAuth authorize tokens
When you enable etcd encryption, encryption keys are created. These keys are rotated on a weekly basis. You must have these keys to restore from an etcd backup.
Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted.
Enabling etcd encryption
You can enable etcd encryption to encrypt sensitive resources in your cluster.
It is not recommended to take a backup of etcd until the initial encryption process is complete. If the encryption process has not completed, the backup might be only partially encrypted.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
Modify the APIServer object:
$ oc edit apiserver
Set the encryption field type to aescbc:
spec:
encryption:
type: aescbc (1)
1 The aescbc type means that AES-CBC with PKCS#7 padding and a 32 byte key is used to perform the encryption.
Save the file to apply the changes.
The encryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster.
Verify that etcd encryption was successful.
Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully encrypted:
$ oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows EncryptionCompleted upon successful encryption:
EncryptionCompleted
All resources encrypted: routes.route.openshift.io
If the output shows EncryptionInProgress, encryption is still in progress. Wait a few minutes and try again.
Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully encrypted:
$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows EncryptionCompleted upon successful encryption:
EncryptionCompleted
All resources encrypted: secrets, configmaps
If the output shows EncryptionInProgress, encryption is still in progress. Wait a few minutes and try again.
Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully encrypted:
$ oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows EncryptionCompleted upon successful encryption:
EncryptionCompleted
All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io
If the output shows EncryptionInProgress, encryption is still in progress. Wait a few minutes and try again.
Disabling etcd encryption
You can disable encryption of etcd data in your cluster.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
Modify the APIServer object:
$ oc edit apiserver
Set the encryption field type to identity:
spec:
encryption:
type: identity (1)
1 The identity type is the default value and means that no encryption is performed.
Save the file to apply the changes.
The decryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster.
Verify that etcd decryption was successful.
Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully decrypted:
$ oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows DecryptionCompleted upon successful decryption:
DecryptionCompleted
Encryption mode set to identity and everything is decrypted
If the output shows DecryptionInProgress, decryption is still in progress. Wait a few minutes and try again.
Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully decrypted:
$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows DecryptionCompleted upon successful decryption:
DecryptionCompleted
Encryption mode set to identity and everything is decrypted
If the output shows DecryptionInProgress, decryption is still in progress. Wait a few minutes and try again.
Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully decrypted:
$ oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows DecryptionCompleted upon successful decryption:
DecryptionCompleted
Encryption mode set to identity and everything is decrypted
If the output shows DecryptionInProgress, decryption is still in progress. Wait a few minutes and try again.
Backing up etcd data
Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd.
Only save a backup from a single control plane host. Do not take a backup from each control plane host in the cluster.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have checked whether the cluster-wide proxy is enabled.
You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
Procedure
Start a debug session for a control plane node:
$ oc debug node/<node_name>
Change your root directory to the host:
sh-4.2# chroot /host
If the cluster-wide proxy is enabled, be sure that you have exported the
NO_PROXY
,HTTP_PROXY
, andHTTPS_PROXY
environment variables.Run the
cluster-backup.sh
script and pass in the location to save the backup to.The
cluster-backup.sh
script is maintained as a component of the etcd Cluster Operator and is a wrapper around theetcdctl snapshot save
command.sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
Example script output
found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6
found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7
found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6
found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3
ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1
etcdctl version: 3.4.14
API version: 3.4
{"level":"info","ts":1624647639.0188997,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part"}
{"level":"info","ts":"2021-06-25T19:00:39.030Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1624647639.0301006,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"}
{"level":"info","ts":"2021-06-25T19:00:40.215Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
{"level":"info","ts":1624647640.6032252,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"114 MB","took":1.584090459}
{"level":"info","ts":1624647640.6047094,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db"}
Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db
{"hash":3866667823,"revision":31407,"totalKey":12828,"totalSize":114446336}
snapshot db and kube resources are successfully saved to /home/core/assets/backup
In this example, two files are created in the
/home/core/assets/backup/
directory on the control plane host:snapshot_<datetimestamp>.db
: This file is the etcd snapshot. Thecluster-backup.sh
script confirms its validity.static_kuberesources_<datetimestamp>.tar.gz
: This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot.If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot.
Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted.
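As an optional check, you can inspect the snapshot file before copying it off the host. This sketch assumes that an etcdctl binary is available in your environment, for example by running it from an etcd container, and that you substitute the timestamp of your own backup file:
sh-4.4# etcdctl snapshot status /home/core/assets/backup/snapshot_<datetimestamp>.db -w table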
Defragmenting etcd data
Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction.
History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system.
Defragmentation occurs automatically, but you can also trigger it manually.
Automatic defragmentation is good for most cases, because the etcd operator uses cluster information to determine the most efficient operation for the user. |
Automatic defragmentation
The etcd Operator automatically defragments disks. No manual intervention is needed.
Verify that the defragmentation process is successful by viewing one of these logs:
etcd logs
cluster-etcd-operator pod
operator status error log
Example log output
I0907 08:43:12.171919 1 defragcontroller.go:198] etcd member "ip-10-0-191-150.us-west-2.compute.internal" backend store fragmented: 39.33 %, dbSize: 349138944
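To search for messages like this one, you can read the operator logs directly. This sketch assumes that the cluster-etcd-operator runs as the etcd-operator deployment in the openshift-etcd-operator namespace:
$ oc logs -n openshift-etcd-operator deployment/etcd-operator | grep -i defrag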
Manual defragmentation
You can monitor the etcd_db_total_size_in_bytes
metric to determine whether manual defragmentation is necessary.
Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover. |
Follow this procedure to defragment etcd data on each etcd member.
Prerequisites
- You have access to the cluster as a user with the
cluster-admin
role.
Procedure
Determine which etcd member is the leader, because the leader should be defragmented last.
Get the list of etcd pods:
$ oc get pods -n openshift-etcd -o wide | grep -v quorum-guard | grep etcd
Example output
etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none>
etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none>
etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>
Choose a pod and run the following command to determine which etcd member is the leader:
$ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table
Example output
Defaulting container name to etcdctl.
Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod.
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | |
| https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | |
| https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
Based on the
IS LEADER
column of this output, thehttps://10.0.199.170:2379
endpoint is the leader. Matching this endpoint with the output of the previous step, the pod name of the leader isetcd-ip-10-0-199-170.example.redhat.com
.
Defragment an etcd member.
Connect to the running etcd container, passing in the name of a pod that is not the leader:
$ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com
Unset the
ETCDCTL_ENDPOINTS
environment variable:sh-4.4# unset ETCDCTL_ENDPOINTS
Defragment the etcd member:
sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag
Example output
Finished defragmenting etcd member[https://localhost:2379]
If a timeout error occurs, increase the value for
--command-timeout
until the command succeeds.Verify that the database size was reduced:
sh-4.4# etcdctl endpoint status -w table --cluster
Example output
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | |
| https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | (1)
| https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB.
Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last.
Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond.
If any
NOSPACE
alarms were triggered due to the space quota being exceeded, clear them.Check if there are any
NOSPACE
alarms:sh-4.4# etcdctl alarm list
Example output
memberID:12345678912345678912 alarm:NOSPACE
Clear the alarms:
sh-4.4# etcdctl alarm disarm
Restoring to a previous cluster state
You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.
When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OKD 4.7.2 cluster must use an etcd backup that was taken from 4.7.2. |
Prerequisites
Access to the cluster as a user with the
cluster-admin
role.A healthy control plane host to use as the recovery host.
SSH access to control plane hosts.
A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats:
snapshot_<datetimestamp>.db
andstatic_kuberesources_<datetimestamp>.tar.gz
.
Procedure
Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on.
Establish SSH connectivity to each of the control plane nodes, including the recovery host.
The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal.
If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.
Copy the etcd backup directory to the recovery control plane host.
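For example, a minimal sketch that copies the backup taken in the earlier backup procedure from the host where it was created to the recovery host; the host names here are placeholders:
[core@<backup_host> ~]$ scp -r /home/core/assets/backup core@<recovery_host>:/home/core/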
This procedure assumes that you copied the
backup
directory containing the etcd snapshot and the resources for the static pods to the/home/core/
directory of your recovery control plane host.Stop the static pods on any other control plane nodes.
It is not required to manually stop the pods on the recovery host. The recovery script will stop the pods on the recovery host.
Access a control plane host that is not the recovery host.
Move the existing etcd pod file out of the kubelet manifest directory:
[core@ip-10-0-154-194 ~]$ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp
Verify that the etcd pods are stopped.
[core@ip-10-0-154-194 ~]$ sudo crictl ps | grep etcd | grep -v operator
The output of this command should be empty. If it is not empty, wait a few minutes and check again.
Move the existing Kubernetes API server pod file out of the kubelet manifest directory:
[core@ip-10-0-154-194 ~]$ sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp
Verify that the Kubernetes API server pods are stopped.
[core@ip-10-0-154-194 ~]$ sudo crictl ps | grep kube-apiserver | grep -v operator
The output of this command should be empty. If it is not empty, wait a few minutes and check again.
Move the etcd data directory to a different location:
[core@ip-10-0-154-194 ~]$ sudo mv /var/lib/etcd/ /tmp
Repeat this step on each of the other control plane hosts that is not the recovery host.
Access the recovery control plane host.
If the cluster-wide proxy is enabled, be sure that you have exported the
NO_PROXY
,HTTP_PROXY
, andHTTPS_PROXY
environment variables.You can check whether the proxy is enabled by reviewing the output of
oc get proxy cluster -o yaml
. The proxy is enabled if thehttpProxy
,httpsProxy
, andnoProxy
fields have values set.Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory:
[core@ip-10-0-143-125 ~]$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup
Example script output
...stopping kube-scheduler-pod.yaml
...stopping kube-controller-manager-pod.yaml
...stopping etcd-pod.yaml
...stopping kube-apiserver-pod.yaml
Waiting for container etcd to stop
.complete
Waiting for container etcdctl to stop
.............................complete
Waiting for container etcd-metrics to stop
complete
Waiting for container kube-controller-manager to stop
complete
Waiting for container kube-apiserver to stop
..........................................................................................complete
Waiting for container kube-scheduler to stop
complete
Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup
starting restore-etcd static pod
starting kube-apiserver-pod.yaml
static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml
starting kube-controller-manager-pod.yaml
static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml
starting kube-scheduler-pod.yaml
static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml
Restart the kubelet service on all control plane hosts.
From the recovery host, run the following command:
[core@ip-10-0-143-125 ~]$ sudo systemctl restart kubelet.service
Repeat this step on all other control plane hosts.
Approve the pending CSRs:
Get the list of current CSRs:
$ oc get csr
Example output
NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending (1)
csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending (1)
csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending (2)
csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending (2)
...
1 A pending kubelet service CSR (for user-provisioned installations). 2 A pending node-bootstrapper
CSR.Review the details of a CSR to verify that it is valid:
$ oc describe csr <csr_name> (1)
1 <csr_name>
is the name of a CSR from the list of current CSRs.Approve each valid
node-bootstrapper
CSR:$ oc adm certificate approve <csr_name>
For user-provisioned installations, approve each valid kubelet service CSR:
$ oc adm certificate approve <csr_name>
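If many CSRs are pending, you can approve them in bulk after reviewing the list. This is only a convenience sketch; confirm with oc get csr that every pending request is expected before running it:
$ oc get csr -o name | xargs oc adm certificate approve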
Verify that the single member control plane has started successfully.
From the recovery host, verify that the etcd container is running.
[core@ip-10-0-143-125 ~]$ sudo crictl ps | grep etcd | grep -v operator
Example output
3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0
From the recovery host, verify that the etcd pod is running.
[core@ip-10-0-143-125 ~]$ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd
If you attempt to run
oc login
prior to running this command and receive the following error, wait a few moments for the authentication controllers to start and try again.Unable to connect to the server: EOF
Example output
NAME READY STATUS RESTARTS AGE
etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s
If the status is
Pending
, or the output lists more than one running etcd pod, wait a few minutes and check again.Repeat this step for each lost control plane host that is not the recovery host.
Delete and recreate the control plane machines that are not the recovery host, one by one. After these machines are recreated, a new revision is forced and etcd scales up automatically.
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new control plane node by using the same method that was used to originally create it.
Do not delete and recreate the machine for the recovery host.
Obtain the machine for one of the lost control plane hosts.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get machines -n openshift-machine-api -o wide
Example output:
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped (1)
clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
1 This is the control plane machine for the lost control plane host, ip-10-0-131-183.ec2.internal
.Save the machine configuration to a file on your file system:
$ oc get machine clustername-8qw5l-master-0 \ (1)
-n openshift-machine-api \
-o yaml \
> new-master-machine.yaml
1 Specify the name of the control plane machine for the lost control plane host. Edit the
new-master-machine.yaml
file that was created in the previous step to assign a new name and remove unnecessary fields.Remove the entire
status
section:status:
addresses:
- address: 10.0.131.183
type: InternalIP
- address: ip-10-0-131-183.ec2.internal
type: InternalDNS
- address: ip-10-0-131-183.ec2.internal
type: Hostname
lastUpdated: "2020-04-20T17:44:29Z"
nodeRef:
kind: Node
name: ip-10-0-131-183.ec2.internal
uid: acca4411-af0d-4387-b73e-52b2484295ad
phase: Running
providerStatus:
apiVersion: awsproviderconfig.openshift.io/v1beta1
conditions:
- lastProbeTime: "2020-04-20T16:53:50Z"
lastTransitionTime: "2020-04-20T16:53:50Z"
message: machine successfully created
reason: MachineCreationSucceeded
status: "True"
type: MachineCreation
instanceId: i-0fdb85790d76d0c3f
instanceState: stopped
kind: AWSMachineProviderStatus
Change the
metadata.name
field to a new name.It is recommended to keep the same base name as the old machine and change the ending number to the next available number. In this example,
clustername-8qw5l-master-0
is changed toclustername-8qw5l-master-3
:apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
...
name: clustername-8qw5l-master-3
...
Update the
metadata.selfLink
field to use the new machine name from the previous step:apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
...
selfLink: /apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machines/clustername-8qw5l-master-3
...
Remove the
spec.providerID
field:providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f
Remove the
metadata.annotations
andmetadata.generation
fields:annotations:
machine.openshift.io/instance-state: running
...
generation: 2
Remove the
metadata.resourceVersion
andmetadata.uid
fields:resourceVersion: "13291"
uid: a282eb70-40a2-4e89-8009-d05dd420d31a
Delete the machine of the lost control plane host:
$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 (1)
1 Specify the name of the control plane machine for the lost control plane host. Verify that the machine was deleted:
$ oc get machines -n openshift-machine-api -o wide
Example output:
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
Create the new machine using the
new-master-machine.yaml
file:$ oc apply -f new-master-machine.yaml
Verify that the new machine has been created:
$ oc get machines -n openshift-machine-api -o wide
Example output:
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running (1)
clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
1 The new machine, clustername-8qw5l-master-3
is being created and is ready after the phase changes fromProvisioning
toRunning
.It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
Repeat these steps for each lost control plane host that is not the recovery host.
In a separate terminal window, log in to the cluster as a user with the
cluster-admin
role by using the following command:$ oc login -u <cluster_admin> (1)
1 For <cluster_admin>
, specify a user name with thecluster-admin
role.Force etcd redeployment.
In a terminal that has access to the cluster as a
cluster-admin
user, run the following command:$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge (1)
1 The forceRedeploymentReason
value must be unique, which is why a timestamp is appended.When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up.
Verify all nodes are updated to the latest revision.
In a terminal that has access to the cluster as a
cluster-admin
user, run the following command:$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the
NodeInstallerProgressing
status condition for etcd to verify that all nodes are at the latest revision. The output showsAllNodesAtLatestRevision
upon successful update:AllNodesAtLatestRevision
3 nodes are at revision 7 (1)
1 In this example, the latest revision number is 7
.If the output includes multiple revision numbers, such as
2 nodes are at revision 6; 1 nodes are at revision 7
, this means that the update is still in progress. Wait a few minutes and try again.After etcd is redeployed, force new rollouts for the control plane. The Kubernetes API server will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer.
In a terminal that has access to the cluster as a
cluster-admin
user, run the following commands.Force a new rollout for the Kubernetes API server:
$ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the
NodeInstallerProgressing
status condition to verify that all nodes are at the latest revision. The output showsAllNodesAtLatestRevision
upon successful update:AllNodesAtLatestRevision
3 nodes are at revision 7 (1)
1 In this example, the latest revision number is 7
.If the output includes multiple revision numbers, such as
2 nodes are at revision 6; 1 nodes are at revision 7
, this means that the update is still in progress. Wait a few minutes and try again.Force a new rollout for the Kubernetes controller manager:
$ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the
NodeInstallerProgressing
status condition to verify that all nodes are at the latest revision. The output showsAllNodesAtLatestRevision
upon successful update:AllNodesAtLatestRevision
3 nodes are at revision 7 (1)
1 In this example, the latest revision number is 7
.If the output includes multiple revision numbers, such as
2 nodes are at revision 6; 1 nodes are at revision 7
, this means that the update is still in progress. Wait a few minutes and try again.Force a new rollout for the Kubernetes scheduler:
$ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the
NodeInstallerProgressing
status condition to verify that all nodes are at the latest revision. The output showsAllNodesAtLatestRevision
upon successful update:AllNodesAtLatestRevision
3 nodes are at revision 7 (1)
1 In this example, the latest revision number is 7
.If the output includes multiple revision numbers, such as
2 nodes are at revision 6; 1 nodes are at revision 7
, this means that the update is still in progress. Wait a few minutes and try again.
Verify that all control plane hosts have started and joined the cluster.
In a terminal that has access to the cluster as a
cluster-admin
user, run the following command:$ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd
Example output
etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h
etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h
etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h
To ensure that all workloads return to normal operation following a recovery procedure, restart each pod that stores Kubernetes API information. This includes OKD components such as routers, Operators, and third-party components.
Note that it might take several minutes after completing this procedure for all services to be restored. For example, authentication by using oc login
might not immediately work until the OAuth server pods are restarted.
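For example, restarting the OAuth server pods can be done by deleting them and letting their Deployment recreate them. This is a sketch that assumes the pods run in the openshift-authentication namespace:
$ oc delete pods -n openshift-authentication --all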
Issues and workarounds for restoring a persistent storage state
If your OKD cluster uses persistent storage of any form, the state of the cluster is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet
object. When you restore from an etcd backup, the status of the workloads in OKD is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated.
The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OKD cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice-versa. |
The following are some example scenarios that produce an out-of-date status:
A MySQL database is running in a pod backed by a PV object. Restoring OKD from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume.
Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OKD is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start.
Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to not work. You might have to manually update the credentials required by those drivers or Operators.
A device is removed or renamed from OKD nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from
/dev/disk/by-id
or/dev
directories. This situation might cause the local PVs to refer to devices that no longer exist.To fix this problem, an administrator must:
Manually remove the PVs with invalid devices.
Remove symlinks from respective nodes.
Delete
LocalVolume
orLocalVolumeSet
objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources).
Pod disruption budgets
Understand and configure pod disruption budgets.
Understanding how to use pod disruption budgets to specify the number of pods that must be up
A pod disruption budget is part of the Kubernetes API and can be managed with oc
commands like other object types. It allows you to specify safety constraints on pods during operations, such as draining a node for maintenance.
PodDisruptionBudget
is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or a cluster upgrade) and is only honored on voluntary evictions (not on node failures).
A PodDisruptionBudget
object’s configuration consists of the following key parts:
A label selector, which is a label query over a set of pods.
An availability level, which specifies the minimum number of pods that must be available simultaneously, either:
minAvailable
is the number of pods that must always be available, even during a disruption.maxUnavailable
is the number of pods that can be unavailable during a disruption.
A maxUnavailable of 0% or 0 or a minAvailable of 100% or equal to the number of replicas is permitted but can block nodes from being drained. |
You can check for pod disruption budgets across all projects with the following:
$ oc get poddisruptionbudget --all-namespaces
Example output
NAMESPACE NAME MIN-AVAILABLE SELECTOR
another-project another-pdb 4 bar=foo
test-project my-pdb 2 foo=bar
The PodDisruptionBudget
is considered healthy when there are at least minAvailable
pods running in the system. Every pod above that limit can be evicted.
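You can also check how many voluntary disruptions a budget currently allows. This sketch assumes the my-pdb object from the example output above exists in the test-project namespace:
$ oc get poddisruptionbudget my-pdb -n test-project -o jsonpath='{.status.disruptionsAllowed}{"\n"}'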
Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements. |
Specifying the number of pods that must be up with pod disruption budgets
You can use a PodDisruptionBudget
object to specify the minimum number or percentage of replicas that must be up at a time.
Procedure
To configure a pod disruption budget:
Create a YAML file with an object definition similar to the following:
apiVersion: policy/v1beta1 (1)
kind: PodDisruptionBudget
metadata:
name: my-pdb
spec:
minAvailable: 2 (2)
selector: (3)
matchLabels:
foo: bar
1 PodDisruptionBudget
is part of thepolicy/v1beta1
API group.2 The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, 20%
.3 A label query over a set of resources. The result of matchLabels
andmatchExpressions
are logically conjoined.Or:
apiVersion: policy/v1beta1 (1)
kind: PodDisruptionBudget
metadata:
name: my-pdb
spec:
maxUnavailable: 25% (2)
selector: (3)
matchLabels:
foo: bar
1 PodDisruptionBudget
is part of thepolicy/v1beta1
API group.2 The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, 20%
.3 A label query over a set of resources. The result of matchLabels
andmatchExpressions
are logically conjoined.Run the following command to add the object to the project:
$ oc create -f </path/to/file> -n <project_name>
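You can then confirm that the budget exists in the project. A minimal sketch, reusing the my-pdb name from the examples above:
$ oc get poddisruptionbudget my-pdb -n <project_name>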
Rotating or removing cloud provider credentials
After installing OKD, some organizations require the rotation or removal of the cloud provider credentials that were used during the initial installation.
To allow the cluster to use the new credentials, you must update the secrets that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.
Rotating cloud provider credentials manually
If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.
The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential.
Prerequisites
Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using:
For mint mode, AWS, Azure, and GCP are supported.
For passthrough mode, AWS, Azure, GCP, Red Hat OpenStack Platform (RHOSP), oVirt, and VMware vSphere are supported.
You have changed the credentials that are used to interface with your cloud provider.
The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster.
When rotating the credentials for an Azure cluster that is using mint mode, do not delete or replace the service principal that was used during installation. Instead, generate new Azure service principal client secrets and update the OKD secrets accordingly. |
Procedure
In the Administrator perspective of the web console, navigate to Workloads → Secrets.
In the table on the Secrets page, find the root secret for your cloud provider.
Platform Secret name AWS
aws-creds
Azure
azure-credentials
GCP
gcp-credentials
Click the Options menu in the same row as the secret and select Edit Secret.
Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials.
Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save.
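If you prefer the CLI to the web console, the root secret can be replaced directly. The following is a sketch for AWS only; it assumes the aws-creds secret in the kube-system namespace with its usual aws_access_key_id and aws_secret_access_key keys, and the new values shown are placeholders:
$ oc create secret generic aws-creds -n kube-system --from-literal=aws_access_key_id=<new_access_key_id> --from-literal=aws_secret_access_key=<new_secret_access_key> --dry-run=client -o yaml | oc replace -f -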
If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual
CredentialsRequest
objects.Log in to the OKD CLI as a user with the
cluster-admin
role.Get the names and namespaces of all referenced component secrets:
$ oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec[].kind=="<provider_spec>") | .spec.secretRef'
Where
<provider_spec>
is the corresponding value for your cloud provider:AWSProviderSpec
for AWS,AzureProviderSpec
for Azure, orGCPProviderSpec
for GCP.Partial example output for AWS
{
"name": "ebs-cloud-credentials",
"namespace": "openshift-cluster-csi-drivers"
}
{
"name": "cloud-credential-operator-iam-ro-creds",
"namespace": "openshift-cloud-credential-operator"
}
...
Delete each of the referenced component secrets:
$ oc delete secret <secret_name> -n <secret_namespace>
Where
<secret_name>
is the name of a secret and<secret_namespace>
is the namespace that contains the secret.Example deletion of an AWS secret
$ oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers
You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones.
To verify that the credentials have changed:
In the Administrator perspective of the web console, navigate to Workloads → Secrets.
Verify that the contents of the Value field or fields are different than the previously recorded information.
Removing cloud provider credentials
After installing an OKD cluster with the Cloud Credential Operator (CCO) in mint mode, you can remove the administrator-level credential secret from the kube-system
namespace in the cluster. The administrator-level credential is required only during changes that require its elevated permissions, such as upgrades.
Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked. |
Prerequisites
- Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP.
Procedure
In the Administrator perspective of the web console, navigate to Workloads → Secrets.
In the table on the Secrets page, find the root secret for your cloud provider.
Platform Secret name AWS
aws-creds
GCP
gcp-credentials
Click the Options menu in the same row as the secret and select Delete Secret.
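The same removal can be performed from the CLI. A sketch for AWS, using the secret name from the table above and the kube-system namespace mentioned earlier:
$ oc delete secret aws-creds -n kube-system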
Configuring image streams for a disconnected cluster
After installing OKD in a disconnected environment, configure the image streams for the Cluster Samples Operator and the must-gather
image stream.
Cluster Samples Operator assistance for mirroring
During installation, OKD creates a config map named imagestreamtag-to-image
in the openshift-cluster-samples-operator
namespace. The imagestreamtag-to-image
config map contains an entry, the populating image, for each image stream tag.
The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name>
.
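To review these entries, you can print the config map directly. A minimal sketch using the names given above:
$ oc get configmap imagestreamtag-to-image -n openshift-cluster-samples-operator -o yaml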
During a disconnected installation of OKD, the status of the Cluster Samples Operator is set to Removed
. If you choose to change it to Managed
, it installs samples.
You can use this config map as a reference for which images need to be mirrored for your image streams to import.
While the Cluster Samples Operator is set to
Removed
, you can create your mirrored registry, or determine which existing mirrored registry you want to use.Mirror the samples you want to the mirrored registry using the new config map as your guide.
Add any of the image streams you did not mirror to the
skippedImagestreams
list of the Cluster Samples Operator configuration object.Set
samplesRegistry
of the Cluster Samples Operator configuration object to the mirrored registry.Then set the Cluster Samples Operator to
Managed
to install the image streams you have mirrored.
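The preceding list describes edits to the Cluster Samples Operator configuration object. As a sketch, and treating the mirror registry hostname and the skipped image stream name as placeholders, the changes could be applied with merge patches such as:
$ oc patch configs.samples.operator.openshift.io cluster --type=merge -p '{"spec":{"samplesRegistry":"<mirror_registry_hostname>","skippedImagestreams":["<imagestream_name>"]}}'
$ oc patch configs.samples.operator.openshift.io cluster --type=merge -p '{"spec":{"managementState":"Managed"}}'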
Using Cluster Samples Operator image streams with alternate or mirrored registries
Most image streams in the openshift
namespace managed by the Cluster Samples Operator point to images located in the Red Hat registry at registry.redhat.io. Mirroring will not apply to these image streams.
Prerequisites
Access to the cluster as a user with the
cluster-admin
role.Create a pull secret for your mirror registry.
Procedure
Access the images of a specific image stream to mirror, for example:
$ oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io
Mirror images from registry.redhat.io associated with any image streams you need in the restricted network environment into one of the defined mirrors, for example:
$ oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest ${MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest
Create the cluster’s image configuration object:
$ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config
Add the required trusted CAs for the mirror in the cluster’s image configuration object:
$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge
Update the
samplesRegistry
field in the Cluster Samples Operator configuration object to contain thehostname
portion of the mirror location defined in the mirror configuration:$ oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator
This is required because the image stream import process does not use the mirror or search mechanism at this time.
Add any image streams that are not mirrored into the
skippedImagestreams
field of the Cluster Samples Operator configuration object. Or if you do not want to support any of the sample image streams, set the Cluster Samples Operator toRemoved
in the Cluster Samples Operator configuration object.The Cluster Samples Operator issues alerts if image stream imports are failing but the Cluster Samples Operator is either periodically retrying or does not appear to be retrying them.
Many of the templates in the
openshift
namespace reference the image streams. So usingRemoved
to purge both the image streams and templates will eliminate the possibility of attempts to use them if they are not functional because of any missing image streams.
Preparing your cluster to gather support data
Clusters using a restricted network must import the default must-gather image to gather debugging data for Red Hat support. The must-gather image is not imported by default, and clusters on a restricted network do not have access to the internet to pull the latest image from a remote repository.
Procedure
If you have not added your mirror registry’s trusted CA to your cluster’s image configuration object as part of the Cluster Samples Operator configuration, perform the following steps:
Create the cluster’s image configuration object:
$ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config
Add the required trusted CAs for the mirror in the cluster’s image configuration object:
$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge
Import the default must-gather image from your installation payload:
$ oc import-image is/must-gather -n openshift
When running the oc adm must-gather
command, use the --image
flag and point to the payload image, as in the following example:
$ oc adm must-gather --image=$(oc adm release info --image-for must-gather)