- Post-installation node tasks
- Adding FCOS compute machines to an OKD cluster
- Deploying machine health checks
- Recommended node host practices
- Huge pages
- Understanding device plug-ins
- Methods for deploying a device plug-in
- Understanding the Device Manager
- Enabling Device Manager
- Taints and tolerations
- Topology Manager
- Resource requests and overcommitment
- Cluster-level overcommit using the Cluster Resource Override Operator
- Node-level overcommit
- Understanding compute resources and containers
- Understanding overcommitment and quality of service classes
- Understanding swap memory and QoS
- Understanding nodes overcommitment
- Disabling or enforcing CPU limits using CPU CFS quotas
- Reserving resources for system processes
- Disabling overcommitment for a node
- Project-level limits
- Freeing node resources using garbage collection
- Using the Node Tuning Operator
- Configuring the maximum number of pods per node
Post-installation node tasks
After installing OKD, you can further expand and customize your cluster to your requirements through certain node tasks.
Adding FCOS compute machines to an OKD cluster
You can add more Fedora CoreOS (FCOS) compute machines to your OKD cluster on bare metal.
Before you add more compute machines to a cluster that you installed on bare metal infrastructure, you must create FCOS machines for it to use. You can either use an ISO image or network PXE booting to create the machines.
Prerequisites
You installed a cluster on bare metal.
You have installation media and Fedora CoreOS (FCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure.
Creating more FCOS machines using an ISO image
You can create more Fedora CoreOS (FCOS) compute machines for your bare metal cluster by using an ISO image to create the machines.
Prerequisites
- Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
Procedure
Use the ISO file to install FCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster:
Burn the ISO image to a disk and boot it directly.
Use ISO redirection with a LOM interface.
After the instance boots, press the TAB or E key to edit the kernel command line.
Add the parameters to the kernel command line:
coreos.inst.install_dev=sda (1)
coreos.inst.ignition_url=http://example.com/worker.ign (2)
1 Specify the block device of the system to install to.
2 Specify the URL of the compute Ignition config file. Only HTTP and HTTPS protocols are supported.
Press Enter to complete the installation. After FCOS installs, the system reboots. After the system reboots, it applies the Ignition config file that you specified.
Continue to create more compute machines for your cluster.
Creating more FCOS machines by PXE or iPXE booting
You can create more Fedora CoreOS (FCOS) compute machines for your bare metal cluster by using PXE or iPXE booting.
Prerequisites
Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
Obtain the URLs of the FCOS ISO image, compressed metal BIOS, kernel, and initramfs files that you uploaded to your HTTP server during cluster installation.
You have access to the PXE booting infrastructure that you used to create the machines for your OKD cluster during installation. The machines must boot from their local disks after FCOS is installed on them.
If you use UEFI, you have access to the grub.conf file that you modified during OKD installation.
Procedure
Confirm that your PXE or iPXE installation for the FCOS images is correct.
For PXE:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> (1)
APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img (2)
1 Specify the location of the live kernel file that you uploaded to your HTTP server.
2 Specify locations of the FCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS.
This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux?.
For iPXE:
kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img (1)
initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img (2)
1 Specify locations of the FCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS.
2 Specify the location of the initramfs file that you uploaded to your HTTP server.
This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux?.
- Use the PXE or iPXE infrastructure to create the required compute machines for your cluster.
Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION
master-0 Ready master 63m v1.19.0
master-1 Ready master 63m v1.19.0
master-2 Ready master 64m v1.19.0
worker-0 NotReady worker 76s v1.19.0
worker-1 NotReady worker 70s v1.19.0
The output lists all of the machines that you created.
The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:
$ oc get csr
Example output
NAME AGE REQUESTOR CONDITION
csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending
csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending
...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:
Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.
For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> (1)
1 <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Some Operators might not become available until some CSRs are approved.
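For user-provisioned infrastructure, where the serving CSRs must be approved automatically as described above, one possible approach is a small periodic loop. This is only a sketch: the 60-second interval and the filter on the system:node: requestor prefix are assumptions that you should replace with your own validation of node identity before using it on a real cluster.
# Approve pending CSRs whose requestor is a node (serving certificate requests).
# Review and tighten this filter before relying on it in production.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}} {{.spec.username}}{{"\n"}}{{end}}{{end}}' \
    | awk '$2 ~ /^system:node:/ {print $1}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done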
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr
Example output
NAME AGE REQUESTOR CONDITION
csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending
csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending
...
If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> (1)
1 <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION
master-0 Ready master 73m v1.20.0
master-1 Ready master 73m v1.20.0
master-2 Ready master 74m v1.20.0
worker-0 Ready worker 11m v1.20.0
worker-1 Ready worker 11m v1.20.0
It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
Deploying machine health checks
Understand and deploy machine health checks.
This process is not applicable for clusters with manually provisioned machines. You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational.
About machine health checks
You can define conditions under which machines in a cluster are considered unhealthy by using a MachineHealthCheck resource. Machines matching the conditions are automatically remediated.
To monitor machine health, create a MachineHealthCheck custom resource (CR) that includes a label for the set of machines to monitor and a condition to check, such as staying in the NotReady status for 15 minutes or displaying a permanent condition in the node-problem-detector.
The controller that observes a MachineHealthCheck CR checks for the condition that you defined. If a machine fails the health check, the machine is automatically deleted and a new one is created to take its place. When a machine is deleted, you see a machine deleted event.
For machines with the master role, the machine health check reports the number of unhealthy nodes, but the machine is not deleted.
To limit the disruptive impact of machine deletions, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation is not performed.
To stop the check, remove the custom resource.
MachineHealthChecks on Bare Metal
Machine deletion on a bare metal cluster triggers reprovisioning of the bare metal host. Bare metal reprovisioning is usually a lengthy process, during which the cluster is missing compute resources and applications might be interrupted. To change the default remediation process from machine deletion to host power-cycle, annotate the MachineHealthCheck resource with the machine.openshift.io/remediation-strategy: external-baremetal annotation.
After you set the annotation, unhealthy machines are power-cycled by using BMC credentials.
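For example, assuming an existing machine health check named example in the openshift-machine-api namespace (a hypothetical name taken from the sample below), the annotation could be added with a command along these lines:
$ oc annotate machinehealthcheck example -n openshift-machine-api machine.openshift.io/remediation-strategy=external-baremetal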
Limitations when deploying machine health checks
There are limitations to consider before deploying a machine health check:
Only machines owned by a machine set are remediated by a machine health check.
Control plane machines are not currently supported and are not remediated if they are unhealthy.
If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately.
If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout, the machine is remediated.
A machine is remediated immediately if the Machine resource phase is Failed.
Sample MachineHealthCheck resource
The MachineHealthCheck resource resembles one of the following YAML files:
MachineHealthCheck for bare metal
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
name: example (1)
namespace: openshift-machine-api
annotations:
machine.openshift.io/remediation-strategy: external-baremetal (2)
spec:
selector:
matchLabels:
machine.openshift.io/cluster-api-machine-role: <role> (3)
machine.openshift.io/cluster-api-machine-type: <role> (3)
machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> (4)
unhealthyConditions:
- type: "Ready"
timeout: "300s" (5)
status: "False"
- type: "Ready"
timeout: "300s" (5)
status: "Unknown"
maxUnhealthy: "40%" (6)
nodeStartupTimeout: "10m" (7)
1 | Specify the name of the machine health check to deploy. |
2 | For bare metal clusters, you must include the machine.openshift.io/remediation-strategy: external-baremetal annotation in the annotations section to enable power-cycle remediation. With this remediation strategy, unhealthy hosts are rebooted instead of removed from the cluster. |
3 | Specify a label for the machine pool that you want to check. |
4 | Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . |
5 | Specify the timeout duration for a node condition. If a condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. |
6 | Specify the amount of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. |
7 | Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. |
MachineHealthCheck for all other installation types
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
name: example (1)
namespace: openshift-machine-api
spec:
selector:
matchLabels:
machine.openshift.io/cluster-api-machine-role: <role> (2)
machine.openshift.io/cluster-api-machine-type: <role> (2)
machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> (3)
unhealthyConditions:
- type: "Ready"
timeout: "300s" (4)
status: "False"
- type: "Ready"
timeout: "300s" (4)
status: "Unknown"
maxUnhealthy: "40%" (5)
nodeStartupTimeout: "10m" (6)
1 | Specify the name of the machine health check to deploy. |
2 | Specify a label for the machine pool that you want to check. |
3 | Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . |
4 | Specify the timeout duration for a node condition. If a condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. |
5 | Specify the amount of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. |
6 | Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. |
Short-circuiting machine health check remediation
Short-circuiting ensures that machine health checks remediate machines only when the cluster is healthy. Short-circuiting is configured through the maxUnhealthy field in the MachineHealthCheck resource.
If the user defines a value for the maxUnhealthy field, before remediating any machines, the MachineHealthCheck compares the value of maxUnhealthy with the number of machines within its target pool that it has determined to be unhealthy. Remediation is not performed if the number of unhealthy machines exceeds the maxUnhealthy limit.
The appropriate maxUnhealthy value depends on the scale of the cluster you deploy and how many machines the MachineHealthCheck covers. For example, you can use the maxUnhealthy value to cover multiple machine sets across multiple availability zones so that if you lose an entire zone, your maxUnhealthy setting prevents further remediation within the cluster.
The maxUnhealthy field can be set as either an integer or a percentage. There are different remediation implementations depending on the maxUnhealthy value.
Setting maxUnhealthy by using an absolute value
If maxUnhealthy is set to 2:
Remediation will be performed if 2 or fewer nodes are unhealthy
Remediation will not be performed if 3 or more nodes are unhealthy
These values are independent of how many machines are being checked by the machine health check.
Setting maxUnhealthy by using percentages
If maxUnhealthy is set to 40% and there are 25 machines being checked:
Remediation will be performed if 10 or fewer nodes are unhealthy
Remediation will not be performed if 11 or more nodes are unhealthy
If maxUnhealthy is set to 40% and there are 6 machines being checked:
Remediation will be performed if 2 or fewer nodes are unhealthy
Remediation will not be performed if 3 or more nodes are unhealthy
The allowed number of machines is rounded down when the percentage set by maxUnhealthy does not resolve to a whole number of machines.
Creating a MachineHealthCheck resource
You can create a MachineHealthCheck resource for all MachineSets in your cluster. You should not create a MachineHealthCheck resource that targets control plane machines.
Prerequisites
- Install the oc command line interface.
Procedure
Create a healthcheck.yml file that contains the definition of your machine health check.
Apply the healthcheck.yml file to your cluster:
$ oc apply -f healthcheck.yml
Scaling a machine set manually
To add or remove an instance of a machine in a machine set, you can manually scale the machine set.
This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have machine sets.
Prerequisites
Install an OKD cluster and the oc command line.
Log in to oc as a user with cluster-admin permission.
Procedure
View the machine sets that are in the cluster:
$ oc get machinesets -n openshift-machine-api
The machine sets are listed in the form of <clusterid>-worker-<aws-region-az>.
Scale the machine set:
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
Or:
$ oc edit machineset <machineset> -n openshift-machine-api
You can scale the machine set up or down. It takes several minutes for the new machines to be available.
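To watch the new machines come up after scaling, you can list the machines in the openshift-machine-api namespace; the output normally includes a phase for each machine, which settles on Running once provisioning completes:
$ oc get machines -n openshift-machine-api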
Understanding the difference between machine sets and the machine config pool
MachineSet objects describe OKD nodes with respect to the cloud or machine provider.
The MachineConfigPool object allows MachineConfigController components to define and provide the status of machines in the context of upgrades.
The MachineConfigPool object allows users to configure how upgrades are rolled out to the OKD nodes in the machine config pool.
The NodeSelector object can be replaced with a reference to the MachineSet object.
Recommended node host practices
The OKD node configuration file contains important options. For example, two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods.
When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in:
Increased CPU utilization.
Slow pod scheduling.
Potential out-of-memory scenarios, depending on the amount of memory in the node.
Exhausting the pool of IP addresses.
Resource overcommitting, leading to poor user application performance.
In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running.
podsPerCore sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40.
kubeletConfig:
podsPerCore: 10
Setting podsPerCore to 0 disables this limit. The default is 0. podsPerCore cannot exceed maxPods.
maxPods sets the number of pods the node can run to a fixed value, regardless of the properties of the node.
kubeletConfig:
maxPods: 250
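Both parameters can be set together in a single KubeletConfig custom resource, with the lower of the two resulting limits applying on each node. The following is a minimal sketch that assumes a machine config pool labeled custom-kubelet: small-pods (a hypothetical label):
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-pod-density
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods  # hypothetical label applied to the target pool
  kubeletConfig:
    podsPerCore: 10
    maxPods: 250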
Creating a KubeletConfig CRD to edit kubelet parameters
The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This allows you to create a KubeletConfig custom resource (CR) to edit the kubelet parameters.
Procedure
View the available machine configuration objects that you can select:
$ oc get machineconfig
By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet.
To check the current value of max pods per node, run:
# oc describe node <node-ip> | grep Allocatable -A6
Look for value: pods: <value>.
For example:
# oc describe node ip-172-31-128-158.us-east-2.compute.internal | grep Allocatable -A6
Example output
Allocatable:
attachable-volumes-aws-ebs: 25
cpu: 3500m
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 15341844Ki
pods: 250
To set the max pods per node on the worker nodes, create a custom resource file that contains the kubelet configuration. For example, change-maxPods-cr.yaml:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
name: set-max-pods
spec:
machineConfigPoolSelector:
matchLabels:
custom-kubelet: large-pods
kubeletConfig:
maxPods: 500
The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst, are good enough if there are limited pods running on each node. Updating the kubelet QPS and burst rates is recommended if there are enough CPU and memory resources on the node:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
name: set-max-pods
spec:
machineConfigPoolSelector:
matchLabels:
custom-kubelet: large-pods
kubeletConfig:
maxPods: <pod_count>
kubeAPIBurst: <burst_rate>
kubeAPIQPS: <QPS>
Update the machine config pool for workers with the label:
$ oc label machineconfigpool worker custom-kubelet=large-pods
Create the KubeletConfig object:
$ oc create -f change-maxPods-cr.yaml
Verify that the KubeletConfig object is created:
$ oc get kubeletconfig
This should return set-max-pods.
Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes.
Check for maxPods changing for the worker nodes:
$ oc describe node
Verify the change by running:
$ oc get kubeletconfigs set-max-pods -o yaml
This should show a status of True and type:Success.
By default, only one machine is allowed to be unavailable when applying the kubelet-related configuration to the available worker nodes. For a large cluster, it can take a long time for the configuration change to be reflected. At any time, you can adjust the number of machines that are updating to speed up the process.
Procedure
Edit the worker machine config pool:
$ oc edit machineconfigpool worker
Set maxUnavailable to the desired value.
spec:
  maxUnavailable: <node_count>
When setting the value, consider the number of worker nodes that can be unavailable without affecting the applications running on the cluster.
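Instead of editing the pool interactively, the same change can be applied with a patch. The following is a sketch that sets maxUnavailable to 3; the value is an example only and should be chosen based on your own capacity:
$ oc patch machineconfigpool worker --type merge -p '{"spec":{"maxUnavailable":3}}'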
Control plane node sizing
The control plane node resource requirements depend on the number of nodes in the cluster. The following control plane node size recommendations are based on the results of control plane density focused testing. The control plane tests create the following objects across the cluster in each of the namespaces depending on the node counts:
12 image streams
3 build configurations
6 builds
1 deployment with 2 pod replicas mounting two secrets each
2 deployments with 1 pod replica mounting two secrets
3 services pointing to the previous deployments
3 routes pointing to the previous deployments
10 secrets, 2 of which are mounted by the previous deployments
10 config maps, 2 of which are mounted by the previous deployments
Number of worker nodes | Cluster load (namespaces) | CPU cores | Memory (GB) |
---|---|---|---|
25 | 500 | 4 | 16 |
100 | 1000 | 8 | 32 |
250 | 4000 | 16 | 96 |
On a cluster with three masters or control plane nodes, the CPU and memory usage will spike up when one of the nodes is stopped, rebooted or fails because the remaining two nodes must handle the load in order to be highly available. This is also expected during upgrades because the masters are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operators update. To avoid cascading failures on large and dense clusters, keep the overall resource usage on the control plane nodes (also known as the master nodes) to at most half of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly.
The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the running phase.
If you used an installer-provisioned infrastructure installation method, you cannot modify the control plane node size in a running OKD 4.6 cluster. Instead, you must estimate your total node count and use the suggested control plane node size during installation.
The recommendations are based on the data points captured on OKD clusters with OpenShiftSDN as the network plug-in.
In OKD 4.6, half of a CPU core (500 millicore) is now reserved by the system by default compared to OKD 3.11 and previous versions. The sizes are determined taking that into consideration.
Setting up CPU Manager
Procedure
Optional: Label a node:
# oc label node perf-node.example.com cpumanager=true
Edit the MachineConfigPool of the nodes where CPU Manager should be enabled. In this example, all workers have CPU Manager enabled:
# oc edit machineconfigpool worker
Add a label to the worker machine config pool:
metadata:
creationTimestamp: 2020-xx-xxx
generation: 3
labels:
custom-kubelet: cpumanager-enabled
Create a KubeletConfig, cpumanager-kubeletconfig.yaml, custom resource (CR). Refer to the label created in the previous step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
name: cpumanager-enabled
spec:
machineConfigPoolSelector:
matchLabels:
custom-kubelet: cpumanager-enabled
kubeletConfig:
cpuManagerPolicy: static (1)
cpuManagerReconcilePeriod: 5s (2)
1 Specify a policy: none. This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. static. This policy allows pods with certain resource characteristics to be granted increased CPU affinity and exclusivity on the node.
2 Optional. Specify the CPU Manager reconcile frequency. The default is 5s.
Create the dynamic kubelet config:
# oc create -f cpumanager-kubeletconfig.yaml
This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed.
Check for the merged kubelet config:
# oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7
Example output
"ownerReferences": [
{
"apiVersion": "machineconfiguration.openshift.io/v1",
"kind": "KubeletConfig",
"name": "cpumanager-enabled",
"uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878"
}
]
Check the worker for the updated kubelet.conf:
# oc debug node/perf-node.example.com
sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager
Example output
cpuManagerPolicy: static (1)
cpuManagerReconcilePeriod: 5s (1)
1 These settings were defined when you created the KubeletConfig CR.
Create a pod that requests a core or multiple cores. Both limits and requests must have their CPU value set to a whole integer. That is the number of cores that will be dedicated to this pod:
# cat cpumanager-pod.yaml
Example output
apiVersion: v1
kind: Pod
metadata:
generateName: cpumanager-
spec:
containers:
- name: cpumanager
image: gcr.io/google_containers/pause-amd64:3.0
resources:
requests:
cpu: 1
memory: "1G"
limits:
cpu: 1
memory: "1G"
nodeSelector:
cpumanager: "true"
Create the pod:
# oc create -f cpumanager-pod.yaml
Verify that the pod is scheduled to the node that you labeled:
# oc describe pod cpumanager
Example output
Name: cpumanager-6cqz7
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: perf-node.example.com/xxx.xx.xx.xxx
...
Limits:
cpu: 1
memory: 1G
Requests:
cpu: 1
memory: 1G
...
QoS Class: Guaranteed
Node-Selectors: cpumanager=true
Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process:
# ├─init.scope
│ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17
└─kubepods.slice
├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice
│ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope
│ └─32706 /pause
Pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice. Pods of other QoS tiers end up in child cgroups of kubepods:
# cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope
# for i in `ls cpuset.cpus tasks` ; do echo -n "$i "; cat $i ; done
Example output
cpuset.cpus 1
tasks 32706
Check the allowed CPU list for the task:
# grep ^Cpus_allowed_list /proc/32706/status
Example output
Cpus_allowed_list: 1
Verify that another pod (in this case, the pod in the burstable QoS tier) on the system cannot run on the core allocated for the Guaranteed pod:
# cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus
0
# oc describe node perf-node.example.com
Example output
...
Capacity:
attachable-volumes-aws-ebs: 39
cpu: 2
ephemeral-storage: 124768236Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8162900Ki
pods: 250
Allocatable:
attachable-volumes-aws-ebs: 39
cpu: 1500m
ephemeral-storage: 124768236Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7548500Ki
pods: 250
------- ---- ------------ ---------- --------------- ------------- ---
default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1440m (96%) 1 (66%)
This VM has two CPU cores. The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled:
NAME READY STATUS RESTARTS AGE
cpumanager-6cqz7 1/1 Running 0 33m
cpumanager-7qc2t 0/1 Pending 0 11s
Huge pages
Understand and configure huge pages.
What huge pages do
Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 262,144 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size.
A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. In order to use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP.
How huge pages are consumed by apps
Nodes must pre-allocate huge pages in order for the node to report its huge page capacity. A node can only pre-allocate huge pages for a single size.
Huge pages can be consumed through container-level resource requirements using the resource name hugepages-<size>, where size is the most compact binary notation using integer values supported on a particular node. For example, if a node supports 2048KiB page sizes, it exposes a schedulable resource hugepages-2Mi. Unlike CPU or memory, huge pages do not support over-commitment.
apiVersion: v1
kind: Pod
metadata:
generateName: hugepages-volume-
spec:
containers:
- securityContext:
privileged: true
image: rhel7:latest
command:
- sleep
- inf
name: example
volumeMounts:
- mountPath: /dev/hugepages
name: hugepage
resources:
limits:
hugepages-2Mi: 100Mi (1)
memory: "1Gi"
cpu: "1"
volumes:
- name: hugepage
emptyDir:
medium: HugePages
1 | Specify the amount of memory for hugepages as the exact amount to be allocated. Do not specify this value as the amount of memory for hugepages multiplied by the size of the page. For example, given a huge page size of 2MB, if you want to use 100MB of huge-page-backed RAM for your application, then you would allocate 50 huge pages. OKD handles the math for you. As in the above example, you can specify 100MB directly. |
Allocating huge pages of a specific size
Some platforms support multiple huge page sizes. To allocate huge pages of a specific size, precede the huge pages boot command parameters with a huge page size selection parameter hugepagesz=<size>. The <size> value must be specified in bytes with an optional scale suffix [kKmMgG]. The default huge page size can be defined with the default_hugepagesz=<size> boot parameter.
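For example, a hypothetical set of boot parameters that makes 1G the default size and reserves pages of both sizes might look like the following; the page counts are placeholders to adjust for your workload and available memory:
default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=256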
Huge page requirements
Huge page requests must equal the limits. This is the default if limits are specified, but requests are not.
Huge pages are isolated at a pod scope. Container isolation is planned in a future iteration.
EmptyDir volumes backed by huge pages must not consume more huge page memory than the pod request.
Applications that consume huge pages via shmget() with SHM_HUGETLB must run with a supplemental group that matches /proc/sys/vm/hugetlb_shm_group.
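To check which group is currently allowed to use SHM_HUGETLB on a node, you can read the sysctl from a shell on that node; a minimal sketch:
# cat /proc/sys/vm/hugetlb_shm_group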
Configuring huge pages
Nodes must pre-allocate huge pages used in an OKD cluster. There are two ways of reserving huge pages: at boot time and at run time. Reserving at boot time increases the possibility of success because the memory has not yet been significantly fragmented. The Node Tuning Operator currently supports boot time allocation of huge pages on specific nodes.
At boot time
Procedure
To minimize node reboots, follow the steps below in order:
Label all nodes that need the same huge pages setting with the same label.
$ oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=
Create a file with the following content and name it hugepages-tuned-boottime.yaml:
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
name: hugepages (1)
namespace: openshift-cluster-node-tuning-operator
spec:
profile: (2)
- data: |
[main]
summary=Boot time configuration for hugepages
include=openshift-node
[bootloader]
cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 (3)
name: openshift-node-hugepages
recommend:
- machineConfigLabels: (4)
machineconfiguration.openshift.io/role: "worker-hp"
priority: 30
profile: openshift-node-hugepages
1 Set the name of the Tuned resource to hugepages.
2 Set the profile section to allocate huge pages.
3 Note that the order of parameters is important, as some platforms support huge pages of various sizes.
4 Enable machine config pool based matching.
Create the Tuned hugepages profile:
$ oc create -f hugepages-tuned-boottime.yaml
Create a file with the following content and name it hugepages-mcp.yaml:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
name: worker-hp
labels:
worker-hp: ""
spec:
machineConfigSelector:
matchExpressions:
- {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]}
nodeSelector:
matchLabels:
node-role.kubernetes.io/worker-hp: ""
Create the machine config pool:
$ oc create -f hugepages-mcp.yaml
Given enough non-fragmented memory, all the nodes in the worker-hp machine config pool should now have 50 2Mi huge pages allocated.
$ oc get node <node_using_hugepages> -o jsonpath="{.status.allocatable.hugepages-2Mi}"
100Mi
This functionality is currently only supported on Fedora CoreOS (FCOS) 8.x worker nodes. On Fedora 7.x worker nodes the Tuned [bootloader] plug-in is currently not supported.
Understanding device plug-ins
The device plug-in provides a consistent and portable solution to consume hardware devices across clusters. The device plug-in provides support for these devices through an extension mechanism, which makes these devices available to Containers, provides health checks of these devices, and securely shares them.
OKD supports the device plug-in API, but the device plug-in Containers are supported by individual vendors. |
A device plug-in is a gRPC service running on the nodes (external to the kubelet) that is responsible for managing specific hardware resources. Any device plug-in must support the following remote procedure calls (RPCs):
service DevicePlugin {
// GetDevicePluginOptions returns options to be communicated with Device
// Manager
rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}
// ListAndWatch returns a stream of List of Devices
// Whenever a Device state change or a Device disappears, ListAndWatch
// returns the new list
rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}
// Allocate is called during container creation so that the Device
// Plug-in can run device specific operations and instruct Kubelet
// of the steps to make the Device available in the container
rpc Allocate(AllocateRequest) returns (AllocateResponse) {}
// PreStartContainer is called, if indicated by Device Plug-in during
// registration phase, before each container start. Device plug-in
// can run device specific operations such as resetting the device
// before making devices available to the container
rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}
}
Example device plug-ins
For easy device plug-in reference implementation, there is a stub device plug-in in the Device Manager code: vendor/k8s.io/kubernetes/pkg/kubelet/cm/deviceplugin/device_plugin_stub.go. |
Methods for deploying a device plug-in
Daemon sets are the recommended approach for device plug-in deployments.
Upon start, the device plug-in will try to create a UNIX domain socket at /var/lib/kubelet/device-plugins/ on the node to serve RPCs from Device Manager.
Because device plug-ins must manage hardware resources, access the host file system, and create sockets, they must run in a privileged security context.
More specific details regarding deployment steps can be found with each device plug-in implementation.
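As a rough illustration of such a deployment, the following daemon set sketch shows the pieces most implementations need: a privileged security context and a hostPath mount of the kubelet device plug-in directory. The image, names, and namespace are placeholders rather than a real plug-in:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-device-plugin
  namespace: example-device-plugin
spec:
  selector:
    matchLabels:
      name: example-device-plugin
  template:
    metadata:
      labels:
        name: example-device-plugin
    spec:
      containers:
      - name: device-plugin
        image: registry.example.com/example-device-plugin:latest  # placeholder image
        securityContext:
          privileged: true  # needed for hardware access and host socket creation
        volumeMounts:
        - name: device-plugin-dir
          mountPath: /var/lib/kubelet/device-plugins
      volumes:
      - name: device-plugin-dir
        hostPath:
          path: /var/lib/kubelet/device-plugins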
Understanding the Device Manager
Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plug-ins known as device plug-ins.
You can advertise specialized hardware without requiring any upstream code changes.
OKD supports the device plug-in API, but the device plug-in Containers are supported by individual vendors. |
Device Manager advertises devices as Extended Resources. User pods can consume devices, advertised by Device Manager, using the same Limit/Request mechanism, which is used for requesting any other Extended Resource.
Upon start, the device plug-in registers itself with Device Manager by invoking Register on the /var/lib/kubelet/device-plugins/kubelet.sock and starts a gRPC service at /var/lib/kubelet/device-plugins/<plugin>.sock for serving Device Manager requests.
Device Manager, while processing a new registration request, invokes the ListAndWatch remote procedure call (RPC) at the device plug-in service. In response, Device Manager gets a list of Device objects from the plug-in over a gRPC stream. Device Manager will keep watching on the stream for new updates from the plug-in. On the plug-in side, the plug-in will also keep the stream open and whenever there is a change in the state of any of the devices, a new device list is sent to the Device Manager over the same streaming connection.
While handling a new pod admission request, the Kubelet passes the requested Extended Resources to the Device Manager for device allocation. Device Manager checks its database to verify whether a corresponding plug-in exists. If the plug-in exists and has free allocatable devices according to its local cache, the Allocate RPC is invoked at that particular device plug-in.
Additionally, device plug-ins can also perform several other device-specific operations, such as driver installation, device initialization, and device resets. These functionalities vary from implementation to implementation.
Enabling Device Manager
Enable Device Manager to implement a device plug-in to advertise specialized hardware without any upstream code changes.
Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plug-ins known as device plug-ins.
Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure. Perform one of the following steps:
View the machine config:
# oc describe machineconfig <name>
For example:
# oc describe machineconfig 00-worker
Example output
Name: 00-worker
Namespace:
Labels: machineconfiguration.openshift.io/role=worker (1)
1 Label required for the Device Manager.
Procedure
Create a custom resource (CR) for your configuration change.
Sample configuration for a Device Manager CR
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
name: devicemgr (1)
spec:
machineConfigPoolSelector:
matchLabels:
machineconfiguration.openshift.io: devicemgr (2)
kubeletConfig:
feature-gates:
- DevicePlugins=true (3)
1 Assign a name to the CR.
2 Enter the label from the Machine Config Pool.
3 Set DevicePlugins to true.
Create the Device Manager:
$ oc create -f devicemgr.yaml
Example output
kubeletconfig.machineconfiguration.openshift.io/devicemgr created
Ensure that Device Manager was actually enabled by confirming that /var/lib/kubelet/device-plugins/kubelet.sock is created on the node. This is the UNIX domain socket on which the Device Manager gRPC server listens for new plug-in registrations. This sock file is created when the Kubelet is started only if Device Manager is enabled.
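One way to perform that check from a debug pod is shown below; the node name is a placeholder:
$ oc debug node/<node_name> -- chroot /host ls -l /var/lib/kubelet/device-plugins/kubelet.sock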
Taints and tolerations
Understand and work with taints and tolerations.
Understanding taints and tolerations
A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration.
You apply taints to a node through the Node specification (NodeSpec) and apply tolerations to a pod through the Pod specification (PodSpec). When you apply a taint to a node, the scheduler cannot place a pod on that node unless the pod can tolerate the taint.
Example taint in a node specification
spec:
....
template:
....
spec:
taints:
- effect: NoExecute
key: key1
value: value1
....
Example toleration in a Pod spec
spec:
....
template:
....
spec:
tolerations:
- key: "key1"
operator: "Equal"
value: "value1"
effect: "NoExecute"
tolerationSeconds: 3600
....
Taints and tolerations consist of a key, value, and effect.
Parameter | Description |
---|---|
key | The key is any string. The key identifies the taint or toleration to which it belongs. |
value | The value is any string. When the operator is Equal, the toleration value must match the taint value. |
effect | The effect is one of the following: NoSchedule, which means that new pods that do not tolerate the taint are not scheduled onto the node while existing pods remain; PreferNoSchedule, which means that the scheduler tries to avoid scheduling new pods that do not tolerate the taint onto the node, but might still do so; or NoExecute, which means that new pods that do not tolerate the taint are not scheduled onto the node and existing pods that do not tolerate the taint are evicted. |
operator | The operator is Equal, which requires the key, value, and effect parameters to match, or Exists, which requires only the key and effect parameters to match and takes no value. |
If you add a NoSchedule taint to a control plane node (also known as the master node), the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default.
For example:
apiVersion: v1
kind: Node
metadata:
annotations:
machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0
machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c
...
spec:
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
...
A toleration matches a taint:
If the operator parameter is set to Equal:
the key parameters are the same;
the value parameters are the same;
the effect parameters are the same.
If the operator parameter is set to Exists:
the key parameters are the same;
the effect parameters are the same.
The following taints are built into OKD:
node.kubernetes.io/not-ready: The node is not ready. This corresponds to the node condition Ready=False.
node.kubernetes.io/unreachable: The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown.
node.kubernetes.io/out-of-disk: The node has insufficient free space for adding new pods. This corresponds to the node condition OutOfDisk=True.
node.kubernetes.io/memory-pressure: The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True.
node.kubernetes.io/disk-pressure: The node has disk pressure issues. This corresponds to the node condition DiskPressure=True.
node.kubernetes.io/network-unavailable: The node network is unavailable.
node.kubernetes.io/unschedulable: The node is unschedulable.
node.cloudprovider.kubernetes.io/uninitialized: When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint.
Understanding how to use toleration seconds to delay pod evictions
You can specify how long a pod can remain bound to a node before being evicted by specifying the tolerationSeconds parameter in the Pod specification or MachineSet object. If a taint with the NoExecute effect is added to a node, a pod that tolerates the taint and that has the tolerationSeconds parameter is not evicted until that time period expires.
Example output
spec:
....
template:
....
spec:
tolerations:
- key: "key1"
operator: "Equal"
value: "value1"
effect: "NoExecute"
tolerationSeconds: 3600
Here, if this pod is running on a node to which a matching NoExecute taint is added, the pod stays bound to the node for 3,600 seconds and is then evicted. If the taint is removed before that time, the pod is not evicted.
Understanding how to use multiple taints
You can put multiple taints on the same node and multiple tolerations on the same pod. OKD processes multiple taints and tolerations as follows:
Process the taints for which the pod has a matching toleration.
The remaining unmatched taints have the indicated effects on the pod:
If there is at least one unmatched taint with effect
NoSchedule
, OKD cannot schedule a pod onto that node.If there is no unmatched taint with effect
NoSchedule
but there is at least one unmatched taint with effectPreferNoSchedule
, OKD tries to not schedule the pod onto the node.If there is at least one unmatched taint with effect
NoExecute
, OKD evicts the pod from the node if it is already running on the node, or the pod is not scheduled onto the node if it is not yet running on the node.Pods that do not tolerate the taint are evicted immediately.
Pods that tolerate the taint without specifying tolerationSeconds in their Pod specification remain bound forever.
Pods that tolerate the taint with a specified tolerationSeconds remain bound for the specified amount of time.
For example:
Add the following taints to the node:
$ oc adm taint nodes node1 key1=value1:NoSchedule
$ oc adm taint nodes node1 key1=value1:NoExecute
$ oc adm taint nodes node1 key2=value2:NoSchedule
The pod has the following tolerations:
spec:
....
template:
....
spec:
tolerations:
- key: "key1"
operator: "Equal"
value: "value1"
effect: "NoSchedule"
- key: "key1"
operator: "Equal"
value: "value1"
effect: "NoExecute"
In this case, the pod cannot be scheduled onto the node, because there is no toleration matching the third taint. The pod continues running if it is already running on the node when the taint is added, because the third taint is the only one of the three that is not tolerated by the pod.
Understanding pod scheduling and node conditions (taint node by condition)
The Taint Nodes By Condition feature, which is enabled by default, automatically taints nodes that report conditions such as memory pressure and disk pressure. If a node reports a condition, a taint is added until the condition clears. The taints have the NoSchedule effect, which means no pod can be scheduled on the node unless the pod has a matching toleration.
The scheduler checks for these taints on nodes before scheduling pods. If the taint is present, the pod is scheduled on a different node. Because the scheduler checks for taints and not the actual node conditions, you configure the scheduler to ignore some of these node conditions by adding appropriate pod tolerations.
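For example, a pod that should still be schedulable on nodes reporting memory pressure could carry a toleration like the following sketch; whether ignoring the condition is safe depends entirely on the workload:
spec:
  tolerations:
  - key: "node.kubernetes.io/memory-pressure"
    operator: "Exists"
    effect: "NoSchedule"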
To ensure backward compatibility, the daemon set controller automatically adds the following tolerations to all daemons:
node.kubernetes.io/memory-pressure
node.kubernetes.io/disk-pressure
node.kubernetes.io/out-of-disk (only for critical pods)
node.kubernetes.io/unschedulable (1.10 or later)
node.kubernetes.io/network-unavailable (host network only)
You can also add arbitrary tolerations to daemon sets.
Understanding evicting pods by condition (taint-based evictions)
The Taint-Based Evictions feature, which is enabled by default, evicts pods from a node that experiences specific conditions, such as not-ready and unreachable. When a node experiences one of these conditions, OKD automatically adds taints to the node, and starts evicting and rescheduling the pods on different nodes.
Taint Based Evictions have a NoExecute effect, where any pod that does not tolerate the taint is evicted immediately and any pod that does tolerate the taint will never be evicted, unless the pod uses the tolerationSeconds parameter.
The tolerationSeconds parameter allows you to specify how long a pod stays bound to a node that has a node condition. If the condition still exists after the tolerationSeconds period, the taint remains on the node and the pods with a matching toleration are evicted. If the condition clears before the tolerationSeconds period, pods with matching tolerations are not removed.
If you use the tolerationSeconds parameter with no value, pods are never evicted because of the not ready and unreachable node conditions.
OKD evicts pods in a rate-limited way to prevent massive pod evictions in scenarios such as the master becoming partitioned from the nodes.
OKD automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless the Pod configuration specifies either toleration.
spec:
....
template:
....
spec:
tolerations:
- key: node.kubernetes.io/not-ready
operator: Exists
effect: NoExecute
tolerationSeconds: 300 (1)
- key: node.kubernetes.io/unreachable
operator: Exists
effect: NoExecute
tolerationSeconds: 300
1 | These tolerations ensure that the default pod behavior is to remain bound for five minutes after one of these node conditions problems is detected. |
You can configure these tolerations as needed. For example, if you have an application with a lot of local state, you might want to keep the pods bound to the node for a longer time in the event of a network partition, allowing for the partition to recover and avoiding pod eviction.
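For example, the following sketch keeps pods of a stateful application bound for 30 minutes instead of the default five; the 1800-second window is an assumption that should match how long your partitions typically take to recover:
spec:
  tolerations:
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 1800
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 1800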
Pods spawned by a daemon set are created with NoExecute tolerations for the following taints with no tolerationSeconds:
node.kubernetes.io/unreachable
node.kubernetes.io/not-ready
As a result, daemon set pods are never evicted because of these node conditions.
Tolerating all taints
You can configure a pod to tolerate all taints by adding an operator: "Exists" toleration with no key and value parameters. Pods with this toleration are not removed from a node that has taints.
Pod spec for tolerating all taints
spec:
....
template:
....
spec:
tolerations:
- operator: "Exists"
Adding taints and tolerations
You add tolerations to pods and taints to nodes to allow the node to control which pods should or should not be scheduled on them. For existing pods and nodes, you should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration.
Procedure
Add a toleration to a pod by editing the Pod spec to include a tolerations stanza:
Sample pod configuration file with an Equal operator
spec:
....
template:
....
spec:
tolerations:
- key: "key1" (1)
value: "value1"
operator: "Equal"
effect: "NoExecute"
tolerationSeconds: 3600 (2)
1 The toleration parameters, as described in the Taint and toleration components table.
2 The tolerationSeconds parameter specifies how long a pod can remain bound to a node before being evicted.
For example:
Sample pod configuration file with an Exists operator
spec:
....
template:
....
spec:
tolerations:
- key: "key1"
operator: "Exists" (1)
effect: "NoExecute"
tolerationSeconds: 3600
1 The Exists operator does not take a value.
This example places a taint on node1 that has key key1, value value1, and taint effect NoExecute.
Add a taint to a node by using the following command with the parameters described in the Taint and toleration components table:
$ oc adm taint nodes <node_name> <key>=<value>:<effect>
For example:
$ oc adm taint nodes node1 key1=value1:NoExecute
This command places a taint on node1 that has key key1, value value1, and effect NoExecute.
If you add a NoSchedule taint to a control plane node (also known as the master node), the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default.
For example:
apiVersion: v1
kind: Node
metadata:
annotations:
machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0
machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c
…
spec:
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
…
The tolerations on the Pod match the taint on the node. A pod with either toleration can be scheduled onto
node1
.
Adding taints and tolerations using a machine set
You can add taints to nodes using a machine set. All nodes associated with the MachineSet
object are updated with the taint. Tolerations respond to taints added by a machine set in the same manner as taints added directly to the nodes.
Procedure
Add a toleration to a pod by editing the
Pod
spec to include a tolerations stanza:
Sample pod configuration file with Equal operator
spec:
....
template:
....
spec:
tolerations:
- key: "key1" (1)
value: "value1"
operator: "Equal"
effect: "NoExecute"
tolerationSeconds: 3600 (2)
1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds
parameter specifies how long a pod is bound to a node before being evicted.
For example:
Sample pod configuration file with Exists operator
spec:
....
template:
....
spec:
tolerations:
- key: "key1"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 3600
Add the taint to the
MachineSet
object:
Edit the MachineSet YAML for the nodes you want to taint, or create a new MachineSet object:
$ oc edit machineset <machineset>
Add the taint to the
spec.template.spec
section:
Example taint in a node specification
spec:
....
template:
....
spec:
taints:
- effect: NoExecute
key: key1
value: value1
....
This example places a taint that has the key key1, value value1, and taint effect NoExecute on the nodes.
Scale down the machine set to 0:
$ oc scale --replicas=0 machineset <machineset> -n openshift-machine-api
Wait for the machines to be removed.
Scale up the machine set as needed:
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
Wait for the machines to start. The taint is added to the nodes associated with the
MachineSet
object.
Binding a user to a node using taints and tolerations
If you want to dedicate a set of nodes for exclusive use by a particular set of users, add a toleration to their pods. Then, add a corresponding taint to those nodes. The pods with the tolerations are allowed to use the tainted nodes, or any other nodes in the cluster.
If you want to ensure that the pods are scheduled only onto those tainted nodes, also add a label to the same set of nodes and add a node affinity to the pods so that the pods can only be scheduled onto nodes with that label.
Procedure
To configure a node so that users can use only that node:
Add a corresponding taint to those nodes:
For example:
$ oc adm taint nodes node1 dedicated=groupName:NoSchedule
Add a toleration to the pods by writing a custom admission controller.
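If you prefer not to write an admission controller, a rough alternative is to add the toleration, and optionally a node selector for a matching node label, directly to the Pod spec. The following sketch assumes the dedicated nodes are also labeled dedicated=groupName; that label is an assumption made for illustration:
spec:
....
  template:
....
    spec:
      nodeSelector:
        dedicated: groupName
      tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "groupName"
        effect: "NoSchedule"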
Controlling nodes with special hardware using taints and tolerations
In a cluster where a small subset of nodes have specialized hardware, you can use taints and tolerations to keep pods that do not need the specialized hardware off of those nodes, leaving the nodes for pods that do need the specialized hardware. You can also require pods that need specialized hardware to use specific nodes.
You can achieve this by adding a toleration to pods that need the special hardware and tainting the nodes that have the specialized hardware.
Procedure
To ensure nodes with specialized hardware are reserved for specific pods:
Add a toleration to pods that need the special hardware.
For example:
spec:
....
template:
....
spec:
tolerations:
- key: "disktype"
value: "ssd"
operator: "Equal"
effect: "NoSchedule"
tolerationSeconds: 3600
Taint the nodes that have the specialized hardware using one of the following commands:
$ oc adm taint nodes <node-name> disktype=ssd:NoSchedule
Or:
$ oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule
Removing taints and tolerations
You can remove taints from nodes and tolerations from pods as needed. To avoid pods being evicted while the taint is still in place, remove the taint from the node before you remove the toleration from the pod.
Procedure
To remove taints and tolerations:
To remove a taint from a node:
$ oc adm taint nodes <node-name> <key>-
For example:
$ oc adm taint nodes ip-10-0-132-248.ec2.internal key1-
Example output
node/ip-10-0-132-248.ec2.internal untainted
To remove a toleration from a pod, edit the
Pod
spec to remove the toleration:
spec:
....
template:
....
spec:
tolerations:
- key: "key2"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 3600
Topology Manager
Understand and work with Topology Manager.
Topology Manager policies
Topology Manager aligns Pod
resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod
resources.
To align CPU resources with other requested resources in a Pod spec, the CPU Manager must be enabled with the static CPU Manager policy.
Topology Manager supports four allocation policies, which you assign in the cpumanager-enabled
custom resource (CR):
none
policy
This is the default policy and does not perform any topology alignment.
best-effort
policy
For each container in a pod with the best-effort
topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node.
restricted
policy
For each container in a pod with the restricted
topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager rejects this pod from the node, resulting in a pod in a Terminated
state with a pod admission failure.
single-numa-node
policy
For each container in a pod with the single-numa-node
topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. This results in a pod in a Terminated state with a pod admission failure.
Setting up Topology Manager
To use Topology Manager, you must configure an allocation policy in the cpumanager-enabled
custom resource (CR). This file might exist if you have set up CPU Manager. If the file does not exist, you can create the file.
Prerequisites
- Configure the CPU Manager policy to be
static
. Refer to Using CPU Manager in the Scalability and Performance section.
Procedure
To activate Topology Manager:
Configure the Topology Manager allocation policy in the
cpumanager-enabled
custom resource (CR).
$ oc edit KubeletConfig cpumanager-enabled
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
name: cpumanager-enabled
spec:
machineConfigPoolSelector:
matchLabels:
custom-kubelet: cpumanager-enabled
kubeletConfig:
cpuManagerPolicy: static (1)
cpuManagerReconcilePeriod: 5s
topologyManagerPolicy: single-numa-node (2)
1 This parameter must be static
. 2 Specify your selected Topology Manager allocation policy. Here, the policy is single-numa-node. Acceptable values are: none, best-effort, restricted, single-numa-node.
Pod interactions with Topology Manager policies
The example Pod
specs below help illustrate pod interactions with Topology Manager.
The following pod runs in the BestEffort
QoS class because no resource requests or limits are specified.
spec:
containers:
- name: nginx
image: nginx
The next pod runs in the Burstable
QoS class because requests are less than limits.
spec:
containers:
- name: nginx
image: nginx
resources:
limits:
memory: "200Mi"
requests:
memory: "100Mi"
If the selected policy is anything other than none
, Topology Manager would not consider either of these Pod
specifications.
The last example pod below runs in the Guaranteed QoS class because requests are equal to limits.
spec:
containers:
- name: nginx
image: nginx
resources:
limits:
memory: "200Mi"
cpu: "2"
example.com/device: "1"
requests:
memory: "200Mi"
cpu: "2"
example.com/device: "1"
Topology Manager would consider this pod. The Topology Manager consults the CPU Manager static policy, which returns the topology of available CPUs. Topology Manager also consults Device Manager to discover the topology of available devices for example.com/device.
Topology Manager will use this information to store the best Topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage.
Resource requests and overcommitment
For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node.
The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service.
Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 200% overcommitted.
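For illustration, the 200% overcommit described above corresponds to a container resources stanza similar to the following sketch; the container name and image are placeholders:
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "1Gi"
      limits:
        memory: "2Gi"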
Cluster-level overcommit using the Cluster Resource Override Operator
The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits.
You must install the Cluster Resource Override Operator using the OKD console or CLI as shown in the following sections. During the installation, you create a ClusterResourceOverride
custom resource (CR), where you set the level of overcommit, as shown in the following example:
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster (1)
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50 (2)
      cpuRequestToLimitPercent: 25 (3)
      limitCPUToMemoryPercent: 200 (4)
1 | The name must be cluster . |
2 | Optional. If a container memory limit has been specified or defaulted, the memory request is overridden to this percentage of the limit, between 1-100. The default is 50. |
3 | Optional. If a container CPU limit has been specified or defaulted, the CPU request is overridden to this percentage of the limit, between 1-100. The default is 25. |
4 | Optional. If a container memory limit has been specified or defaulted, the CPU limit is overridden to a percentage of the memory limit, if specified. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request (if configured). The default is 200. |
The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project, or configure limits in Pod specs, for the overrides to apply.
When configured, overrides can be enabled per-project by applying the following label to the Namespace object for each project:
apiVersion: v1
kind: Namespace
metadata:
....
labels:
clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true"
....
The Operator watches for the ClusterResourceOverride
CR and ensures that the ClusterResourceOverride
admission webhook is installed into the same namespace as the operator.
Installing the Cluster Resource Override Operator using the web console
You can use the OKD web console to install the Cluster Resource Override Operator to help control overcommit in your cluster.
Prerequisites
- The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a
LimitRange
object or configure limits in Pod
specs for the overrides to apply.
Procedure
To install the Cluster Resource Override Operator using the OKD web console:
In the OKD web console, navigate to Home → Projects
Click Create Project.
Specify
clusterresourceoverride-operator
as the name of the project.
Click Create.
Navigate to Operators → OperatorHub.
Choose ClusterResourceOverride Operator from the list of available Operators and click Install.
On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode.
Make sure clusterresourceoverride-operator is selected for Installed Namespace.
Select an Update Channel and Approval Strategy.
Click Install.
On the Installed Operators page, click ClusterResourceOverride.
On the ClusterResourceOverride Operator details page, click Create Instance.
On the Create ClusterResourceOverride page, edit the YAML template to set the overcommit values as needed:
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
name: cluster (1)
spec:
podResourceOverride:
spec:
memoryRequestToLimitPercent: 50 (2)
cpuRequestToLimitPercent: 25 (3)
limitCPUToMemoryPercent: 200 (4)
1 The name must be cluster
.2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Click Create.
Check the current state of the admission webhook by checking the status of the cluster custom resource:
On the ClusterResourceOverride Operator page, click cluster.
On the ClusterResourceOverride Details page, click YAML. The
mutatingWebhookConfigurationRef
section appears when the webhook is called.
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}}
creationTimestamp: "2019-12-18T22:35:02Z"
generation: 1
name: cluster
resourceVersion: "127622"
selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster
uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d
spec:
podResourceOverride:
spec:
cpuRequestToLimitPercent: 25
limitCPUToMemoryPercent: 200
memoryRequestToLimitPercent: 50
status:
....
mutatingWebhookConfigurationRef: (1)
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
name: clusterresourceoverrides.admission.autoscaling.openshift.io
resourceVersion: "127621"
uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3
....
1 Reference to the ClusterResourceOverride
admission webhook.
Installing the Cluster Resource Override Operator using the CLI
You can use the OKD CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster.
Prerequisites
- The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a
LimitRange
object or configure limits in Pod
specs for the overrides to apply.
Procedure
To install the Cluster Resource Override Operator using the CLI:
Create a namespace for the Cluster Resource Override Operator:
Create a
Namespace
object YAML file (for example, cro-namespace.yaml) for the Cluster Resource Override Operator:
apiVersion: v1
kind: Namespace
metadata:
name: clusterresourceoverride-operator
Create the namespace:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f cro-namespace.yaml
Create an Operator group:
Create an
OperatorGroup
object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: clusterresourceoverride-operator
namespace: clusterresourceoverride-operator
spec:
targetNamespaces:
- clusterresourceoverride-operator
Create the Operator Group:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f cro-og.yaml
Create a subscription:
Create a
Subscription
object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: clusterresourceoverride
namespace: clusterresourceoverride-operator
spec:
channel: "4.6"
name: clusterresourceoverride
source: redhat-operators
sourceNamespace: openshift-marketplace
Create the subscription:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f cro-sub.yaml
Create a
ClusterResourceOverride
custom resource (CR) object in the clusterresourceoverride-operator namespace:
Change to the clusterresourceoverride-operator namespace.
$ oc project clusterresourceoverride-operator
Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator:
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
name: cluster (1)
spec:
podResourceOverride:
spec:
memoryRequestToLimitPercent: 50 (2)
cpuRequestToLimitPercent: 25 (3)
limitCPUToMemoryPercent: 200 (4)
1 The name must be cluster
.2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Create the
ClusterResourceOverride
object:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f cro-cr.yaml
Verify the current state of the admission webhook by checking the status of the cluster custom resource.
$ oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml
The
mutatingWebhookConfigurationRef
section appears when the webhook is called.
Example output
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}}
creationTimestamp: "2019-12-18T22:35:02Z"
generation: 1
name: cluster
resourceVersion: "127622"
selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster
uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d
spec:
podResourceOverride:
spec:
cpuRequestToLimitPercent: 25
limitCPUToMemoryPercent: 200
memoryRequestToLimitPercent: 50
status:
....
mutatingWebhookConfigurationRef: (1)
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
name: clusterresourceoverrides.admission.autoscaling.openshift.io
resourceVersion: "127621"
uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3
....
1 Reference to the ClusterResourceOverride
admission webhook.
Configuring cluster-level overcommit
The Cluster Resource Override Operator requires a ClusterResourceOverride
custom resource (CR) and a label for each project where you want the Operator to control overcommit.
Prerequisites
- The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a
LimitRange
object or configure limits in Pod
specs for the overrides to apply.
Procedure
To modify cluster-level overcommit:
Edit the
ClusterResourceOverride
CR:
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50 (1)
      cpuRequestToLimitPercent: 25 (2)
      limitCPUToMemoryPercent: 200 (3)
1 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 2 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 3 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit:
apiVersion: v1
kind: Namespace
metadata:
....
labels:
clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" (1)
....
1 Add this label to each project.
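As an alternative to editing the Namespace object directly, you can apply the label from the CLI; for example:
$ oc label namespace <project_name> clusterresourceoverrides.admission.autoscaling.openshift.io/enabled=true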
Node-level overcommit
You can use various methods to control overcommit on specific nodes, such as quality of service (QOS) guarantees, CPU limits, or reserving resources. You can also disable overcommit for specific nodes and specific projects.
Understanding compute resources and containers
The node-enforced behavior for compute resources is specific to the resource type.
Understanding container CPU requests
A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container.
For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specifies a limit, it is throttled so that it cannot use more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled.
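As an illustration of the 2:1 example above, a pod sketch with two such containers might look like the following; the container names and image are placeholders:
spec:
  containers:
  - name: app-large
    image: nginx
    resources:
      requests:
        cpu: "500m"
  - name: app-small
    image: nginx
    resources:
      requests:
        cpu: "250m"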
Understanding container memory requests
A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node’s resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount.
Understanding overcommitment and quality of service classes
A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity.
In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resource than is available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class.
For each compute resource, a container is divided into one of three QoS classes with decreasing order of priority:
Priority | Class Name | Description |
---|---|---|
1 (highest) | Guaranteed | If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the container is classified as Guaranteed. |
2 | Burstable | If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the container is classified as Burstable. |
3 (lowest) | BestEffort | If requests and limits are not set for any of the resources, then the container is classified as BestEffort. |
Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first:
Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted.
Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist.
BestEffort containers are treated with the lowest priority. Processes in these containers are first to be terminated if the system runs out of memory.
Understanding how to reserve memory across quality of service tiers
You can use the qos-reserved
parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources so that pods in lower QoS classes cannot use resources requested by pods in higher QoS classes.
OKD uses the qos-reserved
parameter as follows:
A value of
qos-reserved=memory=100%
will prevent theBurstable
andBestEffort
QOS classes from consuming memory that was requested by a higher QoS class. This increases the risk of inducing OOM onBestEffort
andBurstable
workloads in favor of increasing memory resource guarantees forGuaranteed
andBurstable
workloads.A value of
qos-reserved=memory=50%
will allow theBurstable
andBestEffort
QOS classes to consume half of the memory requested by a higher QoS class.A value of
qos-reserved=memory=0%
will allow the
andBestEffort
QoS classes to consume up to the full node allocatable amount if available, but increases the risk that aGuaranteed
workload will not have access to requested memory. This condition effectively disables this feature.
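Because qos-reserved is a kubelet setting, one way to apply it is through a KubeletConfig CR following the same pattern used elsewhere in this document. The following is only a minimal sketch, not a verified procedure: the object name and label are placeholders, and you should confirm that the qosReserved field, and any feature gate it requires, is available in your release:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: qos-reserve-memory   # placeholder name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: qos-reserved   # placeholder label; must match a label on your machine config pool
  kubeletConfig:
    qosReserved:
      memory: 50%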
Understanding swap memory and QOS
You can disable swap by default on your nodes to preserve quality of service (QOS) guarantees. Otherwise, physical resources on a node can be oversubscribed, affecting the resource guarantees the Kubernetes scheduler makes during pod placement.
For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed.
Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure, resulting in pods not receiving the memory they requested during scheduling. As a result, additional pods are placed on the node to further increase memory pressure, ultimately increasing your risk of experiencing a system out of memory (OOM) event.
If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure.
Understanding nodes overcommitment
In an overcommitted environment, it is important to properly configure your node to provide best system behavior.
When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory.
To ensure this behavior, OKD configures the kernel to always overcommit memory by setting the vm.overcommit_memory
parameter to 1
, overriding the default operating system setting.
OKD also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom
parameter to 0
. A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority.
You can view the current setting by running the following commands on your nodes:
$ sysctl -a |grep commit
Example output
vm.overcommit_memory = 1
$ sysctl -a |grep panic
Example output
vm.panic_on_oom = 0
The above flags should already be set on nodes, and no further action is required.
You can also perform the following configurations for each node:
Disable or enforce CPU limits using CPU CFS quotas
Reserve resources for system processes
Reserve memory across quality of service tiers
Disabling or enforcing CPU limits using CPU CFS quotas
Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel.
If you disable CPU limit enforcement, it is important to understand the impact on your node:
If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel.
If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel.
If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node.
Prerequisites
Obtain the label associated with the static
MachineConfigPool
CRD for the type of node you want to configure. Perform one of the following steps:
View the machine config pool:
$ oc describe machineconfigpool <name>
For example:
$ oc describe machineconfigpool worker
Example output
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
creationTimestamp: 2019-02-08T14:52:39Z
generation: 1
labels:
custom-kubelet: small-pods (1)
1 If a label has been added, it appears under labels.
If the label is not present, add a key/value pair:
$ oc label machineconfigpool worker custom-kubelet=small-pods
Procedure
Create a custom resource (CR) for your configuration change.
Sample configuration for disabling CPU limits
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
name: disable-cpu-units (1)
spec:
machineConfigPoolSelector:
matchLabels:
custom-kubelet: small-pods (2)
kubeletConfig:
cpuCfsQuota: false (3)
1 Assign a name to CR. 2 Specify the label to apply the configuration change. 3 Set the cpuCfsQuota
parameter to false
.
Reserving resources for system processes
To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory.
Procedure
To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes.
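For example, a minimal KubeletConfig sketch that reserves CPU and memory for system daemons might look like the following; the object name, label, and reservation values are placeholders, and the full procedure is described in Allocating Resources for Nodes:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: reserve-system-resources   # placeholder name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: reserve-resources   # placeholder label; must match a label on your machine config pool
  kubeletConfig:
    systemReserved:
      cpu: 500m
      memory: 1Gi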
Disabling overcommitment for a node
When enabled, overcommitment can be disabled on each node.
Procedure
To disable overcommitment in a node, run the following command on that node:
$ sysctl -w vm.overcommit_memory=0
Project-level limits
To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed.
For information on project-level resource limits, see Additional Resources.
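For illustration, a minimal LimitRange sketch with placeholder values might look like the following:
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits   # placeholder name
  namespace: <project_name>
spec:
  limits:
  - type: Container
    default:            # default limits for containers that do not set their own
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # default requests for containers that do not set their own
      cpu: 250m
      memory: 256Mi
    max:                # maximum limits a container may specify
      cpu: "2"
      memory: 1Gi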
Alternatively, you can disable overcommitment for specific projects.
Disabling overcommitment for a project
When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment.
Procedure
To disable overcommitment in a project:
Edit the project object file.
Add the following annotation:
quota.openshift.io/cluster-resource-override-enabled: "false"
Create the project object:
$ oc create -f <file-name>.yaml
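For example, the project object file might look similar to the following sketch, which uses a Namespace object as in the other examples in this section; the name is a placeholder:
apiVersion: v1
kind: Namespace
metadata:
  name: <project_name>
  annotations:
    quota.openshift.io/cluster-resource-override-enabled: "false"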
Freeing node resources using garbage collection
Understand and use garbage collection.
Understanding how terminated containers are removed through garbage collection
Container garbage collection can be performed using eviction thresholds.
When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long as the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it will remove containers and their logs will no longer be accessible using oc logs
.
eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period.
eviction-hard - A hard eviction threshold has no grace period, and if observed, OKD takes immediate action.
If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node would constantly oscillate between true
and false
. As a consequence, the scheduler could make poor scheduling decisions.
To protect against this oscillation, use the eviction-pressure-transition-period
flag to control how long OKD must wait before transitioning out of a pressure condition. OKD will not set an eviction threshold as being met for the specified pressure condition for the period specified before toggling the condition back to false.
Understanding how images are removed through garbage collection
Image garbage collection relies on disk usage as reported by cAdvisor on the node to decide which images to remove from the node.
The policy for image garbage collection is based on two conditions:
The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85.
The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free. Default is 80.
For image garbage collection, you can modify any of the following variables using a custom resource.
Setting | Description |
---|---|
imageMinimumGCAge | The minimum age for an unused image before the image is removed by garbage collection. The default is 2m. |
imageGCHighThresholdPercent | The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85. |
imageGCLowThresholdPercent | The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80. |
Two lists of images are retrieved in each garbage collector run:
A list of images currently running in at least one pod.
A list of images available on a host.
As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images are already marked from the previous spins. All images are then sorted by the time stamp.
Once the collection starts, the oldest images get deleted first until the stopping criterion is met.
Configuring garbage collection for containers and images
As an administrator, you can configure how OKD performs garbage collection by creating a kubeletConfig
object for each machine config pool.
OKD supports only one kubeletConfig object for each machine config pool.
You can configure any combination of the following:
soft eviction for containers
hard eviction for containers
eviction for images
For soft container eviction you can also configure a grace period before eviction.
Prerequisites
Obtain the label associated with the static
MachineConfigPool
CRD for the type of node you want to configure. Perform one of the following steps:
View the machine config pool:
$ oc describe machineconfigpool <name>
For example:
$ oc describe machineconfigpool worker
Example output
Name: worker
Namespace:
Labels: custom-kubelet=small-pods (1)
1 If a label has been added, it appears under Labels.
If the label is not present, add a key/value pair:
$ oc label machineconfigpool worker custom-kubelet=small-pods
Procedure
Create a custom resource (CR) for your configuration change.
Sample configuration for a container garbage collection CR:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
name: worker-kubeconfig (1)
spec:
machineConfigPoolSelector:
matchLabels:
custom-kubelet: small-pods (2)
kubeletConfig:
evictionSoft: (3)
memory.available: "500Mi" (4)
nodefs.available: "10%"
nodefs.inodesFree: "5%"
imagefs.available: "15%"
imagefs.inodesFree: "10%"
evictionSoftGracePeriod: (5)
memory.available: "1m30s"
nodefs.available: "1m30s"
nodefs.inodesFree: "1m30s"
imagefs.available: "1m30s"
imagefs.inodesFree: "1m30s"
evictionHard:
memory.available: "200Mi"
nodefs.available: "5%"
nodefs.inodesFree: "4%"
imagefs.available: "10%"
imagefs.inodesFree: "5%"
evictionPressureTransitionPeriod: 0s (6)
imageMinimumGCAge: 5m (7)
imageGCHighThresholdPercent: 80 (8)
imageGCLowThresholdPercent: 75 (9)
1 Name for the object. 2 Selector label. 3 Type of eviction: EvictionSoft
and EvictionHard.
4 Eviction thresholds based on a specific eviction trigger signal. 5 Grace periods for the soft eviction. This parameter does not apply to eviction-hard.
6 The duration to wait before transitioning out of an eviction pressure condition. 7 The minimum age for an unused image before the image is removed by garbage collection. 8 The percent of disk usage (expressed as an integer) which triggers image garbage collection. 9 The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free.
Create the object:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f gc-container.yaml
Example output
kubeletconfig.machineconfiguration.openshift.io/worker-kubeconfig created
Verify that garbage collection is active. The Machine Config Pool you specified in the custom resource appears with
UPDATING
as true until the change is fully implemented:
$ oc get machineconfigpool
Example output
NAME CONFIG UPDATED UPDATING
master rendered-master-546383f80705bd5aeaba93 True False
worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True
Using the Node Tuning Operator
Understand and use the Node Tuning Operator.
The Node Tuning Operator helps you manage node-level tuning by orchestrating the Tuned daemon. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs.
The Operator manages the containerized Tuned daemon for OKD as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized Tuned daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node.
Node-level settings applied by the containerized Tuned daemon are rolled back on an event that triggers a profile change or when the containerized Tuned daemon is terminated gracefully by receiving and handling a termination signal.
The Node Tuning Operator is part of a standard OKD installation in version 4.1 and later.
Accessing an example Node Tuning Operator specification
Use this process to access an example Node Tuning Operator specification.
Procedure
Run:
$ oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator
The default CR is meant for delivering standard node-level tuning for the OKD platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OKD nodes based on node or pod labels and profile priorities.
While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality might be deprecated in future versions of the Node Tuning Operator.
Custom tuning specification
The custom resource (CR) for the Operator has two major sections. The first section, profile:
, is a list of Tuned profiles and their names. The second, recommend:
, defines the profile selection logic.
Multiple custom tuning specifications can co-exist as multiple CRs in the Operator’s namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized Tuned daemons are updated.
Management state
The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState
field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows:
Managed: the Operator will update its operands as configuration resources are updated
Unmanaged: the Operator will ignore changes to the configuration resources
Removed: the Operator will remove its operands and resources the Operator provisioned
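For example, one way to change the state is to patch the default Tuned CR; this is only a sketch, and you can equally edit the CR directly:
$ oc patch Tuned/default -n openshift-cluster-node-tuning-operator --type merge -p '{"spec":{"managementState":"Unmanaged"}}'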
Profile data
The profile:
section lists Tuned profiles and their names.
profile:
- name: tuned_profile_1
data: |
# Tuned profile specification
[main]
summary=Description of tuned_profile_1 profile
[sysctl]
net.ipv4.ip_forward=1
# ... other sysctl's or other Tuned daemon plugins supported by the containerized Tuned
# ...
- name: tuned_profile_n
data: |
# Tuned profile specification
[main]
summary=Description of tuned_profile_n profile
# tuned_profile_n profile settings
Recommended profiles
The profile:
selection logic is defined by the recommend:
section of the CR. The recommend:
section is a list of items to recommend the profiles based on selection criteria.
recommend:
<recommend-item-1>
# ...
<recommend-item-n>
The individual items of the list:
- machineConfigLabels: (1)
<mcLabels> (2)
match: (3)
<match> (4)
priority: <priority> (5)
profile: <tuned_profile_name> (6)
1 | Optional. |
2 | A dictionary of key/value MachineConfig labels. The keys must be unique. |
3 | If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. |
4 | An optional list. |
5 | Profile ordering priority. Lower numbers mean higher priority (0 is the highest priority). |
6 | A Tuned profile to apply on a match. For example tuned_profile_1 . |
<match>
is an optional list recursively defined as follows:
- label: <label_name> (1)
value: <label_value> (2)
type: <label_type> (3)
<match> (4)
1 | Node or pod label name. |
2 | Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. |
3 | Optional object type (node or pod ). If omitted, node is assumed. |
4 | An optional <match> list. |
If <match>
is not omitted, all nested <match>
sections must also evaluate to true
. Otherwise, false
is assumed and the profile with the respective <match>
section will not be applied or recommended. Therefore, the nesting (child <match>
sections) works as logical AND operator. Conversely, if any item of the <match>
list matches, the entire <match>
list evaluates to true
. Therefore, the list acts as logical OR operator.
If machineConfigLabels
is defined, machine config pool based matching is turned on for the given recommend:
list item. <mcLabels>
specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name>
. This involves finding all machine config pools with machine config selector matching <mcLabels>
and setting the profile <tuned_profile_name>
on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role.
The list items match
and machineConfigLabels
are connected by the logical OR operator. The match
item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true
, the machineConfigLabels
item is not considered.
When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in Tuned operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool.
Example: node or pod label based matching
- match:
- label: tuned.openshift.io/elasticsearch
match:
- label: node-role.kubernetes.io/master
- label: node-role.kubernetes.io/infra
type: pod
priority: 10
profile: openshift-control-plane-es
- match:
- label: node-role.kubernetes.io/master
- label: node-role.kubernetes.io/infra
priority: 20
profile: openshift-control-plane
- priority: 30
profile: openshift-node
The CR above is translated for the containerized Tuned daemon into its recommend.conf
file based on the profile priorities. The profile with the highest priority (10
) is openshift-control-plane-es
and, therefore, it is considered first. The containerized Tuned daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch
label set. If not, the entire <match>
section evaluates as false
. If there is such a pod with the label, in order for the <match>
section to evaluate to true
, the node label also needs to be node-role.kubernetes.io/master
or node-role.kubernetes.io/infra
.
If the labels for the profile with priority 10
matched, openshift-control-plane-es
profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile (openshift-control-plane
) is considered. This profile is applied if the containerized Tuned pod runs on a node with labels node-role.kubernetes.io/master
or node-role.kubernetes.io/infra
.
Finally, the profile openshift-node
has the lowest priority of 30
. It lacks the <match>
section and, therefore, will always match. It acts as a profile catch-all to set openshift-node
profile, if no other profile with higher priority matches on a given node.
Example: machine config pool based matching
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
name: openshift-node-custom
namespace: openshift-cluster-node-tuning-operator
spec:
profile:
- data: |
[main]
summary=Custom OpenShift node profile with an additional kernel parameter
include=openshift-node
[bootloader]
cmdline_openshift_node_custom=+skew_tick=1
name: openshift-node-custom
recommend:
- machineConfigLabels:
machineconfiguration.openshift.io/role: "worker-custom"
priority: 20
profile: openshift-node-custom
To minimize node reboots, label the target nodes with a label the machine config pool’s node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself.
Default profiles set on a cluster
The following are the default profiles set on a cluster.
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
name: default
namespace: openshift-cluster-node-tuning-operator
spec:
profile:
- name: "openshift"
data: |
[main]
summary=Optimize systems running OpenShift (parent profile)
include=${f:virt_check:virtual-guest:throughput-performance}
[selinux]
avc_cache_threshold=8192
[net]
nf_conntrack_hashsize=131072
[sysctl]
net.ipv4.ip_forward=1
kernel.pid_max=>4194304
net.netfilter.nf_conntrack_max=1048576
net.ipv4.conf.all.arp_announce=2
net.ipv4.neigh.default.gc_thresh1=8192
net.ipv4.neigh.default.gc_thresh2=32768
net.ipv4.neigh.default.gc_thresh3=65536
net.ipv6.neigh.default.gc_thresh1=8192
net.ipv6.neigh.default.gc_thresh2=32768
net.ipv6.neigh.default.gc_thresh3=65536
vm.max_map_count=262144
[sysfs]
/sys/module/nvme_core/parameters/io_timeout=4294967295
/sys/module/nvme_core/parameters/max_retries=10
- name: "openshift-control-plane"
data: |
[main]
summary=Optimize systems running OpenShift control plane
include=openshift
[sysctl]
# ktune sysctl settings, maximizing i/o throughput
#
# Minimal preemption granularity for CPU-bound tasks:
# (default: 1 msec# (1 + ilog(ncpus)), units: nanoseconds)
kernel.sched_min_granularity_ns=10000000
# The total time the scheduler will consider a migrated process
# "cache hot" and thus less likely to be re-migrated
# (system default is 500000, i.e. 0.5 ms)
kernel.sched_migration_cost_ns=5000000
# SCHED_OTHER wake-up granularity.
#
# Preemption granularity when tasks wake up. Lower the value to
# improve wake-up latency and throughput for latency critical tasks.
kernel.sched_wakeup_granularity_ns=4000000
- name: "openshift-node"
data: |
[main]
summary=Optimize systems running OpenShift nodes
include=openshift
[sysctl]
net.ipv4.tcp_fastopen=3
fs.inotify.max_user_watches=65536
fs.inotify.max_user_instances=8192
recommend:
- profile: "openshift-control-plane"
priority: 30
match:
- label: "node-role.kubernetes.io/master"
- label: "node-role.kubernetes.io/infra"
- profile: "openshift-node"
priority: 40
Supported Tuned daemon plug-ins
Excluding the [main]
section, the following Tuned plug-ins are supported when using custom profiles defined in the profile:
section of the Tuned CR:
audio
cpu
disk
eeepc_she
modules
mounts
net
scheduler
scsi_host
selinux
sysctl
sysfs
usb
video
vm
There is some dynamic tuning functionality provided by some of these plug-ins that is not supported. The following Tuned plug-ins are currently not supported:
bootloader
script
systemd
See Available Tuned Plug-ins and Getting Started with Tuned for more information.
Configuring the maximum number of pods per node
Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore
and maxPods
. If you use both options, the lower of the two limits the number of pods on a node.
For example, if podsPerCore
is set to 10
on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40.
Prerequisites
Obtain the label associated with the static
MachineConfigPool
CRD for the type of node you want to configure. Perform one of the following steps:
View the machine config pool:
$ oc describe machineconfigpool <name>
For example:
$ oc describe machineconfigpool worker
Example output
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
creationTimestamp: 2019-02-08T14:52:39Z
generation: 1
labels:
custom-kubelet: small-pods (1)
1 If a label has been added, it appears under labels.
If the label is not present, add a key/value pair:
$ oc label machineconfigpool worker custom-kubelet=small-pods
Procedure
Create a custom resource (CR) for your configuration change.
Sample configuration for a
max-pods
CR
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
name: set-max-pods (1)
spec:
machineConfigPoolSelector:
matchLabels:
custom-kubelet: small-pods (2)
kubeletConfig:
podsPerCore: 10 (3)
maxPods: 250 (4)
1 Assign a name to CR. 2 Specify the label to apply the configuration change. 3 Specify the number of pods the node can run based on the number of processor cores on the node. 4 Specify the number of pods the node can run to a fixed value, regardless of the properties of the node. Setting
podsPerCore
to 0
disables this limit.
In the above example, the default value for podsPerCore is 10 and the default value for maxPods is 250. This means that unless the node has 25 cores or more, by default, podsPerCore will be the limiting factor.
List the MachineConfigPool CRDs to see if the change is applied. The UPDATING column reports True if the change is picked up by the Machine Config Controller:
$ oc get machineconfigpools
Example output
NAME CONFIG UPDATED UPDATING DEGRADED
master master-9cc2c72f205e103bb534 False False False
worker worker-8cecd1236b33ee3f8a5e False True False
Once the change is complete, the
UPDATED
column reports True.
$ oc get machineconfigpools
Example output
NAME CONFIG UPDATED UPDATING DEGRADED
master master-9cc2c72f205e103bb534 False True False
worker worker-8cecd1236b33ee3f8a5e True False False