- Configuring your cluster to place pods on overcommitted nodes
- Resource requests and overcommitment
- Cluster-level overcommit using the Cluster Resource Override Operator
- Node-level overcommit
- Understanding compute resources and containers
- Understanding overcommitment and quality of service classes
- Understanding swap memory and QOS
- Understanding node overcommitment
- Disabling or enforcing CPU limits using CPU CFS quotas
- Reserving resources for system processes
- Disabling overcommitment for a node
- Project-level limits
- Additional resources
Configuring your cluster to place pods on overcommitted nodes
In an overcommitted state, the sum of the container compute resource requests and limits exceeds the resources available on the system. For example, you might want to use overcommitment in development environments where a trade-off of guaranteed performance for capacity is acceptable.
Containers can specify compute resource requests and limits. Requests are used for scheduling your container and provide a minimum service guarantee. Limits constrain the amount of compute resource that can be consumed on your node.
The scheduler attempts to optimize the compute resource use across all nodes in your cluster. It places pods onto specific nodes, taking the pods’ compute resource requests and nodes’ available capacity into consideration.
OKD administrators can control the level of overcommit and manage container density on nodes. You can configure cluster-level overcommit using the ClusterResourceOverride Operator to override the ratio between requests and limits set on developer containers. In conjunction with node overcommit and project memory and CPU limits and defaults, you can adjust the resource limit and request to achieve the desired level of overcommit.
In OKD, you must enable cluster-level overcommit. Node overcommitment is enabled by default. See Disabling overcommitment for a node.
Resource requests and overcommitment
For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node.
The enforcement of limits is dependent upon the compute resource type. If a container specifies no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available, with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service.
Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 200% overcommitted.
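For illustration, a minimal Pod manifest matching this example could look as follows. This is a sketch; the pod name and image are placeholders rather than values from this documentation:
apiVersion: v1
kind: Pod
metadata:
  name: overcommit-example                    # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest    # placeholder image
    resources:
      requests:
        memory: 1Gi                           # used by the scheduler to place the pod
      limits:
        memory: 2Gi                           # enforced ceiling; 200% of the request
The scheduler only checks that 1Gi is available on the node, while the container can consume up to 2Gi at run time.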
Cluster-level overcommit using the Cluster Resource Override Operator
The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits.
You must install the Cluster Resource Override Operator using the OKD console or CLI as shown in the following sections. During the installation, you create a ClusterResourceOverride custom resource (CR), where you set the level of overcommit, as shown in the following example:
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster (1)
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50 (2)
      cpuRequestToLimitPercent: 25 (3)
      limitCPUToMemoryPercent: 200 (4)
1 The name must be cluster.
2 Optional. If a container memory limit has been specified or defaulted, the memory request is overridden to this percentage of the limit, between 1-100. The default is 50.
3 Optional. If a container CPU limit has been specified or defaulted, the CPU request is overridden to this percentage of the limit, between 1-100. The default is 25.
4 Optional. If a container memory limit has been specified or defaulted, the CPU limit is overridden to a percentage of the memory limit, if specified. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request (if configured). The default is 200.
The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object that sets default limits for each project, or configure limits in Pod specs, for the overrides to apply.
When configured, overrides can be enabled per-project by applying the following label to the Namespace object for each project:
apiVersion: v1
kind: Namespace
metadata:
  ....
  labels:
    clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true"
  ....
The Operator watches for the ClusterResourceOverride CR and ensures that the ClusterResourceOverride admission webhook is installed into the same namespace as the Operator.
Installing the Cluster Resource Override Operator using the web console
You can use the OKD web console to install the Cluster Resource Override Operator to help control overcommit in your cluster.
Prerequisites
- The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply.
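For reference, a minimal LimitRange sketch that provides such default limits is shown below. The object name, namespace, and values are examples only, not recommendations:
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits                # example name
  namespace: my-project               # example project
spec:
  limits:
  - type: Container
    default:                          # default limits for containers that omit them
      cpu: 500m
      memory: 512Mi
    defaultRequest:                   # default requests; the override webhook recalculates requests from the limits
      cpu: 250m
      memory: 256Mi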
Procedure
To install the Cluster Resource Override Operator using the OKD web console:
In the OKD web console, navigate to Home → Projects.
Click Create Project.
Specify clusterresourceoverride-operator as the name of the project.
Click Create.
Navigate to Operators → OperatorHub.
Choose ClusterResourceOverride Operator from the list of available Operators and click Install.
On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode.
Make sure clusterresourceoverride-operator is selected for Installed Namespace.
Select an Update Channel and Approval Strategy.
Click Install.
On the Installed Operators page, click ClusterResourceOverride.
On the ClusterResourceOverride Operator details page, click Create Instance.
On the Create ClusterResourceOverride page, edit the YAML template to set the overcommit values as needed:
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster (1)
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50 (2)
      cpuRequestToLimitPercent: 25 (3)
      limitCPUToMemoryPercent: 200 (4)
1 The name must be cluster.
2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50.
3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25.
4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200.
Click Create.
Check the current state of the admission webhook by checking the status of the cluster custom resource:
On the ClusterResourceOverride Operator page, click cluster.
On the ClusterResourceOverride Details page, click YAML. The mutatingWebhookConfigurationRef section appears when the webhook is called.
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}}
  creationTimestamp: "2019-12-18T22:35:02Z"
  generation: 1
  name: cluster
  resourceVersion: "127622"
  selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster
  uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d
spec:
  podResourceOverride:
    spec:
      cpuRequestToLimitPercent: 25
      limitCPUToMemoryPercent: 200
      memoryRequestToLimitPercent: 50
status:
  ....
  mutatingWebhookConfigurationRef: (1)
    apiVersion: admissionregistration.k8s.io/v1beta1
    kind: MutatingWebhookConfiguration
    name: clusterresourceoverrides.admission.autoscaling.openshift.io
    resourceVersion: "127621"
    uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3
  ....
1 Reference to the ClusterResourceOverride admission webhook.
Installing the Cluster Resource Override Operator using the CLI
You can use the OKD CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster.
Prerequisites
- The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply.
Procedure
To install the Cluster Resource Override Operator using the CLI:
Create a namespace for the Cluster Resource Override Operator:
Create a Namespace object YAML file (for example, cro-namespace.yaml) for the Cluster Resource Override Operator:
apiVersion: v1
kind: Namespace
metadata:
  name: clusterresourceoverride-operator
Create the namespace:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f cro-namespace.yaml
Create an Operator group:
Create an OperatorGroup object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: clusterresourceoverride-operator
  namespace: clusterresourceoverride-operator
spec:
  targetNamespaces:
  - clusterresourceoverride-operator
Create the Operator Group:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f cro-og.yaml
Create a subscription:
Create a Subscription object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: clusterresourceoverride
  namespace: clusterresourceoverride-operator
spec:
  channel: "4.6"
  name: clusterresourceoverride
  source: redhat-operators
  sourceNamespace: openshift-marketplace
Create the subscription:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f cro-sub.yaml
Create a ClusterResourceOverride custom resource (CR) object in the clusterresourceoverride-operator namespace:
Change to the clusterresourceoverride-operator namespace:
$ oc project clusterresourceoverride-operator
Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator:
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster (1)
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50 (2)
      cpuRequestToLimitPercent: 25 (3)
      limitCPUToMemoryPercent: 200 (4)
1 The name must be cluster.
2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50.
3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25.
4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200.
Create the ClusterResourceOverride object:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f cro-cr.yaml
Verify the current state of the admission webhook by checking the status of the cluster custom resource.
$ oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml
The mutatingWebhookConfigurationRef section appears when the webhook is called.
Example output
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}}
  creationTimestamp: "2019-12-18T22:35:02Z"
  generation: 1
  name: cluster
  resourceVersion: "127622"
  selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster
  uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d
spec:
  podResourceOverride:
    spec:
      cpuRequestToLimitPercent: 25
      limitCPUToMemoryPercent: 200
      memoryRequestToLimitPercent: 50
status:
  ....
  mutatingWebhookConfigurationRef: (1)
    apiVersion: admissionregistration.k8s.io/v1beta1
    kind: MutatingWebhookConfiguration
    name: clusterresourceoverrides.admission.autoscaling.openshift.io
    resourceVersion: "127621"
    uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3
  ....
1 Reference to the ClusterResourceOverride admission webhook.
Configuring cluster-level overcommit
The Cluster Resource Override Operator requires a ClusterResourceOverride custom resource (CR) and a label for each project where you want the Operator to control overcommit.
Prerequisites
- The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply.
Procedure
To modify cluster-level overcommit:
Edit the ClusterResourceOverride CR:
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50 (1)
      cpuRequestToLimitPercent: 25 (2)
      limitCPUToMemoryPercent: 200 (3)
1 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50.
2 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25.
3 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200.
Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit:
apiVersion: v1
kind: Namespace
metadata:
  ....
  labels:
    clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" (1)
  ....
1 Add this label to each project.
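As an alternative to editing the Namespace object directly, the label can be applied with oc. A hedged example, assuming a project named my-project:
$ oc label namespace my-project clusterresourceoverrides.admission.autoscaling.openshift.io/enabled=true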
Node-level overcommit
You can use various ways to control overcommit on specific nodes, such as quality of service (QoS) guarantees, CPU limits, or reserving resources. You can also disable overcommit for specific nodes and specific projects.
Understanding compute resources and containers
The node-enforced behavior for compute resources is specific to the resource type.
Understanding container CPU requests
A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container.
For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specifies a limit, it is throttled so that it cannot use more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled.
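As an illustration of the 2:1 example above, the following sketch shows a pod with two containers requesting 500m and 250m of CPU. The pod name and images are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: cpu-share-demo                          # hypothetical name
spec:
  containers:
  - name: heavy
    image: registry.example.com/heavy:latest    # placeholder image
    resources:
      requests:
        cpu: 500m                               # receives roughly two thirds of any spare CPU
  - name: light
    image: registry.example.com/light:latest    # placeholder image
    resources:
      requests:
        cpu: 250m                               # receives roughly one third of any spare CPU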
Understanding container memory requests
A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node’s resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount.
Understanding overcommitment and quality of service classes
A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity.
In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resource than is available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class.
For each compute resource, a container is divided into one of three QoS classes with decreasing order of priority:
Priority | Class Name | Description |
---|---|---|
1 (highest) | Guaranteed | If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the container is classified as Guaranteed. |
2 | Burstable | If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the container is classified as Burstable. |
3 (lowest) | BestEffort | If requests and limits are not set for any of the resources, then the container is classified as BestEffort. |
Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first:
Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted.
Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist.
BestEffort containers are treated with the lowest priority. Processes in these containers are first to be terminated if the system runs out of memory.
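For example, the following sketch shows a pod that would be classified as Guaranteed because requests and limits are set and equal for every resource. The name and image are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-demo                   # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest    # placeholder image
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: 500m                             # equal to the request
        memory: 512Mi                         # equal to the request
Raising the limits above the requests would make the pod Burstable; omitting both requests and limits would make it BestEffort.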
Understanding how to reserve memory across quality of service tiers
You can use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources so that pods in lower QoS classes cannot use resources requested by pods in higher QoS classes.
OKD uses the qos-reserved parameter as follows:
A value of qos-reserved=memory=100% prevents the Burstable and BestEffort QoS classes from consuming memory that was requested by a higher QoS class. This increases the risk of inducing OOM on BestEffort and Burstable workloads in favor of increasing memory resource guarantees for Guaranteed and Burstable workloads.
A value of qos-reserved=memory=50% allows the Burstable and BestEffort QoS classes to consume half of the memory requested by a higher QoS class.
A value of qos-reserved=memory=0% allows the Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to requested memory. This condition effectively disables this feature.
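The following is a hedged sketch of how this reservation might be expressed through a KubeletConfig custom resource. The object name and pool label are assumptions, and depending on your cluster version the underlying kubelet feature may also require its feature gate to be enabled:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: qos-reserved-memory            # example name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods       # label on the target machine config pool
  kubeletConfig:
    qosReserved:
      memory: 50%                      # reserve 50% of requested memory for higher QoS tiers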
Understanding swap memory and QOS
You can disable swap by default on your nodes in order to preserve quality of service (QOS) guarantees. Otherwise, physical resources on a node can oversubscribe, affecting the resource guarantees the Kubernetes scheduler makes during pod placement.
For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed.
Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure, resulting in pods not receiving the memory they requested when they were scheduled. As a result, additional pods are placed on the node to further increase memory pressure, ultimately increasing your risk of experiencing a system out of memory (OOM) event.
If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure.
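To check whether swap is active on a node, one option is to run a command through a debug pod. This is a hedged example; the node name is a placeholder:
$ oc debug node/<node-name> -- chroot /host swapon --show
No output indicates that no swap devices are active on the node.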
Understanding node overcommitment
In an overcommitted environment, it is important to properly configure your node to provide best system behavior.
When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory.
To ensure this behavior, OKD configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1, overriding the default operating system setting.
OKD also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0. A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority.
You can view the current setting by running the following commands on your nodes:
$ sysctl -a |grep commit
Example output
vm.overcommit_memory = 1
$ sysctl -a |grep panic
Example output
vm.panic_on_oom = 0
The above flags should already be set on nodes, and no further action is required.
You can also perform the following configurations for each node:
Disable or enforce CPU limits using CPU CFS quotas
Reserve resources for system processes
Reserve memory across quality of service tiers
Disabling or enforcing CPU limits using CPU CFS quotas
Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel.
If you disable CPU limit enforcement, it is important to understand the impact on your node:
If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel.
If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel.
If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node.
Prerequisites
Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure. Perform one of the following steps:
View the machine config pool:
$ oc describe machineconfigpool <name>
For example:
$ oc describe machineconfigpool worker
Example output
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: 2019-02-08T14:52:39Z
  generation: 1
  labels:
    custom-kubelet: small-pods (1)
1 If a label has been added, it appears under labels.
If the label is not present, add a key/value pair:
$ oc label machineconfigpool worker custom-kubelet=small-pods
Procedure
Create a custom resource (CR) for your configuration change.
Sample configuration for disabling CPU limits
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: disable-cpu-units (1)
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods (2)
  kubeletConfig:
    cpuCfsQuota: false (3)
1 Assign a name to the CR.
2 Specify the label to apply the configuration change.
3 Set the cpuCfsQuota parameter to false.
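After saving the CR to a file, create it in the same way as the other custom resources in this section. The file name below is only an example:
$ oc create -f disable-cpu-units.yaml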
Reserving resources for system processes
To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory.
Procedure
To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes.
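As an illustration, such a reservation can be expressed in a KubeletConfig custom resource. This is a sketch with assumed names and values, not a recommendation for specific amounts:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: reserve-system-resources       # example name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods       # label on the target machine config pool
  kubeletConfig:
    systemReserved:                    # resources set aside for system daemons
      cpu: 500m
      memory: 1Gi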
Disabling overcommitment for a node
When enabled, overcommitment can be disabled on each node.
Procedure
To disable overcommitment in a node, run the following command on that node:
$ sysctl -w vm.overcommit_memory=0
Project-level limits
To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed.
For information on project-level resource limits, see Additional Resources.
Alternatively, you can disable overcommitment for specific projects.
Disabling overcommitment for a project
When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment.
Procedure
To disable overcommitment in a project:
Edit the project object file.
Add the following annotation:
quota.openshift.io/cluster-resource-override-enabled: "false"
Create the project object:
$ oc create -f <file-name>.yaml
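For reference, a sketch of what the edited project object might look like with the annotation in place, shown here as a Namespace object with a placeholder name:
apiVersion: v1
kind: Namespace
metadata:
  name: my-infra-project               # example project
  annotations:
    quota.openshift.io/cluster-resource-override-enabled: "false"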
Additional resources
For information on setting per-project resource limits, see Setting deployment resources.
For more information about explicitly reserving resources for non-pod processes, see Allocating resources for nodes.