Cluster Autoscaler with AWS EC2 Auto Scaling Groups

This guide will show you how to install and use Kubernetes cluster-autoscaler on Rancher custom clusters using AWS EC2 Auto Scaling Groups.

We are going to install a Rancher RKE custom cluster with a fixed number of nodes holding the etcd and controlplane roles, and a variable number of nodes with the worker role, managed by cluster-autoscaler.

Prerequisites

These elements are required to follow this guide:

  • The Rancher server is up and running
  • You have an AWS EC2 user with proper permissions to create virtual machines, auto scaling groups, and IAM profiles and roles

1. Create a Custom Cluster

On the Rancher server, create a custom Kubernetes cluster. Check the cluster-autoscaler Kubernetes version compatibility before choosing the Kubernetes version.

Be sure that the cloud_provider name is set to amazonec2. Once the cluster is created, we need to gather the following information:

  • clusterID: c-xxxxx, used in the EC2 kubernetes.io/cluster/<clusterID> instance tag

  • clusterName: used in the EC2 k8s.io/cluster-autoscaler/<clusterName> instance tag

  • nodeCommand: added to the EC2 instance user_data so that new nodes join the cluster

    ```bash
    sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:<RANCHER_VERSION> --server https://<RANCHER_URL> --token <RANCHER_TOKEN> --ca-checksum <RANCHER_CHECKSUM> <roles>
    ```
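
To avoid typos later, it can help to keep these values in shell variables when composing the EC2 tag keys used in the following steps. This is a minimal sketch; the values shown are placeholders for your own cluster ID and name.

```bash
# Placeholder values; substitute the clusterID and clusterName gathered above.
CLUSTER_ID="c-xxxxx"
CLUSTER_NAME="<clusterName>"

# Tag keys referenced throughout this guide.
echo "kubernetes.io/cluster/${CLUSTER_ID}"         # value: owned
echo "k8s.io/cluster-autoscaler/${CLUSTER_NAME}"   # value: true
echo "k8s.io/cluster-autoscaler/enabled"           # value: true
```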

2. Configure the Cloud Provider

On AWS EC2, we need to create a few objects to configure our system. We’ve defined three distinct groups, each with its own IAM profile, to configure on AWS.

  1. Autoscaling group: Nodes that will be part of the EC2 Auto Scaling Group (ASG). The ASG will be used by cluster-autoscaler to scale up and down.
  • IAM profile: Required by the Kubernetes nodes where cluster-autoscaler will run; it is recommended to run cluster-autoscaler on the Kubernetes master nodes. This profile is called K8sAutoscalerProfile.

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "autoscaling:DescribeAutoScalingGroups",
            "autoscaling:DescribeAutoScalingInstances",
            "autoscaling:DescribeLaunchConfigurations",
            "autoscaling:SetDesiredCapacity",
            "autoscaling:TerminateInstanceInAutoScalingGroup",
            "autoscaling:DescribeTags",
            "ec2:DescribeLaunchTemplateVersions"
          ],
          "Resource": [
            "*"
          ]
        }
      ]
    }
    ```
  2. Master group: Nodes that will host the Kubernetes etcd and/or controlplane roles. These nodes sit outside the ASG.
  • IAM profile: Required by the Kubernetes cloud_provider integration. Optionally, AWS_ACCESS_KEY and AWS_SECRET_KEY can be used instead (see the cloud provider documentation on using AWS credentials). This profile is called K8sMasterProfile.

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "autoscaling:DescribeAutoScalingGroups",
            "autoscaling:DescribeLaunchConfigurations",
            "autoscaling:DescribeTags",
            "ec2:DescribeInstances",
            "ec2:DescribeRegions",
            "ec2:DescribeRouteTables",
            "ec2:DescribeSecurityGroups",
            "ec2:DescribeSubnets",
            "ec2:DescribeVolumes",
            "ec2:CreateSecurityGroup",
            "ec2:CreateTags",
            "ec2:CreateVolume",
            "ec2:ModifyInstanceAttribute",
            "ec2:ModifyVolume",
            "ec2:AttachVolume",
            "ec2:AuthorizeSecurityGroupIngress",
            "ec2:CreateRoute",
            "ec2:DeleteRoute",
            "ec2:DeleteSecurityGroup",
            "ec2:DeleteVolume",
            "ec2:DetachVolume",
            "ec2:RevokeSecurityGroupIngress",
            "ec2:DescribeVpcs",
            "elasticloadbalancing:AddTags",
            "elasticloadbalancing:AttachLoadBalancerToSubnets",
            "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
            "elasticloadbalancing:CreateLoadBalancer",
            "elasticloadbalancing:CreateLoadBalancerPolicy",
            "elasticloadbalancing:CreateLoadBalancerListeners",
            "elasticloadbalancing:ConfigureHealthCheck",
            "elasticloadbalancing:DeleteLoadBalancer",
            "elasticloadbalancing:DeleteLoadBalancerListeners",
            "elasticloadbalancing:DescribeLoadBalancers",
            "elasticloadbalancing:DescribeLoadBalancerAttributes",
            "elasticloadbalancing:DetachLoadBalancerFromSubnets",
            "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
            "elasticloadbalancing:ModifyLoadBalancerAttributes",
            "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
            "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
            "elasticloadbalancing:CreateListener",
            "elasticloadbalancing:CreateTargetGroup",
            "elasticloadbalancing:DeleteListener",
            "elasticloadbalancing:DeleteTargetGroup",
            "elasticloadbalancing:DescribeListeners",
            "elasticloadbalancing:DescribeLoadBalancerPolicies",
            "elasticloadbalancing:DescribeTargetGroups",
            "elasticloadbalancing:DescribeTargetHealth",
            "elasticloadbalancing:ModifyListener",
            "elasticloadbalancing:ModifyTargetGroup",
            "elasticloadbalancing:RegisterTargets",
            "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
            "iam:CreateServiceLinkedRole",
            "ecr:GetAuthorizationToken",
            "ecr:BatchCheckLayerAvailability",
            "ecr:GetDownloadUrlForLayer",
            "ecr:GetRepositoryPolicy",
            "ecr:DescribeRepositories",
            "ecr:ListImages",
            "ecr:BatchGetImage",
            "kms:DescribeKey"
          ],
          "Resource": [
            "*"
          ]
        }
      ]
    }
    ```
    • IAM role: K8sMasterRole, with the [K8sMasterProfile, K8sAutoscalerProfile] policies attached

    • Security group: K8sMasterSg. See RKE ports (custom nodes tab) for the required ports.

    • Tags: kubernetes.io/cluster/<clusterID>: owned

    • User data: K8sMasterUserData. On Ubuntu 18.04 (ami-0e11cbb34015ff725), it installs Docker and adds the node to the Kubernetes cluster with the etcd and controlplane roles:

      ```bash
      #!/bin/bash -x
      cat <<EOF > /etc/sysctl.d/90-kubelet.conf
      vm.overcommit_memory = 1
      vm.panic_on_oom = 0
      kernel.panic = 10
      kernel.panic_on_oops = 1
      kernel.keys.root_maxkeys = 1000000
      kernel.keys.root_maxbytes = 25000000
      EOF
      sysctl -p /etc/sysctl.d/90-kubelet.conf
      curl -sL https://releases.rancher.com/install-docker/19.03.sh | sh
      sudo usermod -aG docker ubuntu
      TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
      PRIVATE_IP=$(curl -H "X-aws-ec2-metadata-token: ${TOKEN}" -s http://169.254.169.254/latest/meta-data/local-ipv4)
      PUBLIC_IP=$(curl -H "X-aws-ec2-metadata-token: ${TOKEN}" -s http://169.254.169.254/latest/meta-data/public-ipv4)
      K8S_ROLES="--etcd --controlplane"
      sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:<RANCHER_VERSION> --server https://<RANCHER_URL> --token <RANCHER_TOKEN> --ca-checksum <RANCHER_CA_CHECKSUM> --address ${PUBLIC_IP} --internal-address ${PRIVATE_IP} ${K8S_ROLES}
      ```
  3. Worker group: Nodes that will be part of the Kubernetes worker plane. Worker nodes will be scaled up and down by cluster-autoscaler through the ASG.
  • IAM profile: Provides the cloud_provider worker integration. This profile is called K8sWorkerProfile.

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "ec2:DescribeInstances",
            "ec2:DescribeRegions",
            "ecr:GetAuthorizationToken",
            "ecr:BatchCheckLayerAvailability",
            "ecr:GetDownloadUrlForLayer",
            "ecr:GetRepositoryPolicy",
            "ecr:DescribeRepositories",
            "ecr:ListImages",
            "ecr:BatchGetImage"
          ],
          "Resource": "*"
        }
      ]
    }
    ```
  • IAM role: K8sWorkerRole, with the [K8sWorkerProfile] policy attached

  • Security group: K8sWorkerSg. See RKE ports (custom nodes tab) for the required ports.

  • Tags:

    • kubernetes.io/cluster/<clusterID>: owned
    • k8s.io/cluster-autoscaler/<clusterName>: true
    • k8s.io/cluster-autoscaler/enabled: true
  • User data: K8sWorkerUserData. On Ubuntu 18.04 (ami-0e11cbb34015ff725), it installs Docker and adds the node to the Kubernetes cluster with the worker role:

    ```bash
    #!/bin/bash -x
    cat <<EOF > /etc/sysctl.d/90-kubelet.conf
    vm.overcommit_memory = 1
    vm.panic_on_oom = 0
    kernel.panic = 10
    kernel.panic_on_oops = 1
    kernel.keys.root_maxkeys = 1000000
    kernel.keys.root_maxbytes = 25000000
    EOF
    sysctl -p /etc/sysctl.d/90-kubelet.conf
    curl -sL https://releases.rancher.com/install-docker/19.03.sh | sh
    sudo usermod -aG docker ubuntu
    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    PRIVATE_IP=$(curl -H "X-aws-ec2-metadata-token: ${TOKEN}" -s http://169.254.169.254/latest/meta-data/local-ipv4)
    PUBLIC_IP=$(curl -H "X-aws-ec2-metadata-token: ${TOKEN}" -s http://169.254.169.254/latest/meta-data/public-ipv4)
    K8S_ROLES="--worker"
    sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:<RANCHER_VERSION> --server https://<RANCHER_URL> --token <RANCHER_TOKEN> --ca-checksum <RANCHER_CA_CHECKSUM> --address ${PUBLIC_IP} --internal-address ${PRIVATE_IP} ${K8S_ROLES}
    ```

More information is available in the RKE clusters on AWS and Cluster Autoscaler on AWS documentation.
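
The IAM roles and instance profiles above can also be created from the command line. The following is a hedged AWS CLI sketch, not the official procedure: it assumes the policy documents from this section were saved locally as K8sMasterProfile.json, K8sAutoscalerProfile.json and K8sWorkerProfile.json, and the ec2-trust.json file name is illustrative.

```bash
# Standard EC2 trust policy so the roles can be assumed by EC2 instances.
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Master role: attach both inline policies, then expose it through an instance profile.
aws iam create-role --role-name K8sMasterRole --assume-role-policy-document file://ec2-trust.json
aws iam put-role-policy --role-name K8sMasterRole --policy-name K8sMasterProfile \
  --policy-document file://K8sMasterProfile.json
aws iam put-role-policy --role-name K8sMasterRole --policy-name K8sAutoscalerProfile \
  --policy-document file://K8sAutoscalerProfile.json
aws iam create-instance-profile --instance-profile-name K8sMasterRole
aws iam add-role-to-instance-profile --instance-profile-name K8sMasterRole --role-name K8sMasterRole

# Worker role: same pattern with the worker policy only.
aws iam create-role --role-name K8sWorkerRole --assume-role-policy-document file://ec2-trust.json
aws iam put-role-policy --role-name K8sWorkerRole --policy-name K8sWorkerProfile \
  --policy-document file://K8sWorkerProfile.json
aws iam create-instance-profile --instance-profile-name K8sWorkerRole
aws iam add-role-to-instance-profile --instance-profile-name K8sWorkerRole --role-name K8sWorkerRole
```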

3. Deploy Nodes

Once we’ve configured AWS, let’s create VMs to bootstrap our cluster:

  • master (etcd+controlplane): Depending on your needs, deploy three master instances of the proper size. See the recommendations for production-ready clusters for more information.

    • IAM role: K8sMasterRole
    • Security group: K8sMasterSg
    • Tags:
      • kubernetes.io/cluster/<clusterID>: owned
    • User data: K8sMasterUserData
  • worker: Define an ASG on EC2 with the following settings (a CLI sketch follows this list):

    • Name: K8sWorkerAsg
    • IAM role: K8sWorkerRole
    • Security group: K8sWorkerSg
    • Tags:
      • kubernetes.io/cluster/<clusterID>: owned
      • k8s.io/cluster-autoscaler/<clusterName>: true
      • k8s.io/cluster-autoscaler/enabled: true
    • User data: K8sWorkerUserData
    • Instances:
      • minimum: 2
      • desired: 2
      • maximum: 10
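
For reference, here is a hedged AWS CLI sketch of the ASG definition above. It assumes a launch template named K8sWorkerLaunchTemplate already exists (referencing the K8sWorkerRole instance profile, the K8sWorkerSg security group and the K8sWorkerUserData user data), and that SUBNET_IDS, CLUSTER_ID and CLUSTER_NAME hold your own values; these names are illustrative.

```bash
# Create the worker ASG with min 2 / desired 2 / max 10 and the tags
# cluster-autoscaler uses for auto-discovery (propagated to the instances).
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name K8sWorkerAsg \
  --launch-template "LaunchTemplateName=K8sWorkerLaunchTemplate" \
  --min-size 2 --desired-capacity 2 --max-size 10 \
  --vpc-zone-identifier "${SUBNET_IDS}" \
  --tags \
    "Key=kubernetes.io/cluster/${CLUSTER_ID},Value=owned,PropagateAtLaunch=true" \
    "Key=k8s.io/cluster-autoscaler/${CLUSTER_NAME},Value=true,PropagateAtLaunch=true" \
    "Key=k8s.io/cluster-autoscaler/enabled,Value=true,PropagateAtLaunch=true"
```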

Once the VMs are deployed, you should have a Rancher custom cluster up and running with three master and two worker nodes.

4. Install Cluster-autoscaler

At this point, we should have a Rancher cluster up and running. We are going to install cluster-autoscaler on the master nodes, in the kube-system namespace, following the cluster-autoscaler recommendation.

Parameters

This table shows cluster-autoscaler parameters for fine tuning:

| Parameter | Default | Description |
| --- | --- | --- |
| cluster-name | - | Autoscaled cluster name, if available |
| address | :8085 | The address to expose Prometheus metrics |
| kubernetes | - | Kubernetes master location. Leave blank for default |
| kubeconfig | - | Path to kubeconfig file with authorization and master location information |
| cloud-config | - | The path to the cloud provider configuration file. Empty string for no configuration file |
| namespace | "kube-system" | Namespace in which cluster-autoscaler runs |
| scale-down-enabled | true | Should CA scale down the cluster |
| scale-down-delay-after-add | "10m" | How long after scale up that scale down evaluation resumes |
| scale-down-delay-after-delete | 0 | How long after node deletion that scale down evaluation resumes, defaults to scanInterval |
| scale-down-delay-after-failure | "3m" | How long after scale down failure that scale down evaluation resumes |
| scale-down-unneeded-time | "10m" | How long a node should be unneeded before it is eligible for scale down |
| scale-down-unready-time | "20m" | How long an unready node should be unneeded before it is eligible for scale down |
| scale-down-utilization-threshold | 0.5 | Sum of cpu or memory of all pods running on the node divided by node's corresponding allocatable resource, below which a node can be considered for scale down |
| scale-down-gpu-utilization-threshold | 0.5 | Sum of gpu requests of all pods running on the node divided by node's allocatable resource, below which a node can be considered for scale down |
| scale-down-non-empty-candidates-count | 30 | Maximum number of non empty nodes considered in one iteration as candidates for scale down with drain |
| scale-down-candidates-pool-ratio | 0.1 | A ratio of nodes that are considered as additional non empty candidates for scale down when some candidates from previous iteration are no longer valid |
| scale-down-candidates-pool-min-count | 50 | Minimum number of nodes that are considered as additional non empty candidates for scale down when some candidates from previous iteration are no longer valid |
| node-deletion-delay-timeout | "2m" | Maximum time CA waits for removing delay-deletion.cluster-autoscaler.kubernetes.io/ annotations before deleting the node |
| scan-interval | "10s" | How often cluster is reevaluated for scale up or down |
| max-nodes-total | 0 | Maximum number of nodes in all node groups. Cluster autoscaler will not grow the cluster beyond this number |
| cores-total | "0:320000" | Minimum and maximum number of cores in cluster, in the format <min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers |
| memory-total | "0:6400000" | Minimum and maximum number of gigabytes of memory in cluster, in the format <min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers |
| cloud-provider | - | Cloud provider type |
| max-bulk-soft-taint-count | 10 | Maximum number of nodes that can be tainted/untainted PreferNoSchedule at the same time. Set to 0 to turn off such tainting |
| max-bulk-soft-taint-time | "3s" | Maximum duration of tainting/untainting nodes as PreferNoSchedule at the same time |
| max-empty-bulk-delete | 10 | Maximum number of empty nodes that can be deleted at the same time |
| max-graceful-termination-sec | 600 | Maximum number of seconds CA waits for pod termination when trying to scale down a node |
| max-total-unready-percentage | 45 | Maximum percentage of unready nodes in the cluster. After this is exceeded, CA halts operations |
| ok-total-unready-count | 3 | Number of allowed unready nodes, irrespective of max-total-unready-percentage |
| scale-up-from-zero | true | Should CA scale up when there are 0 ready nodes |
| max-node-provision-time | "15m" | Maximum time CA waits for node to be provisioned |
| nodes | - | Sets min,max size and other configuration data for a node group in a format accepted by cloud provider. Can be used multiple times. Format: <min>:<max>:<other…> |
| node-group-auto-discovery | - | One or more definition(s) of node group auto-discovery. A definition is expressed as <name of discoverer>:[<key>[=<value>]] |
| estimator | "binpacking" | Type of resource estimator to be used in scale up. Available values: ["binpacking"] |
| expander | "random" | Type of node group expander to be used in scale up. Available values: ["random","most-pods","least-waste","price","priority"] |
| ignore-daemonsets-utilization | false | Should CA ignore DaemonSet pods when calculating resource utilization for scaling down |
| ignore-mirror-pods-utilization | false | Should CA ignore Mirror pods when calculating resource utilization for scaling down |
| write-status-configmap | true | Should CA write status information to a configmap |
| max-inactivity | "10m" | Maximum time from last recorded autoscaler activity before automatic restart |
| max-failing-time | "15m" | Maximum time from last recorded successful autoscaler run before automatic restart |
| balance-similar-node-groups | false | Detect similar node groups and balance the number of nodes between them |
| node-autoprovisioning-enabled | false | Should CA autoprovision node groups when needed |
| max-autoprovisioned-node-group-count | 15 | The maximum number of autoprovisioned groups in the cluster |
| unremovable-node-recheck-timeout | "5m" | The timeout before we check again a node that couldn't be removed before |
| expendable-pods-priority-cutoff | -10 | Pods with priority below cutoff will be expendable. They can be killed without any consideration during scale down and they don't cause scale up. Pods with null priority (PodPriority disabled) are non expendable |
| regional | false | Cluster is regional |
| new-pod-scale-up-delay | "0s" | Pods less than this old will not be considered for scale-up |
| ignore-taint | - | Specifies a taint to ignore in node templates when considering to scale a node group |
| balancing-ignore-label | - | Specifies a label to ignore in addition to the basic and cloud-provider set of labels when comparing if two node groups are similar |
| aws-use-static-instance-list | false | Should CA fetch instance types at runtime or use a static list. AWS only |
| profiling | false | Is debug/pprof endpoint enabled |

Deployment

Based on the cluster-autoscaler-run-on-control-plane.yaml example, we've created our own cluster-autoscaler-deployment.yaml to use the preferred auto-discovery setup, updating the tolerations, nodeSelector, image version and command configuration:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
  name: cluster-autoscaler
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["events", "endpoints"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    resourceNames: ["cluster-autoscaler"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["watch", "list", "get", "update"]
  - apiGroups: [""]
    resources:
      - "pods"
      - "services"
      - "replicationcontrollers"
      - "persistentvolumeclaims"
      - "persistentvolumes"
    verbs: ["watch", "list", "get"]
  - apiGroups: ["extensions"]
    resources: ["replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["watch", "list"]
  - apiGroups: ["apps"]
    resources: ["statefulsets", "replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses", "csinodes"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["batch", "extensions"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create"]
  - apiGroups: ["coordination.k8s.io"]
    resourceNames: ["cluster-autoscaler"]
    resources: ["leases"]
    verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create", "list", "watch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"]
    verbs: ["delete", "get", "update", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '8085'
    spec:
      serviceAccountName: cluster-autoscaler
      tolerations:
        - effect: NoSchedule
          operator: "Equal"
          value: "true"
          key: node-role.kubernetes.io/controlplane
      nodeSelector:
        node-role.kubernetes.io/controlplane: "true"
      containers:
        - image: eu.gcr.io/k8s-artifacts-prod/autoscaling/cluster-autoscaler:<VERSION>
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --expander=least-waste
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<clusterName>
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"
```

Once the manifest file is prepared, deploy it in the Kubernetes cluster (Rancher UI can be used instead):

```bash
kubectl -n kube-system apply -f cluster-autoscaler-deployment.yaml
```
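
To confirm the deployment came up as expected, you can check that the pod landed on a controlplane node and is healthy. A minimal check, assuming the app: cluster-autoscaler label from the manifest above:

```bash
# Pod should be Running on a node with the controlplane role
kubectl -n kube-system get pods -l app=cluster-autoscaler -o wide
# Recent autoscaler log output
kubectl -n kube-system logs -l app=cluster-autoscaler --tail=50
```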

Note: The cluster-autoscaler deployment can also be set up using manual configuration instead of auto-discovery.
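
With manual configuration, each node group is passed explicitly instead of being discovered from ASG tags. A minimal sketch of the changed command arguments in the Deployment, assuming the K8sWorkerAsg bounds used in this guide:

```bash
# Replace --node-group-auto-discovery with a static node group definition,
# in the format --nodes=<min>:<max>:<ASG name>
./cluster-autoscaler \
  --cloud-provider=aws \
  --nodes=2:10:K8sWorkerAsg
```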

Testing

At this point, we should have cluster-autoscaler up and running in our Rancher custom cluster. Cluster-autoscaler should manage the K8sWorkerAsg ASG, scaling it up and down between 2 and 10 nodes, when one of the following conditions is true:

  • There are pods that failed to run in the cluster due to insufficient resources. In this case, the cluster is scaled up.
  • There are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes. In this case, the cluster is scaled down.
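
Both behaviors can be observed from the autoscaler itself. A couple of hedged commands, assuming the write-status-configmap default is left enabled:

```bash
# Follow scale up/down decisions in the logs
kubectl -n kube-system logs -l app=cluster-autoscaler -f
# Inspect the status configmap written by cluster-autoscaler
kubectl -n kube-system get configmap cluster-autoscaler-status -o yaml
```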

Generating Load

We’ve prepared a test-deployment.yaml just to generate load on the Kubernetes cluster and check that cluster-autoscaler is working properly. The test deployment requests 1000m CPU and 1024Mi memory for each of its three replicas. Adjust the requested resources and/or replica count to be sure you exhaust the Kubernetes cluster's resources:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - image: rancher/hello-world
          imagePullPolicy: Always
          name: hello-world
          ports:
            - containerPort: 80
              protocol: TCP
          resources:
            limits:
              cpu: 1000m
              memory: 1024Mi
            requests:
              cpu: 1000m
              memory: 1024Mi
```

Once the test deployment is prepared, deploy it in the Kubernetes cluster default namespace (Rancher UI can be used instead):

```bash
kubectl -n default apply -f test-deployment.yaml
```
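
While the deployment rolls out, you can watch for Pending pods, since unschedulable pods are what trigger a scale up. A quick check, assuming the app: hello-world label from the manifest:

```bash
kubectl -n default get pods -l app=hello-world -o wide -w
```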

Checking Scale

Once the Kubernetes cluster resources are exhausted, cluster-autoscaler should scale up worker nodes wherever pods failed to be scheduled, and keep scaling up until all pods become schedulable. You should see the new nodes in the ASG and in the Kubernetes cluster. Check the logs on the kube-system cluster-autoscaler pod.
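
A hedged way to verify both sides, assuming the AWS CLI is configured for the same account and region as the cluster:

```bash
# New instances should appear in the ASG...
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names K8sWorkerAsg
# ...and join the cluster as worker nodes
kubectl get nodes
```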

Once scale up is verified, let's check scale down. To do this, reduce the replica count on the test deployment until you release enough Kubernetes cluster resources for a scale down. You should see nodes disappear from the ASG and from the Kubernetes cluster. Check the logs on the kube-system cluster-autoscaler pod.
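
For example, shrinking the test deployment to a single replica should leave some workers underutilized; after scale-down-unneeded-time elapses, cluster-autoscaler should remove them:

```bash
kubectl -n default scale deployment hello-world --replicas=1
# Watch worker nodes being drained and removed over the next several minutes
kubectl get nodes -w
```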