Horizontal Pod Autoscaler


Using the Kubernetes Horizontal Pod Autoscaler feature (HPA), you can configure your cluster to automatically scale the services it’s running up or down.

Why Use Horizontal Pod Autoscaler?

Using HPA, you can automatically scale the number of pods within a replication controller, deployment, or replica set up or down, keeping your services running as efficiently as possible. Factors that affect the number of pods include:

  • A minimum and maximum number of pods allowed to run, as defined by the user.
  • Observed CPU/memory use, as reported in resource metrics.
  • Custom metrics provided by third-party metrics applications like Prometheus, Datadog, etc.

HPA improves your services by:

  • Releasing hardware resources that would otherwise be wasted by an excessive number of pods.

  • Increasing or decreasing performance as needed to meet service level agreements.

How HPA Works

[Figure: HPA schema]

HPA is implemented as a control loop, with a period controlled by the kube-controller-manager flags below:

Flag                                          Default   Description
--horizontal-pod-autoscaler-sync-period       30s       How often HPA audits resource/custom metrics in a deployment.
--horizontal-pod-autoscaler-downscale-delay   5m0s      Following completion of a downscale operation, how long HPA must wait before launching another downscale operation.
--horizontal-pod-autoscaler-upscale-delay     3m0s      Following completion of an upscale operation, how long HPA must wait before launching another upscale operation.

For full documentation on HPA, refer to the Kubernetes Documentation.

Horizontal Pod Autoscaler API Objects

HPA is an API resource in the Kubernetes autoscaling API group. The current stable version is autoscaling/v1, which only includes support for CPU autoscaling. To get additional support for scaling based on memory and custom metrics, use the beta version instead: autoscaling/v2beta1.
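For comparison, a minimal CPU-only manifest using the stable autoscaling/v1 schema might look like the sketch below (the hello-world target name is just an example):

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: hello-world
    spec:
      scaleTargetRef:
        apiVersion: extensions/v1beta1
        kind: Deployment
        name: hello-world
      minReplicas: 1
      maxReplicas: 10
      # autoscaling/v1 supports only this single CPU target; memory and
      # custom metrics require autoscaling/v2beta1
      targetCPUUtilizationPercentage: 50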

For more information about the HPA API object, see the HPA GitHub Readme.

kubectl Commands

You can create, manage, and delete HPAs using kubectl:

  • Creating HPA

    • With manifest: kubectl --kubeconfig=kube_configxxx.yml create -f <HPA_MANIFEST>

    • Without manifest (CPU only): kubectl autoscale deployment hello-world --min=2 --max=5 --cpu-percent=50

  • Getting HPA info

    • Basic: kubectl --kubeconfig=kube_configxxx.yml get hpa hello-world

    • Detailed description: kubectl --kubeconfig=kube_configxxx.yml describe hpa hello-world

  • Deleting HPA

    • kubectl --kubeconfig=kube_configxxx.yml delete hpa hello-world

HPA Manifest Definition Example

The following snippet demonstrates use of different directives in an HPA manifest. See the list below the sample to understand the purpose of each directive.

    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: hello-world
    spec:
      scaleTargetRef:
        apiVersion: extensions/v1beta1
        kind: Deployment
        name: hello-world
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          targetAverageUtilization: 50
      - type: Resource
        resource:
          name: memory
          targetAverageValue: 100Mi
Directive                          Description
apiVersion: autoscaling/v2beta1    The version of the Kubernetes autoscaling API group in use. This example manifest uses the beta version, so scaling by CPU and memory is enabled.
name: hello-world                  Indicates that HPA is performing autoscaling for the hello-world deployment.
minReplicas: 1                     Indicates that the minimum number of replicas running can't go below 1.
maxReplicas: 10                    Indicates the maximum number of replicas in the deployment can't go above 10.
targetAverageUtilization: 50       Indicates the deployment will scale pods up when the average running pod uses more than 50% of its requested CPU.
targetAverageValue: 100Mi          Indicates the deployment will scale pods up when the average running pod uses more than 100Mi of memory.

Installation

Before you can use HPA in your Kubernetes cluster, you must fulfill some requirements.

Requirements

Be sure that your Kubernetes cluster services are running with these flags at minimum:

  • kube-api: requestheader-client-ca-file
  • kubelet: read-only-port at 10255
  • kube-controller: Optional; only needed if values different from the defaults are required.

    • horizontal-pod-autoscaler-downscale-delay: "5m0s"
    • horizontal-pod-autoscaler-upscale-delay: "3m0s"
    • horizontal-pod-autoscaler-sync-period: "30s"

For an RKE Kubernetes cluster definition, add this snippet in the services section. To add it using the Rancher v2.0 UI, open the Clusters view and select Ellipsis (…) > Edit for the cluster in which you want to use HPA. Then, from Cluster Options, click Edit as YAML and add the following snippet to the services section:
    services:
      ...
      kube-api:
        extra_args:
          requestheader-client-ca-file: "/etc/kubernetes/ssl/kube-ca.pem"
      kube-controller:
        extra_args:
          horizontal-pod-autoscaler-downscale-delay: "5m0s"
          horizontal-pod-autoscaler-upscale-delay: "1m0s"
          horizontal-pod-autoscaler-sync-period: "30s"
      kubelet:
        extra_args:
          read-only-port: 10255

Once the Kubernetes cluster is configured and deployed, you can deploy metrics services.

Note: kubectl command samples in the sections that follow were tested in a cluster running Rancher v2.0.6 and Kubernetes v1.10.1.

Configuring HPA to Scale Using Resource Metrics

To create HPA resources based on resource metrics such as CPU and memory use, you need to deploy the metrics-server package in the kube-system namespace of your Kubernetes cluster. This deployment allows HPA to consume the metrics.k8s.io API.
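Once metrics-server is running (deployment steps follow), a quick sanity check of the metrics.k8s.io API is to request live pod metrics. A sketch, assuming the default namespace and the kubeconfig naming used throughout this article:

    # List per-pod CPU/memory as reported through the metrics API
    kubectl --kubeconfig=kube_configxxx.yml top pods -n default

    # Or query the raw metrics API for one namespace's pod metrics
    kubectl --kubeconfig=kube_configxxx.yml get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods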

Prerequisite: You must be running kubectl 1.8 or later.

  • Connect to your Kubernetes cluster using kubectl.

  • Clone the metrics-server GitHub repo:

    git clone https://github.com/kubernetes-incubator/metrics-server

  • Install the metrics-server package:

    kubectl --kubeconfig=kube_configxxx.yml create -f metrics-server/deploy/1.8+/
  • Check that metrics-server is running properly. Check the service pod and logs in the kube-system namespace.

    • Check the service pod for a status of Running. Enter the following command:

      kubectl --kubeconfig=kube_configxxx.yml get pods -n kube-system

Then check for a status of Running:

      NAME                              READY   STATUS    RESTARTS   AGE
      ...
      metrics-server-6fbfb84cdd-t2fk9   1/1     Running   0          8h
      ...
    • Check the service logs for service availability. Enter the following command:

      kubectl --kubeconfig=kube_configxxx.yml -n kube-system logs metrics-server-6fbfb84cdd-t2fk9

Then review the log to confirm that the metrics-server package is running.

Metrics Server Log Output

    I0723 08:09:56.193136 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:''
    I0723 08:09:56.193574 1 heapster.go:72] Metrics Server version v0.2.1
    I0723 08:09:56.194480 1 configs.go:61] Using Kubernetes client with master "https://10.43.0.1:443" and version
    I0723 08:09:56.194501 1 configs.go:62] Using kubelet port 10255
    I0723 08:09:56.198612 1 heapster.go:128] Starting with Metric Sink
    I0723 08:09:56.780114 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
    I0723 08:09:57.391518 1 heapster.go:101] Starting Heapster API server...
    [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
    [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
    I0723 08:09:57.394080 1 serve.go:85] Serving securely on 0.0.0.0:443
  • Check that the metrics API is accessible from kubectl.

    • If you are accessing the cluster directly, enter your Server URL in the kubectl config in the following format: https://<K8s_URL>:6443. Then run:

      kubectl --kubeconfig=kube_configxxx.yml get --raw /apis/metrics.k8s.io/v1beta1

If the API is working correctly, you should receive output similar to the output below.

    {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]}

    • If you are accessing the cluster through the Rancher API endpoint, query the cluster-specific path instead:

      kubectl --kubeconfig=kube_configxxx.yml get --raw /k8s/clusters/<CLUSTER_ID>/apis/metrics.k8s.io/v1beta1

If the API is working correctly, you should receive output similar to the output below.

    {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]}

Configuring HPA to Scale Using Custom Metrics (Prometheus)

You can also configure HPA to autoscale based on custom metrics provided by third-party software. The most common use case for autoscaling using third-party software is based on application-level metrics (i.e., HTTP requests per second). HPA uses the custom.metrics.k8s.io API to consume these metrics. This API is enabled by deploying a custom metrics adapter for the metrics collection solution.
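For instance, once an adapter is deployed, you can issue the same kind of query against the custom metrics API that the HPA controller does. A sketch, using the cpu_system metric and app=hello-world label from the example later in this article:

    # Fetch the cpu_system metric for all pods matching app=hello-world,
    # mirroring the query the HPA controller sends to the adapter
    kubectl --kubeconfig=kube_configxxx.yml get --raw \
      "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/cpu_system?labelSelector=app%3Dhello-world"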

For this example, we are going to use Prometheus. We are beginning with the following assumptions:

  • Prometheus is deployed in the cluster.
  • Prometheus is configured correctly and collecting proper metrics from pods, nodes, namespaces, etc.
  • Prometheus is exposed at the following URL and port: http://prometheus.mycompany.io:80

Prometheus is available for deployment in the Rancher v2.0 catalog. Deploy it from the Rancher catalog if it isn't already running in your cluster.

For HPA to use custom metrics from Prometheus, package k8s-prometheus-adapter is required in the kube-system namespace of your cluster. To install k8s-prometheus-adapter, we are using the Helm chart available at banzai-charts.

  • Initialize Helm in your cluster.

    kubectl --kubeconfig=kube_configxxx.yml -n kube-system create serviceaccount tiller
    kubectl --kubeconfig=kube_configxxx.yml create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
    helm init --service-account tiller

  • Clone the banzai-charts repo from GitHub:

    git clone https://github.com/banzaicloud/banzai-charts

  • Install the prometheus-adapter chart, specifying the Prometheus URL and port number.

    helm install --name prometheus-adapter banzai-charts/prometheus-adapter \
      --set prometheus.url="http://prometheus.mycompany.io",prometheus.port="80" --namespace kube-system
  • Check that prometheus-adapter is running properly. Check the service pod and logs in the kube-system namespace.

    • Check that the service pod has a status of Running. Enter the following command.

      kubectl --kubeconfig=kube_configxxx.yml get pods -n kube-system

From the resulting output, look for a status of Running.

      NAME                                                      READY   STATUS    RESTARTS   AGE
      ...
      prometheus-adapter-prometheus-adapter-568674d97f-hbzfx    1/1     Running   0          7h
      ...

    • Check the service logs to make sure the service is running correctly by entering the following command.

      kubectl logs prometheus-adapter-prometheus-adapter-568674d97f-hbzfx -n kube-system

Then review the log output to confirm the service is running.

Prometheus Adapter Logs

    ...
    I0724 10:18:45.696679 1 round_trippers.go:436] GET https://10.43.0.1:443/api/v1/namespaces/default/pods?labelSelector=app%3Dhello-world 200 OK in 2 milliseconds
    I0724 10:18:45.696695 1 round_trippers.go:442] Response Headers:
    I0724 10:18:45.696699 1 round_trippers.go:445] Date: Tue, 24 Jul 2018 10:18:45 GMT
    I0724 10:18:45.696703 1 round_trippers.go:445] Content-Type: application/json
    I0724 10:18:45.696706 1 round_trippers.go:445] Content-Length: 2581
    I0724 10:18:45.696766 1 request.go:836] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/default/pods","resourceVersion":"6237"},"items":[{"metadata":{"name":"hello-world-54764dfbf8-q6l82","generateName":"hello-world-54764dfbf8-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/hello-world-54764dfbf8-q6l82","uid":"484cb929-8f29-11e8-99d2-067cac34e79c","resourceVersion":"4066","creationTimestamp":"2018-07-24T10:06:50Z","labels":{"app":"hello-world","pod-template-hash":"1032089694"},"annotations":{"cni.projectcalico.org/podIP":"10.42.0.7/32"},"ownerReferences":[{"apiVersion":"extensions/v1beta1","kind":"ReplicaSet","name":"hello-world-54764dfbf8","uid":"4849b9b1-8f29-11e8-99d2-067cac34e79c","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-ncvts","secret":{"secretName":"default-token-ncvts","defaultMode":420}}],"containers":[{"name":"hello-world","image":"rancher/hello-world","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{"requests":{"cpu":"500m","memory":"64Mi"}},"volumeMounts":[{"name":"default-token-ncvts","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"34.220.18.140","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:54Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"}],"hostIP":"34.220.18.140","podIP":"10.42.0.7","startTime":"2018-07-24T10:06:50Z","containerStatuses":[{"name":"hello-world","state":{"running":{"startedAt":"2018-07-24T10:06:54Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"rancher/hello-world:latest","imageID":"docker-pullable://rancher/hello-world@sha256:4b1559cb4b57ca36fa2b313a3c7dde774801aa3a2047930d94e11a45168bc053","containerID":"docker://cce4df5fc0408f03d4adf82c90de222f64c302bf7a04be1c82d584ec31530773"}],"qosClass":"Burstable"}}]}
    I0724 10:18:45.699525 1 api.go:74] GET http://prometheus-server.prometheus.34.220.18.140.xip.io/api/v1/query?query=sum%28rate%28container_fs_read_seconds_total%7Bpod_name%3D%22hello-world-54764dfbf8-q6l82%22%2Ccontainer_name%21%3D%22POD%22%2Cnamespace%3D%22default%22%7D%5B5m%5D%29%29+by+%28pod_name%29&time=1532427525.697 200 OK
    I0724 10:18:45.699620 1 api.go:93] Response Body: {"status":"success","data":{"resultType":"vector","result":[{"metric":{"pod_name":"hello-world-54764dfbf8-q6l82"},"value":[1532427525.697,"0"]}]}}
    I0724 10:18:45.699939 1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/fs_read?labelSelector=app%3Dhello-world: (12.431262ms) 200 [[kube-controller-manager/v1.10.1 (linux/amd64) kubernetes/d4ab475/system:serviceaccount:kube-system:horizontal-pod-autoscaler] 10.42.0.0:24268]
    I0724 10:18:51.727845 1 request.go:836] Request Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":false}}
    ...
  • Check that the metrics API is accessible from kubectl.

    • If you are accessing the cluster directly, run:

      kubectl --kubeconfig=kube_configxxx.yml get --raw /apis/custom.metrics.k8s.io/v1beta1

If the API is accessible, you should receive output that’s similar to what follows.

API Response

{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[{"name":"pods/fs_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_rss","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_period","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_read","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_user","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/last_seen","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/tasks_state","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_quota","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/start_time_seconds","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_write","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_cache","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_working_set_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_udp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes_free","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time_weighted","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failures","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_swap","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_shares","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_swap_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_current","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failcnt","singularName":"","namespaced":true,"kind":"MetricValueList","v
erbs":["get"]},{"name":"pods/fs_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_tcp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_max_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_reservation_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_load_average_10s","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_system","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]}]}
    • If you are accessing the cluster through the Rancher API endpoint, query the cluster-specific path instead:

      kubectl --kubeconfig=kube_configxxx.yml get --raw /k8s/clusters/<CLUSTER_ID>/apis/custom.metrics.k8s.io/v1beta1

If the API is accessible, you should receive output that’s similar to what follows.

API Response

{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[{"name":"pods/fs_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_rss","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_period","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_read","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_user","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/last_seen","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/tasks_state","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_quota","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/start_time_seconds","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_write","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_cache","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_working_set_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_udp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes_free","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time_weighted","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failures","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_swap","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_shares","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_swap_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_current","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failcnt","singularName":"","namespaced":true,"kind":"MetricValueList","v
erbs":["get"]},{"name":"pods/fs_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_tcp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_max_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_reservation_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_load_average_10s","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_system","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]}]}

Assigning Additional Required Roles to Your HPA

By default, HPA reads resource and custom metrics as the user system:anonymous. Bind system:anonymous to the view-resource-metrics and view-custom-metrics ClusterRoles using the ClusterRole and ClusterRoleBinding manifests below; these roles grant access to the metrics APIs.

To do so, follow these steps:

  • Configure kubectl to connect to your cluster.

  • Copy the ClusterRole and ClusterRoleBinding manifest for the type of metrics you’re using for your HPA.

Resource Metrics: API group metrics.k8s.io

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: view-resource-metrics
    rules:
    - apiGroups:
      - metrics.k8s.io
      resources:
      - pods
      - nodes
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: view-resource-metrics
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: view-resource-metrics
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: system:anonymous

Custom Metrics: API group custom.metrics.k8s.io

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: view-custom-metrics
    rules:
    - apiGroups:
      - custom.metrics.k8s.io
      resources:
      - "*"
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: view-custom-metrics
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: view-custom-metrics
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: system:anonymous
  • Create them in your cluster using one of the following commands, depending on the metrics you're using.

    kubectl --kubeconfig=kube_configxxx.yml create -f <RESOURCE_METRICS_MANIFEST>
    kubectl --kubeconfig=kube_configxxx.yml create -f <CUSTOM_METRICS_MANIFEST>

Testing HPAs with a Service Deployment

For HPA to work correctly, service deployments must define resource requests for their containers. Follow this hello-world example to test whether HPA is working correctly.

  • Configure kubectl to connect to your Kubernetes cluster.

  • Copy the hello-world deployment manifest below.

    apiVersion: apps/v1beta2
    kind: Deployment
    metadata:
      labels:
        app: hello-world
      name: hello-world
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: hello-world
      strategy:
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 0
        type: RollingUpdate
      template:
        metadata:
          labels:
            app: hello-world
        spec:
          containers:
          - image: rancher/hello-world
            imagePullPolicy: Always
            name: hello-world
            resources:
              requests:
                cpu: 500m
                memory: 64Mi
            ports:
            - containerPort: 80
              protocol: TCP
          restartPolicy: Always
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-world
      namespace: default
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: hello-world
  • Deploy it to your cluster.

    kubectl --kubeconfig=kube_configxxx.yml create -f <HELLO_WORLD_MANIFEST>
  • Copy one of the HPAs below based on the metric type you’re using:

Hello World HPA: Resource Metrics

    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: hello-world
      namespace: default
    spec:
      scaleTargetRef:
        apiVersion: extensions/v1beta1
        kind: Deployment
        name: hello-world
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          targetAverageUtilization: 50
      - type: Resource
        resource:
          name: memory
          targetAverageValue: 1000Mi

Hello World HPA: Custom Metrics

    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: hello-world
      namespace: default
    spec:
      scaleTargetRef:
        apiVersion: extensions/v1beta1
        kind: Deployment
        name: hello-world
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          targetAverageUtilization: 50
      - type: Resource
        resource:
          name: memory
          targetAverageValue: 100Mi
      - type: Pods
        pods:
          metricName: cpu_system
          targetAverageValue: 20m
  • View the HPA info and description. Confirm that metric data is shown.

Resource Metrics

  • Enter the following command.

    kubectl --kubeconfig=kube_configxxx.yml get hpa

You should receive the output that follows:

    NAME          REFERENCE                TARGETS                     MINPODS   MAXPODS   REPLICAS   AGE
    hello-world   Deployment/hello-world   1253376 / 100Mi, 0% / 50%   1         10        1          6m

  • For a detailed description, enter the following command.

    kubectl --kubeconfig=kube_configxxx.yml describe hpa

You should receive the output that follows:

    Name: hello-world
    Namespace: default
    Labels: <none>
    Annotations: <none>
    CreationTimestamp: Mon, 23 Jul 2018 20:21:16 +0200
    Reference: Deployment/hello-world
    Metrics: ( current / target )
      resource memory on pods: 1253376 / 100Mi
      resource cpu on pods (as a percentage of request): 0% (0) / 50%
    Min replicas: 1
    Max replicas: 10
    Conditions:
      Type            Status  Reason              Message
      ----            ------  ------              -------
      AbleToScale     True    ReadyForNewScale    the last scale time was sufficiently old as to warrant a new scale
      ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from memory resource
      ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
    Events: <none>

Custom Metrics

  • Enter the following command.

    kubectl --kubeconfig=kube_configxxx.yml describe hpa

You should receive the output that follows.

    Name: hello-world
    Namespace: default
    Labels: <none>
    Annotations: <none>
    CreationTimestamp: Tue, 24 Jul 2018 18:36:28 +0200
    Reference: Deployment/hello-world
    Metrics: ( current / target )
      resource memory on pods: 3514368 / 100Mi
      "cpu_system" on pods: 0 / 20m
      resource cpu on pods (as a percentage of request): 0% (0) / 50%
    Min replicas: 1
    Max replicas: 10
    Conditions:
      Type            Status  Reason              Message
      ----            ------  ------              -------
      AbleToScale     True    ReadyForNewScale    the last scale time was sufficiently old as to warrant a new scale
      ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from memory resource
      ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
    Events: <none>
  • Generate load for the service to test that your pods autoscale as intended. You can use any load-testing tool (Hey, Gatling, etc.), but we're using Hey, as sketched below.

  • Test that pod autoscaling works as intended.

To Test Autoscaling Using Resource Metrics:
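For the load-generation step above, a minimal Hey invocation might look like the sketch below; the target address is an assumption, so substitute your hello-world service's external IP or ingress URL:

    # Drive sustained HTTP load for 3 minutes with 50 concurrent workers,
    # enough to push average CPU past the 50% utilization target
    hey -z 3m -c 50 http://<HELLO_WORLD_SERVICE_ADDRESS>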

Upscale to 2 Pods: CPU Usage Up to Target

Use your load testing tool to scale up to two pods based on CPU usage.

  • View your HPA.

    kubectl --kubeconfig=kube_configxxx.yml describe hpa

You should receive output similar to what follows.

    Name: hello-world
    Namespace: default
    Labels: <none>
    Annotations: <none>
    CreationTimestamp: Mon, 23 Jul 2018 22:22:04 +0200
    Reference: Deployment/hello-world
    Metrics: ( current / target )
      resource memory on pods: 10928128 / 100Mi
      resource cpu on pods (as a percentage of request): 56% (280m) / 50%
    Min replicas: 1
    Max replicas: 10
    Conditions:
      Type            Status  Reason              Message
      ----            ------  ------              -------
      AbleToScale     True    SucceededRescale    the HPA controller was able to update the target scale to 2
      ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
      ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
    Events:
      Type    Reason             Age  From                       Message
      ----    ------             ---  ----                       -------
      Normal  SuccessfulRescale  13s  horizontal-pod-autoscaler  New size: 2; reason: cpu resource utilization (percentage of request) above target
  • Enter the following command to confirm you've scaled to two pods.

    kubectl --kubeconfig=kube_configxxx.yml get pods

You should receive output similar to what follows:

    NAME                           READY   STATUS    RESTARTS   AGE
    hello-world-54764dfbf8-k8ph2   1/1     Running   0          1m
    hello-world-54764dfbf8-q6l4v   1/1     Running   0          3h

Upscale to 3 Pods: CPU Usage Up to Target

Use your load testing tool to upscale to three pods based on CPU usage, with horizontal-pod-autoscaler-upscale-delay set to 3 minutes.

  • Enter the following command.

    kubectl --kubeconfig=kube_configxxx.yml describe hpa

You should receive output similar to what follows.

    Name: hello-world
    Namespace: default
    Labels: <none>
    Annotations: <none>
    CreationTimestamp: Mon, 23 Jul 2018 22:22:04 +0200
    Reference: Deployment/hello-world
    Metrics: ( current / target )
      resource memory on pods: 9424896 / 100Mi
      resource cpu on pods (as a percentage of request): 66% (333m) / 50%
    Min replicas: 1
    Max replicas: 10
    Conditions:
      Type            Status  Reason              Message
      ----            ------  ------              -------
      AbleToScale     True    SucceededRescale    the HPA controller was able to update the target scale to 3
      ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
      ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
    Events:
      Type    Reason             Age  From                       Message
      ----    ------             ---  ----                       -------
      Normal  SuccessfulRescale  4m   horizontal-pod-autoscaler  New size: 2; reason: cpu resource utilization (percentage of request) above target
      Normal  SuccessfulRescale  16s  horizontal-pod-autoscaler  New size: 3; reason: cpu resource utilization (percentage of request) above target
  • Enter the following command to confirm three pods are running.

    kubectl --kubeconfig=kube_configxxx.yml get pods

You should receive output similar to what follows.

    NAME                           READY   STATUS    RESTARTS   AGE
    hello-world-54764dfbf8-f46kh   0/1     Running   0          1m
    hello-world-54764dfbf8-k8ph2   1/1     Running   0          5m
    hello-world-54764dfbf8-q6l4v   1/1     Running   0          3h

Downscale to 1 Pod: All Metrics Below Target

Use your load testing tool to scale down to one pod when all metrics have been below target for horizontal-pod-autoscaler-downscale-delay (5 minutes by default).

  • Enter the following command.

    kubectl --kubeconfig=kube_configxxx.yml describe hpa

You should receive output similar to what follows.

    Name: hello-world
    Namespace: default
    Labels: <none>
    Annotations: <none>
    CreationTimestamp: Mon, 23 Jul 2018 22:22:04 +0200
    Reference: Deployment/hello-world
    Metrics: ( current / target )
      resource memory on pods: 10070016 / 100Mi
      resource cpu on pods (as a percentage of request): 0% (0) / 50%
    Min replicas: 1
    Max replicas: 10
    Conditions:
      Type            Status  Reason              Message
      ----            ------  ------              -------
      AbleToScale     True    SucceededRescale    the HPA controller was able to update the target scale to 1
      ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from memory resource
      ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
    Events:
      Type    Reason             Age  From                       Message
      ----    ------             ---  ----                       -------
      Normal  SuccessfulRescale  10m  horizontal-pod-autoscaler  New size: 2; reason: cpu resource utilization (percentage of request) above target
      Normal  SuccessfulRescale  6m   horizontal-pod-autoscaler  New size: 3; reason: cpu resource utilization (percentage of request) above target
      Normal  SuccessfulRescale  1s   horizontal-pod-autoscaler  New size: 1; reason: All metrics below target

To Test Autoscaling Using Custom Metrics:

Use your load testing tool to scale up to two pods based on CPU usage.

  • Enter the following command.

    kubectl --kubeconfig=kube_configxxx.yml describe hpa

You should receive output similar to what follows.

    Name: hello-world
    Namespace: default
    Labels: <none>
    Annotations: <none>
    CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200
    Reference: Deployment/hello-world
    Metrics: ( current / target )
      resource memory on pods: 8159232 / 100Mi
      "cpu_system" on pods: 7m / 20m
      resource cpu on pods (as a percentage of request): 64% (321m) / 50%
    Min replicas: 1
    Max replicas: 10
    Conditions:
      Type            Status  Reason              Message
      ----            ------  ------              -------
      AbleToScale     True    SucceededRescale    the HPA controller was able to update the target scale to 2
      ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
      ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
    Events:
      Type    Reason             Age  From                       Message
      ----    ------             ---  ----                       -------
      Normal  SuccessfulRescale  16s  horizontal-pod-autoscaler  New size: 2; reason: cpu resource utilization (percentage of request) above target
  • Enter the following command to confirm two pods are running.

    kubectl --kubeconfig=kube_configxxx.yml get pods

You should receive output similar to what follows.

    NAME                           READY   STATUS    RESTARTS   AGE
    hello-world-54764dfbf8-5pfdr   1/1     Running   0          3s
    hello-world-54764dfbf8-q6l82   1/1     Running   0          6h

Use your load testing tool to scale up to three pods when cpu_system usage rises above the target.

  • Enter the following command.

    kubectl --kubeconfig=kube_configxxx.yml describe hpa

You should receive output similar to what follows:

    Name: hello-world
    Namespace: default
    Labels: <none>
    Annotations: <none>
    CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200
    Reference: Deployment/hello-world
    Metrics: ( current / target )
      resource memory on pods: 8374272 / 100Mi
      "cpu_system" on pods: 27m / 20m
      resource cpu on pods (as a percentage of request): 71% (357m) / 50%
    Min replicas: 1
    Max replicas: 10
    Conditions:
      Type            Status  Reason              Message
      ----            ------  ------              -------
      AbleToScale     True    SucceededRescale    the HPA controller was able to update the target scale to 3
      ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
      ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
    Events:
      Type    Reason             Age  From                       Message
      ----    ------             ---  ----                       -------
      Normal  SuccessfulRescale  3m   horizontal-pod-autoscaler  New size: 2; reason: cpu resource utilization (percentage of request) above target
      Normal  SuccessfulRescale  3s   horizontal-pod-autoscaler  New size: 3; reason: pods metric cpu_system above target
  • Enter the following command to confirm three pods are running.

    kubectl --kubeconfig=kube_configxxx.yml get pods

You should receive output similar to what follows:

    NAME                           READY   STATUS    RESTARTS   AGE
    hello-world-54764dfbf8-5pfdr   1/1     Running   0          3m
    hello-world-54764dfbf8-m2hrl   1/1     Running   0          1s
    hello-world-54764dfbf8-q6l82   1/1     Running   0          6h

Use your load testing tool to upscale to four pods based on CPU usage. horizontal-pod-autoscaler-upscale-delay is set to three minutes by default.

  • Enter the following command.

    kubectl --kubeconfig=kube_configxxx.yml describe hpa

You should receive output similar to what follows.

    Name: hello-world
    Namespace: default
    Labels: <none>
    Annotations: <none>
    CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200
    Reference: Deployment/hello-world
    Metrics: ( current / target )
      resource memory on pods: 8374272 / 100Mi
      "cpu_system" on pods: 27m / 20m
      resource cpu on pods (as a percentage of request): 71% (357m) / 50%
    Min replicas: 1
    Max replicas: 10
    Conditions:
      Type            Status  Reason              Message
      ----            ------  ------              -------
      AbleToScale     True    SucceededRescale    the HPA controller was able to update the target scale to 3
      ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
      ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
    Events:
      Type    Reason             Age  From                       Message
      ----    ------             ---  ----                       -------
      Normal  SuccessfulRescale  5m   horizontal-pod-autoscaler  New size: 2; reason: cpu resource utilization (percentage of request) above target
      Normal  SuccessfulRescale  3m   horizontal-pod-autoscaler  New size: 3; reason: pods metric cpu_system above target
      Normal  SuccessfulRescale  4s   horizontal-pod-autoscaler  New size: 4; reason: cpu resource utilization (percentage of request) above target
  • Enter the following command to confirm four pods are running.

    kubectl --kubeconfig=kube_configxxx.yml get pods

You should receive output similar to what follows.

    NAME                           READY   STATUS    RESTARTS   AGE
    hello-world-54764dfbf8-2p9xb   1/1     Running   0          5m
    hello-world-54764dfbf8-5pfdr   1/1     Running   0          2m
    hello-world-54764dfbf8-m2hrl   1/1     Running   0          1s
    hello-world-54764dfbf8-q6l82   1/1     Running   0          6h

Use your load testing tool to scale down to one pod when all metrics have been below target for horizontal-pod-autoscaler-downscale-delay.

  • Enter the following command.

    kubectl --kubeconfig=kube_configxxx.yml describe hpa

You should receive similar output to what follows.

    Name: hello-world
    Namespace: default
    Labels: <none>
    Annotations: <none>
    CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200
    Reference: Deployment/hello-world
    Metrics: ( current / target )
      resource memory on pods: 8101888 / 100Mi
      "cpu_system" on pods: 8m / 20m
      resource cpu on pods (as a percentage of request): 0% (0) / 50%
    Min replicas: 1
    Max replicas: 10
    Conditions:
      Type            Status  Reason              Message
      ----            ------  ------              -------
      AbleToScale     True    SucceededRescale    the HPA controller was able to update the target scale to 1
      ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from memory resource
      ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
    Events:
      Type    Reason             Age  From                       Message
      ----    ------             ---  ----                       -------
      Normal  SuccessfulRescale  10m  horizontal-pod-autoscaler  New size: 2; reason: cpu resource utilization (percentage of request) above target
      Normal  SuccessfulRescale  8m   horizontal-pod-autoscaler  New size: 3; reason: pods metric cpu_system above target
      Normal  SuccessfulRescale  5m   horizontal-pod-autoscaler  New size: 4; reason: cpu resource utilization (percentage of request) above target
      Normal  SuccessfulRescale  13s  horizontal-pod-autoscaler  New size: 1; reason: All metrics below target
  • Enter the following command to confirm a single pod is running.

    kubectl --kubeconfig=kube_configxxx.yml get pods

You should receive output similar to what follows.

    NAME                           READY   STATUS    RESTARTS   AGE
    hello-world-54764dfbf8-q6l82   1/1     Running   0          6h

Conclusion

Horizontal Pod Autoscaling is a great way to automate the number of pods you have deployed for maximum efficiency. You can use it to adapt your deployment's scale to real service load and to meet service level agreements.

By adjusting the horizontal-pod-autoscaler-downscale-delay and horizontal-pod-autoscaler-upscale-delay flag values, you can adjust how long kube-controller waits before scaling your pods up or down.

We've demonstrated how to set up an HPA based on custom metrics provided by Prometheus. We used the cpu_system metric as an example, but you can use other metrics that monitor service performance, like http_request_number, http_response_time, etc.
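For example, swapping in a hypothetical http_requests metric only requires changing the pods metric entry of the autoscaling/v2beta1 manifest; the metric name and target value here are illustrative and depend on what your Prometheus adapter exposes:

      # Hypothetical custom metric: scale up when average requests/sec
      # per pod exceeds 100
      - type: Pods
        pods:
          metricName: http_requests
          targetAverageValue: 100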

Note: To facilitate HPA use, we are working to integrate metrics-server as an addon in RKE cluster deployments. This feature is included in RKE v0.1.9-rc2 for testing, but is not officially supported yet. Official support is expected in RKE v0.1.9.