RKE Hardening Guides

This document provides prescriptive guidance for hardening an RKE cluster intended for production before provisioning it with Rancher. It outlines the configurations and controls required to address the Center for Internet Security (CIS) Kubernetes Benchmark controls.

Note: This hardening guide describes how to secure the nodes in your cluster. We recommend following this guide before you install Kubernetes.

This hardening guide is intended to be used for RKE clusters and is associated with the following versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:

| Rancher Version | CIS Benchmark Version | Kubernetes Version |
| --------------- | --------------------- | ------------------ |
| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 |
| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 |
| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 up to v1.26 |

Note:

  • In Benchmark v1.24 and later, check 4.1.7 ("Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)") might fail, because /etc/kubernetes/ssl/kube-ca.pem is set to 644 by default.
  • In Benchmark v1.7, the --protect-kernel-defaults parameter (check 4.2.6) is no longer required, as the check was removed by CIS.

For more details on how to evaluate a hardened RKE cluster against the official CIS benchmark, refer to the RKE self-assessment guides for specific Kubernetes and CIS benchmark versions.

Host-level requirements

Configure Kernel Runtime Parameters

The following sysctl configuration is recommended for all node types in the cluster. Set the following parameters in /etc/sysctl.d/90-kubelet.conf:

```
vm.overcommit_memory=1
vm.panic_on_oom=0
kernel.panic=10
kernel.panic_on_oops=1
```

Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to apply the settings.
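
To confirm the settings are active, you can read the parameters back; a quick check (output shown assuming no later sysctl file overrides these values):

```bash
# Query each hardened parameter; every value should match 90-kubelet.conf.
sysctl vm.overcommit_memory vm.panic_on_oom kernel.panic kernel.panic_on_oops
# Expected output:
#   vm.overcommit_memory = 1
#   vm.panic_on_oom = 0
#   kernel.panic = 10
#   kernel.panic_on_oops = 1
```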

Configure etcd user and group

A user account and group for the etcd service must be set up before installing RKE.

Create etcd user and group

To create the etcd user and group, run the following console commands. The commands below use 52034 for the uid and gid for example purposes; any valid, unused uid or gid can be used in lieu of 52034.

```bash
groupadd --gid 52034 etcd
useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd --shell /usr/sbin/nologin
```
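
You can then verify the account; a quick sanity check, with output assuming the example 52034 uid and gid (the home directory shown is the distro default and is not actually created):

```bash
# Confirm the uid, gid, and non-login shell of the etcd service account.
id etcd
# uid=52034(etcd) gid=52034(etcd) groups=52034(etcd)
getent passwd etcd
# etcd:x:52034:52034:etcd service account:/home/etcd:/usr/sbin/nologin
```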

When deploying RKE through its cluster configuration file (cluster.yml), update the uid and gid of the etcd user:

```yaml
services:
  etcd:
    gid: 52034
    uid: 52034
```

Kubernetes runtime requirements

Configure default Service Account

Set automountServiceAccountToken to false for default service accounts

Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.

For each namespace, including default and kube-system on a standard RKE install, the default service account must include this value:

```yaml
automountServiceAccountToken: false
```

Save the following configuration to a file called account_update.yaml.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
automountServiceAccountToken: false
```

Create a bash script file called account_update.sh. Be sure to chmod +x account_update.sh so the script has execute permissions.

```bash
#!/bin/bash -e
for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
done
```

Execute this script to apply the account_update.yaml configuration to the default service account in all namespaces.
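
To spot-check the result, something like the following prints the automountServiceAccountToken value of every default service account (a verification sketch; each line should report false once the patch has been applied):

```bash
#!/bin/bash -e
# Print each namespace with its default service account's automount setting.
for namespace in $(kubectl get namespaces -o=jsonpath="{.items[*]['metadata.name']}"); do
  echo "${namespace}: $(kubectl get serviceaccount default -n ${namespace} -o=jsonpath='{.automountServiceAccountToken}')"
done
```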

Configure Network Policy

Ensure that all Namespaces have Network Policies defined

Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. A network policy is a specification of how groups of pods are allowed to communicate with each other and with other network endpoints.

Network policies are namespace scoped. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace, all traffic is allowed into and out of the pods in that namespace. To enforce network policies, a container network interface (CNI) plugin must be enabled. This guide uses Canal to provide the policy enforcement. Additional information about CNI providers can be found here.

Once a CNI provider is enabled on a cluster, a default network policy can be applied. For reference purposes, a permissive example is provided below. If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as "isolated"), you can create a policy that explicitly allows all traffic in that namespace. Save the following configuration as default-allow-all.yaml. Additional documentation about network policies can be found on the Kubernetes site.

Caution: This network policy is just an example and is not recommended for production use.

```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-allow-all
spec:
  podSelector: {}
  ingress:
  - {}
  egress:
  - {}
  policyTypes:
  - Ingress
  - Egress
```

Create a bash script file called apply_networkPolicy_to_all_ns.sh. Be sure to chmod +x apply_networkPolicy_to_all_ns.sh so the script has execute permissions.

```bash
#!/bin/bash -e
for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
  kubectl apply -f default-allow-all.yaml -n ${namespace}
done
```

Execute this script to apply the permissive NetworkPolicy in default-allow-all.yaml to all namespaces.
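
For production use, a default-deny posture is generally preferable: deny all traffic by default in each namespace and then add narrower policies per workload. A minimal sketch (the default-deny-all policy name and the <namespace> placeholder are illustrative, not part of the reference configuration):

```bash
# A policy that selects every pod but whitelists no traffic, so all ingress
# and egress in the namespace is denied until narrower policies are added.
kubectl apply -n <namespace> -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
EOF
```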

Known Limitations

  • Rancher exec shell and view logs for pods are not functional in a hardened setup if only a public IP is provided when registering custom nodes. This functionality requires a private IP to be provided when registering the custom nodes.
  • When setting default_pod_security_policy_template_id: to restricted or restricted-noroot, based on the pod security policies (PSP) provided by Rancher, Rancher creates RoleBindings and ClusterRoleBindings on the default service accounts. CIS check 5.1.5 requires that the default service accounts have no roles or cluster roles bound to them apart from the defaults. In addition, each default service account should be configured so that it does not provide a service account token and does not have any explicit rights assignments (see the audit sketch below).
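
One way to audit for such bindings is to list every RoleBinding and ClusterRoleBinding whose subjects include a service account named default; a sketch, assuming jq is available:

```bash
# List ClusterRoleBindings and RoleBindings that reference a "default"
# service account subject, printing kind, namespace ("-" if cluster-scoped),
# and binding name.
kubectl get clusterrolebindings,rolebindings -A -o json \
  | jq -r '.items[]
      | select(any(.subjects[]?; .kind == "ServiceAccount" and .name == "default"))
      | "\(.kind)\t\(.metadata.namespace // "-")\t\(.metadata.name)"'
```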

Reference Hardened RKE cluster.yml Configuration

The reference cluster.yml, used by the RKE CLI, provides the configuration needed to achieve a hardened installation of RKE. The RKE documentation provides additional details about the configuration items. This reference cluster.yml does not include the required nodes directive, which will vary depending on your environment. Documentation for node configuration in RKE can be found here.

The example cluster.yml configuration file contains an Admission Configuration policy in the services.kube-api.admission_configuration field. This sample policy contains the namespace exemptions necessary for an imported RKE cluster to run properly in Rancher, similar to Rancher’s pre-defined rancher-restricted policy.

If you prefer to use RKE’s default restricted policy, then leave the services.kube-api.admission_configuration field empty and set services.pod_security_configuration to restricted. See the RKE docs for more information.

v1.25 and Newer

Note: If you intend to import an RKE cluster into Rancher, please consult the documentation for how to configure the PSA to exempt Rancher system namespaces.

```yaml
# If you intend to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes: []
kubernetes_version: # Define RKE version
services:
  etcd:
    uid: 52034
    gid: 52034
  kube-api:
    secrets_encryption_config:
      enabled: true
    audit_log:
      enabled: true
    event_rate_limit:
      enabled: true
    # Leave `pod_security_configuration` out if you are setting a
    # custom policy in `admission_configuration`. Otherwise set
    # it to `restricted` to use RKE's pre-defined restricted policy,
    # and remove everything inside the `admission_configuration` field.
    #
    # pod_security_configuration: restricted
    #
    admission_configuration:
      apiVersion: apiserver.config.k8s.io/v1
      kind: AdmissionConfiguration
      plugins:
      - name: PodSecurity
        configuration:
          apiVersion: pod-security.admission.config.k8s.io/v1
          kind: PodSecurityConfiguration
          defaults:
            enforce: "restricted"
            enforce-version: "latest"
            audit: "restricted"
            audit-version: "latest"
            warn: "restricted"
            warn-version: "latest"
          exemptions:
            usernames: []
            runtimeClasses: []
            namespaces: [calico-apiserver,
                         calico-system,
                         cattle-alerting,
                         cattle-csp-adapter-system,
                         cattle-elemental-system,
                         cattle-epinio-system,
                         cattle-externalip-system,
                         cattle-fleet-local-system,
                         cattle-fleet-system,
                         cattle-gatekeeper-system,
                         cattle-global-data,
                         cattle-global-nt,
                         cattle-impersonation-system,
                         cattle-istio,
                         cattle-istio-system,
                         cattle-logging,
                         cattle-logging-system,
                         cattle-monitoring-system,
                         cattle-neuvector-system,
                         cattle-prometheus,
                         cattle-provisioning-capi-system,
                         cattle-resources-system,
                         cattle-sriov-system,
                         cattle-system,
                         cattle-ui-plugin-system,
                         cattle-windows-gmsa-system,
                         cert-manager,
                         cis-operator-system,
                         fleet-default,
                         ingress-nginx,
                         istio-system,
                         kube-node-lease,
                         kube-public,
                         kube-system,
                         longhorn-system,
                         rancher-alerting-drivers,
                         security-scan,
                         tigera-operator]
  kube-controller:
    extra_args:
      feature-gates: RotateKubeletServerCertificate=true
  kubelet:
    extra_args:
      feature-gates: RotateKubeletServerCertificate=true
    generate_serving_certificate: true
addons: |
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: default-allow-all
  spec:
    podSelector: {}
    ingress:
    - {}
    egress:
    - {}
    policyTypes:
    - Ingress
    - Egress
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: default
  automountServiceAccountToken: false
```
v1.24 and Older

```yaml
# If you intend to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes: []
kubernetes_version: # Define RKE version
services:
  etcd:
    uid: 52034
    gid: 52034
  kube-api:
    secrets_encryption_config:
      enabled: true
    audit_log:
      enabled: true
    event_rate_limit:
      enabled: true
    pod_security_policy: true
  kube-controller:
    extra_args:
      feature-gates: RotateKubeletServerCertificate=true
  kubelet:
    extra_args:
      feature-gates: RotateKubeletServerCertificate=true
      protect-kernel-defaults: true
    generate_serving_certificate: true
addons: |
  # Upstream Kubernetes restricted PSP policy
  # https://github.com/kubernetes/website/blob/564baf15c102412522e9c8fc6ef2b5ff5b6e766c/content/en/examples/policy/restricted-psp.yaml
  apiVersion: policy/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: restricted-noroot
  spec:
    privileged: false
    # Required to prevent escalations to root.
    allowPrivilegeEscalation: false
    requiredDropCapabilities:
    - ALL
    # Allow core volume types.
    volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use.
    - 'csi'
    - 'persistentVolumeClaim'
    - 'ephemeral'
    hostNetwork: false
    hostIPC: false
    hostPID: false
    runAsUser:
      # Require the container to run without root privileges.
      rule: 'MustRunAsNonRoot'
    seLinux:
      # This policy assumes the nodes are using AppArmor rather than SELinux.
      rule: 'RunAsAny'
    supplementalGroups:
      rule: 'MustRunAs'
      ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
    fsGroup:
      rule: 'MustRunAs'
      ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
    readOnlyRootFilesystem: false
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: psp:restricted-noroot
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - restricted-noroot
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: psp:restricted-noroot
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: psp:restricted-noroot
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
  ---
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: default-allow-all
  spec:
    podSelector: {}
    ingress:
    - {}
    egress:
    - {}
    policyTypes:
    - Ingress
    - Egress
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: default
  automountServiceAccountToken: false
```
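
Once the nodes directive is filled in for your environment, the cluster can be provisioned from this file with the RKE CLI, for example:

```bash
# Provision (or reconcile) the hardened cluster from the reference file.
rke up --config cluster.yml
```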

Reference Hardened RKE Cluster Template Configuration

The reference RKE cluster template provides the minimum configuration required to achieve a hardened installation of Kubernetes. RKE templates are used to provision Kubernetes and define Rancher settings. Follow the Rancher documentation for additional information about installing RKE and for RKE template details.

v1.25 and Newer
```yaml
#
# Cluster Config
#
default_pod_security_admission_configuration_template_name: rancher-restricted
enable_network_policy: true
local_cluster_auth_endpoint:
  enabled: true
name: # Define cluster name
#
# Rancher Config
#
rancher_kubernetes_engine_config:
  addon_job_timeout: 45
  authentication:
    strategy: x509|webhook
  kubernetes_version: # Define RKE version
  services:
    etcd:
      uid: 52034
      gid: 52034
    kube-api:
      audit_log:
        enabled: true
      event_rate_limit:
        enabled: true
      pod_security_policy: false
      secrets_encryption_config:
        enabled: true
    kube-controller:
      extra_args:
        feature-gates: RotateKubeletServerCertificate=true
        tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
    kubelet:
      extra_args:
        feature-gates: RotateKubeletServerCertificate=true
        tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
      generate_serving_certificate: true
    scheduler:
      extra_args:
        tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
```
v1.24 and Older

```yaml
#
# Cluster Config
#
default_pod_security_policy_template_id: restricted-noroot
enable_network_policy: true
local_cluster_auth_endpoint:
  enabled: true
name: # Define cluster name
#
# Rancher Config
#
rancher_kubernetes_engine_config:
  addon_job_timeout: 45
  authentication:
    strategy: x509|webhook
  kubernetes_version: # Define RKE version
  services:
    etcd:
      uid: 52034
      gid: 52034
    kube-api:
      audit_log:
        enabled: true
      event_rate_limit:
        enabled: true
      pod_security_policy: true
      secrets_encryption_config:
        enabled: true
    kube-controller:
      extra_args:
        feature-gates: RotateKubeletServerCertificate=true
        tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
    kubelet:
      extra_args:
        feature-gates: RotateKubeletServerCertificate=true
        protect-kernel-defaults: true
        tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
      generate_serving_certificate: true
    scheduler:
      extra_args:
        tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
```
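
To confirm that the tls-cipher-suites restrictions took effect on a running node, one option is to enumerate what a component's serving port actually offers; a sketch, assuming nmap (with its bundled ssl-enum-ciphers script) is available:

```bash
# Enumerate the TLS versions and ciphers offered by the kubelet's serving
# port (10250); only the configured cipher suites should be listed.
nmap --script ssl-enum-ciphers -p 10250 <node-ip>
```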

Conclusion

If you have followed this guide, your RKE custom cluster provisioned by Rancher will be configured to pass the CIS Kubernetes Benchmark. You can review our RKE self-assessment guides to understand how we verified each of the benchmarks and how you can do the same on your cluster.