CIS Hardening Guide


This document provides prescriptive guidance for hardening a production installation of K3s. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).

K3s applies a number of security mitigations by default and passes many of the Kubernetes CIS controls without modification. There are some notable exceptions that require manual intervention to fully comply with the CIS Benchmark:

  1. K3s will not modify the host operating system. Any host-level modifications will need to be done manually.
  2. Certain CIS policy controls for NetworkPolicies and PodSecurityStandards (PodSecurityPolicies on v1.24 and older) will restrict the functionality of the cluster. You must opt in by adding the appropriate options (enabling admission plugins) to your command-line flags or configuration file, and by manually applying the appropriate policies. Further details are presented in the sections below.

The first section (1.1) of the CIS Benchmark concerns itself primarily with pod manifest permissions and ownership. K3s doesn’t utilize these for the core components since everything is packaged into a single binary.

Host-level Requirements

There are two areas of host-level requirements: kernel parameters and etcd process/directory configuration. These are outlined in this section.

Ensure protect-kernel-defaults is set

This is a kubelet flag that will cause the kubelet to exit if the required kernel parameters are unset or are set to values that are different from the kubelet’s defaults.

Note: protect-kernel-defaults is exposed as a top-level flag for K3s.
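For example, the flag can be set on the command line or in the K3s configuration file; a minimal sketch, assuming the default configuration file location:

```yaml
# /etc/rancher/k3s/config.yaml (default location)
protect-kernel-defaults: true
```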

Set kernel parameters

Create a file called /etc/sysctl.d/90-kubelet.conf and add the snippet below. Then run sysctl -p /etc/sysctl.d/90-kubelet.conf.

```
vm.panic_on_oom=0
vm.overcommit_memory=1
kernel.panic=10
kernel.panic_on_oops=1
kernel.keys.root_maxbytes=25000000
```

Kubernetes Runtime Requirements

The runtime requirements to comply with the CIS Benchmark center around pod security (via PSP or PSA), network policies, and API Server audit logs. These are outlined in this section.

By default, K3s does not include any pod security or network policies. However, K3s ships with a controller that will enforce network policies, if any are created. K3s does not enable auditing by default, so audit log configuration and an audit policy must be created manually. By default, K3s runs with both the PodSecurity and NodeRestriction admission controllers enabled, among others.

Pod Security

  • v1.25 and Newer
  • v1.24 and Older

K3s v1.25 and newer support Pod Security Admission (PSA) for controlling pod security. PSA is enabled by passing the following flag to the K3s server:

```
--kube-apiserver-arg="admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml"
```

The policy should be written to a file named psa.yaml in the /var/lib/rancher/k3s/server directory.

Here is an example of a compliant PSA configuration:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "restricted"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces: [kube-system, cis-operator-system]
```

K3s v1.24 and older support Pod Security Policies (PSPs) for controlling pod security. PSPs are enabled by passing the following flag to the K3s server:

```
--kube-apiserver-arg="enable-admission-plugins=NodeRestriction,PodSecurityPolicy"
```

This will have the effect of maintaining the NodeRestriction plugin as well as enabling the PodSecurityPolicy.

When PSPs are enabled, a policy can be applied to satisfy the necessary controls described in section 5.2 of the CIS Benchmark.

Here is an example of a compliant PSP:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-psp
spec:
  privileged: false                # CIS - 5.2.1
  allowPrivilegeEscalation: false  # CIS - 5.2.5
  requiredDropCapabilities:        # CIS - 5.2.7/8/9
  - ALL
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  - 'csi'
  - 'persistentVolumeClaim'
  - 'ephemeral'
  hostNetwork: false               # CIS - 5.2.4
  hostIPC: false                   # CIS - 5.2.3
  hostPID: false                   # CIS - 5.2.2
  runAsUser:
    rule: 'MustRunAsNonRoot'       # CIS - 5.2.6
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  readOnlyRootFilesystem: false
```

For the above PSP to be effective, we need to create a ClusterRole and a ClusterRoleBinding. We also need a “system unrestricted” policy for system-level pods that require additional privileges, and a policy that allows the sysctls necessary for servicelb to function properly.

Combining the configuration above with the Network Policy described in the next section, a single file can be placed in the /var/lib/rancher/k3s/server/manifests directory. Here is an example of a policy.yaml file:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-psp
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
  - ALL
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  - 'csi'
  - 'persistentVolumeClaim'
  - 'ephemeral'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  readOnlyRootFilesystem: false
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: system-unrestricted-psp
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  fsGroup:
    rule: RunAsAny
  hostIPC: true
  hostNetwork: true
  hostPID: true
  hostPorts:
  - max: 65535
    min: 0
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: svclb-psp
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  allowPrivilegeEscalation: false
  allowedCapabilities:
  - NET_ADMIN
  allowedUnsafeSysctls:
  - net.ipv4.ip_forward
  - net.ipv6.conf.all.forwarding
  fsGroup:
    rule: RunAsAny
  hostPorts:
  - max: 65535
    min: 0
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:restricted-psp
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  verbs:
  - use
  resourceNames:
  - restricted-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:system-unrestricted-psp
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - system-unrestricted-psp
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:svclb-psp
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - svclb-psp
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default:restricted-psp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:restricted-psp
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system-unrestricted-node-psp-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:system-unrestricted-psp
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system-unrestricted-svc-acct-psp-rolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:system-unrestricted-psp
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: svclb-psp-rolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:svclb-psp
subjects:
- kind: ServiceAccount
  name: svclb
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: intra-namespace
  namespace: kube-system
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-system
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: intra-namespace
  namespace: default
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: default
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: intra-namespace
  namespace: kube-public
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-public
```

Note: Critical Kubernetes add-ons such as CNI, DNS, and Ingress run as pods in the kube-system namespace. Therefore, this namespace has a less restrictive policy so that these components can run properly.

NetworkPolicies

CIS requires that all namespaces have a network policy applied that reasonably limits traffic into namespaces and pods.

Network policies should be placed in the /var/lib/rancher/k3s/server/manifests directory, where they will automatically be deployed on startup.

Here is an example of a compliant network policy.

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: intra-namespace
  namespace: kube-system
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-system
```

With the above restrictions applied, DNS will be blocked unless purposely allowed. Below is a network policy that allows DNS traffic.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-network-dns-policy
  namespace: <NAMESPACE>
spec:
  ingress:
  - ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
  podSelector:
    matchLabels:
      k8s-app: kube-dns
  policyTypes:
  - Ingress
```
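After substituting a concrete namespace for the placeholder, the policy can be applied and inspected with kubectl; a sketch (the file name is illustrative):

```shell
kubectl apply -f default-network-dns-policy.yaml
# Verify the policy was created, again substituting the namespace:
kubectl describe networkpolicy default-network-dns-policy -n <NAMESPACE>
```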

The metrics-server and Traefik ingress controller will be blocked by default if network policies are not created to allow access. Traefik v1 as packaged in K3s version 1.20 and below uses different labels than Traefik v2. Ensure that you only use the sample yaml below that is associated with the version of Traefik present on your cluster.

For K3s v1.21 and newer:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-metrics-server
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      k8s-app: metrics-server
  ingress:
  - {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-svclbtraefik-ingress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      svccontroller.k3s.cattle.io/svcname: traefik
  ingress:
  - {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-traefik-v121-ingress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: traefik
  ingress:
  - {}
  policyTypes:
  - Ingress
```

For K3s v1.20 and older:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-metrics-server
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      k8s-app: metrics-server
  ingress:
  - {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-svclbtraefik-ingress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      svccontroller.k3s.cattle.io/svcname: traefik
  ingress:
  - {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-traefik-v120-ingress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app: traefik
  ingress:
  - {}
  policyTypes:
  - Ingress
```


Operators must manage network policies as normal for additional namespaces that are created.

API Server audit configuration

CIS requirements 1.2.22 to 1.2.25 are related to configuring audit logs for the API Server. K3s does not create the log directory or audit policy by default, as auditing requirements are specific to each user’s policies and environment.

The log directory should ideally be created before starting K3s. Restrictive permissions are recommended to avoid leaking potentially sensitive information.

```shell
sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/logs
```

A starter audit policy to log request metadata is provided below. The policy should be written to a file named audit.yaml in the /var/lib/rancher/k3s/server directory. Detailed information about policy configuration for the API server can be found in the Kubernetes documentation.

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
```
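Logging every request at Metadata level can be verbose in practice. As a sketch of how the policy can be tuned, using standard upstream audit API fields rather than anything K3s-specific:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
# Log each request only once it completes, not when it is first received.
omitStages:
- "RequestReceived"
rules:
# Skip read-only requests made by the kubelets themselves.
- level: None
  userGroups: ["system:nodes"]
  verbs: ["get", "list", "watch"]
# Keep secrets at Metadata level so secret payloads never reach the log.
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Log everything else at Metadata level.
- level: Metadata
```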

Both configurations must be passed as arguments to the API Server as:

```
--kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log'
--kube-apiserver-arg='audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml'
```

If the configurations are created after K3s is installed, they must be added to K3s’ systemd service in /etc/systemd/system/k3s.service.

```
ExecStart=/usr/local/bin/k3s \
    server \
    '--kube-apiserver-arg=audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log' \
    '--kube-apiserver-arg=audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml' \
```

K3s must be restarted to load the new configuration.

```shell
sudo systemctl daemon-reload
sudo systemctl restart k3s.service
```
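Once K3s is back up, it can be confirmed that audit events are being written:

```shell
sudo tail /var/lib/rancher/k3s/server/logs/audit.log
```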

Configuration for Kubernetes Components

The configuration below should be placed in the K3s configuration file (/etc/rancher/k3s/config.yaml by default), and contains all the necessary remediations to harden the Kubernetes components.

For K3s v1.25 and newer:

```yaml
protect-kernel-defaults: true
secrets-encryption: true
kube-apiserver-arg:
- 'admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml'
- 'audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log'
- 'audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml'
- 'audit-log-maxage=30'
- 'audit-log-maxbackup=10'
- 'audit-log-maxsize=100'
kube-controller-manager-arg:
- 'terminated-pod-gc-threshold=10'
- 'use-service-account-credentials=true'
kubelet-arg:
- 'streaming-connection-idle-timeout=5m'
- 'make-iptables-util-chains=true'
```

For K3s v1.24 and older:

```yaml
protect-kernel-defaults: true
secrets-encryption: true
kube-apiserver-arg:
- 'enable-admission-plugins=NodeRestriction,PodSecurityPolicy,NamespaceLifecycle,ServiceAccount'
- 'audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log'
- 'audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml'
- 'audit-log-maxage=30'
- 'audit-log-maxbackup=10'
- 'audit-log-maxsize=100'
kube-controller-manager-arg:
- 'terminated-pod-gc-threshold=10'
- 'use-service-account-credentials=true'
kubelet-arg:
- 'streaming-connection-idle-timeout=5m'
- 'make-iptables-util-chains=true'
```

Control Plane Execution and Arguments

Listed below are the K3s control plane components and the arguments they are given at start, by default. Commented to their right is the CIS 1.6 control that they satisfy.

```
kube-apiserver
--advertise-port=6443
--allow-privileged=true
--anonymous-auth=false # 1.2.1
--api-audiences=unknown
--authorization-mode=Node,RBAC
--bind-address=127.0.0.1
--cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs
--client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt # 1.2.31
--enable-admission-plugins=NodeRestriction,PodSecurityPolicy # 1.2.17
--etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt # 1.2.32
--etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt # 1.2.29
--etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key # 1.2.29
--etcd-servers=https://127.0.0.1:2379
--insecure-port=0 # 1.2.19
--kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt
--kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt
--kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key
--profiling=false # 1.2.21
--proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt
--proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
--requestheader-allowed-names=system:auth-proxy
--requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--secure-port=6444 # 1.2.20
--service-account-issuer=k3s
--service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key # 1.2.28
--service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key
--service-cluster-ip-range=10.43.0.0/16
--storage-backend=etcd3
--tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt # 1.2.30
--tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key # 1.2.30
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
```

```
kube-controller-manager
--address=127.0.0.1
--allocate-node-cidrs=true
--bind-address=127.0.0.1 # 1.3.7
--cluster-cidr=10.42.0.0/16
--cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt
--cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key
--kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig
--port=10252
--profiling=false # 1.3.2
--root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt # 1.3.5
--secure-port=0
--service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key # 1.3.4
--use-service-account-credentials=true # 1.3.3
```

```
kube-scheduler
--address=127.0.0.1
--bind-address=127.0.0.1 # 1.4.2
--kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig
--port=10251
--profiling=false # 1.4.1
--secure-port=0
```

```
kubelet
--address=0.0.0.0
--anonymous-auth=false # 4.2.1
--authentication-token-webhook=true
--authorization-mode=Webhook # 4.2.2
--cgroup-driver=cgroupfs
--client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt # 4.2.3
--cloud-provider=external
--cluster-dns=10.43.0.10
--cluster-domain=cluster.local
--cni-bin-dir=/var/lib/rancher/k3s/data/223e6420f8db0d8828a8f5ed3c44489bb8eb47aa71485404f8af8c462a29bea3/bin
--cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d
--container-runtime-endpoint=/run/k3s/containerd/containerd.sock
--container-runtime=remote
--containerd=/run/k3s/containerd/containerd.sock
--eviction-hard=imagefs.available<5%,nodefs.available<5%
--eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10%
--fail-swap-on=false
--healthz-bind-address=127.0.0.1
--hostname-override=hostname01
--kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig
--kubelet-cgroups=/systemd/system.slice
--node-labels=
--pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests
--protect-kernel-defaults=true # 4.2.6
--read-only-port=0 # 4.2.4
--resolv-conf=/run/systemd/resolve/resolv.conf
--runtime-cgroups=/systemd/system.slice
--serialize-image-pulls=false
--tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt # 4.2.10
--tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key # 4.2.10
```

Additional information about CIS requirements 1.2.22 to 1.2.25 is presented below.

Known Issues

The following are controls that K3s currently does not pass by default. Each gap will be explained, along with a note clarifying whether it can be passed through manual operator intervention, or if it will be addressed in a future release of K3s.

Control 1.2.15

Ensure that the admission control plugin NamespaceLifecycle is set.

Rationale: Setting the admission control policy to NamespaceLifecycle ensures that objects cannot be created in non-existent namespaces, and that namespaces undergoing termination are not used for creating new objects. This is recommended to enforce the integrity of the namespace termination process and the availability of new objects.

This can be remediated by adding NamespaceLifecycle to the value of the enable-admission-plugins= argument, passed to k3s server via --kube-apiserver-arg=. An example can be found in the configuration section above.

Control 1.2.16

Ensure that the admission control plugin PodSecurityPolicy is set.

Rationale: A Pod Security Policy is a cluster-level resource that controls the actions that a pod can perform and what it has the ability to access. PodSecurityPolicy objects define a set of conditions that a pod must run with in order to be accepted into the system. Pod Security Policies comprise settings and strategies that control the security features a pod has access to, and hence must be used to control pod access permissions.

This can be remediated by adding PodSecurityPolicy to the value of the enable-admission-plugins= argument, passed to k3s server via --kube-apiserver-arg=. An example can be found in the configuration section above.

Control 1.2.22

Ensure that the --audit-log-path argument is set.

Rationale: Auditing the Kubernetes API Server provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. Even though Kubernetes currently provides only basic audit capabilities, it should be enabled. You can enable it by setting an appropriate audit log path.

This can be remediated by passing this argument as a value to the --kube-apiserver-arg= argument to k3s server. An example can be found in the configuration section above.

Control 1.2.23

Ensure that the --audit-log-maxage argument is set to 30 or as appropriate.

Rationale: Retaining logs for at least 30 days ensures that you can go back in time and investigate or correlate any events. Set your audit log retention period to 30 days or as per your business requirements.

This can be remediated by passing this argument as a value to the --kube-apiserver-arg= argument to k3s server. An example can be found in the configuration section above.

Control 1.2.24

Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate.

Rationale: Kubernetes automatically rotates the log files. Retaining old log files ensures that you have sufficient log data available for carrying out any investigation or correlation. For example, if you have set the file size to 100 MB and the number of old log files to keep as 10, you would have approximately 1 GB of log data that you could potentially use for your analysis.

This can be remediated by passing this argument as a value to the --kube-apiserver-arg= argument to k3s server. An example can be found in the configuration section above.

Control 1.2.25

Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate.

Rationale: Kubernetes automatically rotates the log files. Retaining old log files ensures that you have sufficient log data available for carrying out any investigation or correlation. If you have set the file size to 100 MB and the number of old log files to keep as 10, you would have approximately 1 GB of log data that you could potentially use for your analysis.

This can be remediated by passing this argument as a value to the --kube-apiserver-arg= argument to k3s server. An example can be found in the configuration section above.

Control 1.2.26

Ensure that the --request-timeout argument is set as appropriate.

Rationale: Setting a global request timeout allows extending the API Server request timeout limit to a duration appropriate to the user’s connection speed. By default, it is set to 60 seconds, which might be problematic on slower connections, making cluster resources inaccessible once the data volume for requests exceeds what can be transmitted in 60 seconds. But setting this timeout limit too high can exhaust the API Server resources, making it prone to denial-of-service attacks. Hence, it is recommended to set this limit as appropriate and change the default limit of 60 seconds only if needed.

This can be remediated by passing this argument as a value to the --kube-apiserver-arg= argument to k3s server.

Control 1.2.27

Ensure that the --service-account-lookup argument is set to true.

Rationale: If --service-account-lookup is not enabled, the apiserver only verifies that the authentication token is valid and does not validate that the service account token mentioned in the request is actually present in etcd. This allows a service account token to be used even after the corresponding service account has been deleted. This is an example of a time-of-check to time-of-use security issue.

This can be remediated by passing this argument as a value to the --kube-apiserver-arg= argument to k3s server.

Control 1.2.33

Ensure that the --encryption-provider-config argument is set as appropriate.

Rationale: etcd is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted at rest to avoid any disclosures.

Detailed steps on how to configure secrets encryption in K3s are available in Secrets Encryption.

Control 1.2.34

Ensure that encryption providers are appropriately configured.

Rationale: Where etcd encryption is used, it is important to ensure that the appropriate set of encryption providers is used. Currently, aescbc, kms, and secretbox are likely to be appropriate options.

This can be remediated by passing a valid configuration to k3s as outlined above. Detailed steps on how to configure secrets encryption in K3s are available in Secrets Encryption.
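Recent K3s releases also ship a secrets-encrypt subcommand that reports the state of secrets encryption; assuming a version that includes it, the current state can be inspected with:

```shell
sudo k3s secrets-encrypt status
```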

Control 1.3.1

Ensure that the --terminated-pod-gc-threshold argument is set as appropriate.

Rationale: Garbage collection is important to ensure sufficient resource availability and to avoid degraded performance and availability. In the worst case, the system might crash or just be unusable for a long period of time. The current setting for garbage collection is 12,500 terminated pods, which might be too high for your system to sustain. Based on your system resources and tests, choose an appropriate threshold value to activate garbage collection.

This can be remediated by passing this argument as a value to the --kube-controller-manager-arg= argument to k3s server. An example can be found in the configuration section above.

Control 3.2.1

Ensure that a minimal audit policy is created.

Rationale: Logging is an important detective control for all systems, to detect potential unauthorized access.

This can be remediated by applying the remediations for controls 1.2.22 - 1.2.25 and verifying their efficacy.

Control 4.2.7

Ensure that the --make-iptables-util-chains argument is set to true.

Rationale: Kubelets can automatically manage the required changes to iptables based on how you choose your networking options for the pods. It is recommended to let kubelets manage the changes to iptables. This ensures that the iptables configuration remains in sync with the pods’ networking configuration. Manually configuring iptables with dynamic pod network configuration changes might hamper communication between pods/containers and the outside world. You might have iptables rules that are too restrictive or too open.

This can be remediated by passing this argument as a value to the --kubelet-arg= argument to k3s server. An example can be found in the configuration section above.

Control 5.1.5

Ensure that default service accounts are not actively used

Rationale: Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod.

Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account.

The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.

This can be remediated by updating the automountServiceAccountToken field to false for the default service account in each namespace.

For default service accounts in the built-in namespaces (kube-system, kube-public, kube-node-lease, and default), K3s does not automatically do this. You can manually update this field on these service accounts to pass the control.
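For example, assuming kubectl access to the cluster, the default service accounts in the built-in namespaces can be updated as follows:

```shell
# Disable token automounting on the default service account in each built-in namespace.
for ns in default kube-system kube-public kube-node-lease; do
  kubectl patch serviceaccount default -n "$ns" \
    -p '{"automountServiceAccountToken": false}'
done
```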

Conclusion

If you have followed this guide, your K3s cluster will be configured to comply with the CIS Kubernetes Benchmark. You can review the CIS Benchmark Self-Assessment Guide to understand the expectations of each of the benchmark’s checks and how you can do the same on your cluster.