K3s Hardening Guides

This document provides prescriptive guidance for hardening a production K3s cluster before provisioning it with Rancher. It outlines the configurations and controls required to satisfy the Center for Internet Security (CIS) Kubernetes Benchmark.

Note:

This hardening guide describes how to secure the nodes in your cluster. We recommend that you follow this guide before you install Kubernetes.

This hardening guide is intended to be used for K3s clusters and is associated with the following versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:

| Rancher Version | CIS Benchmark Version | Kubernetes Version |
| --- | --- | --- |
| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 |
| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 |
| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 up to v1.26 |

Note:

In Benchmark v1.7, the --protect-kernel-defaults parameter (check 4.2.6) is no longer required; it was removed by CIS.

For more details on how to evaluate a hardened K3s cluster against the official CIS benchmark, refer to the K3s self-assessment guides for specific Kubernetes and CIS benchmark versions.

K3s passes a number of the Kubernetes CIS controls without modification, because it applies several security mitigations by default. There are some notable exceptions that require manual intervention to fully comply with the CIS Benchmark:

  1. K3s does not modify the host operating system. Any host-level modifications need to be done manually.
  2. Certain CIS policy controls for NetworkPolicies and Pod Security Standards (PodSecurityPolicies on v1.24 and older) restrict cluster functionality, so you must opt into having K3s configure them. Add the appropriate options to your command-line flags or configuration file to enable the admission plugins, and manually apply the appropriate policies. See the sections below for details.

The first section (1.1) of the CIS Benchmark primarily focuses on pod manifest permissions and ownership. Since everything in the distribution is packaged in a single binary, this section does not apply to the core components of K3s.

Host-level Requirements

Ensure protect-kernel-defaults is set

v1.25 and Newer

The protect-kernel-defaults flag is no longer required as of CIS Benchmark v1.7.

v1.24 and Older

protect-kernel-defaults is a kubelet flag that causes the kubelet to exit if the required kernel parameters are unset, or are set to values that differ from the kubelet's defaults.

The protect-kernel-defaults flag can be set in the cluster configuration in Rancher:

```yaml
spec:
  rkeConfig:
    machineSelectorConfig:
    - config:
        protect-kernel-defaults: true
```
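If you run K3s directly rather than provisioning it through Rancher, the same flag can be set in the K3s configuration file. A minimal sketch, assuming the default configuration file location of /etc/rancher/k3s/config.yaml:

```yaml
# /etc/rancher/k3s/config.yaml (standalone K3s, not managed by Rancher)
protect-kernel-defaults: true
```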

Set kernel parameters

The following sysctl configuration is recommended for all node types in the cluster. Set the following parameters in /etc/sysctl.d/90-kubelet.conf:

```
vm.panic_on_oom=0
vm.overcommit_memory=1
kernel.panic=10
kernel.panic_on_oops=1
```

Run sudo sysctl -p /etc/sysctl.d/90-kubelet.conf to apply the settings.

This configuration must be applied before the kubelet flag is set; otherwise, K3s will fail to start.
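After applying the settings, the live values can be read back from /proc/sys to confirm they match. A minimal check script (a sketch; it assumes a Linux host and the values from the configuration above):

```bash
#!/bin/bash
# Read each recommended kernel parameter back from /proc/sys and report
# whether the live value matches the value set in 90-kubelet.conf.
for kv in vm.panic_on_oom=0 vm.overcommit_memory=1 kernel.panic=10 kernel.panic_on_oops=1; do
  key=${kv%%=*}
  want=${kv#*=}
  # Translate the sysctl key (dots) into its /proc/sys path (slashes).
  have=$(cat "/proc/sys/$(echo "$key" | tr . /)")
  if [ "$have" = "$want" ]; then
    echo "ok: ${key} = ${have}"
  else
    echo "MISMATCH: ${key} = ${have} (expected ${want})"
  fi
done
```

Any MISMATCH line indicates a parameter that will cause the kubelet to exit once protect-kernel-defaults is enabled.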

Kubernetes Runtime Requirements

The CIS Benchmark runtime requirements center on pod security (via PSP or PSA), network policies, and API Server audit logs.

By default, K3s does not include any pod security or network policies. However, K3s ships with a controller that enforces any network policies you create. By default, K3s enables both the PodSecurity and NodeRestriction admission controllers, among others.

Pod Security

v1.25 and Newer

K3s v1.25 and newer support Pod Security admission (PSA) for controlling pod security.

You can specify the PSA configuration by setting the defaultPodSecurityAdmissionConfigurationTemplateName field in the cluster configuration in Rancher:

```yaml
spec:
  defaultPodSecurityAdmissionConfigurationTemplateName: rancher-restricted
```

The rancher-restricted template is provided by Rancher to enforce the highly-restrictive Kubernetes upstream Restricted profile with best practices for pod hardening.

v1.24 and Older

K3s v1.24 and older support Pod Security Policy (PSP) for controlling pod security.

You can enable PSPs by passing the following flags in the cluster configuration in Rancher:

```yaml
spec:
  rkeConfig:
    machineGlobalConfig:
      kube-apiserver-arg:
      - enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount
```

This maintains the NodeRestriction plugin and enables the PodSecurityPolicy admission plugin.

Once you enable PSPs, you can apply a policy to satisfy the necessary controls described in section 5.2 of the CIS Benchmark.

Note:

These are manual checks in the CIS Benchmark. The CIS scan flags these results as warnings, because the cluster operator must inspect them manually.

Here is an example of a compliant PSP:

```yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-psp
spec:
  privileged: false                # CIS - 5.2.1
  allowPrivilegeEscalation: false  # CIS - 5.2.5
  requiredDropCapabilities:        # CIS - 5.2.7/8/9
  - ALL
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  - 'csi'
  - 'persistentVolumeClaim'
  - 'ephemeral'
  hostNetwork: false               # CIS - 5.2.4
  hostIPC: false                   # CIS - 5.2.3
  hostPID: false                   # CIS - 5.2.2
  runAsUser:
    rule: 'MustRunAsNonRoot'       # CIS - 5.2.6
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  readOnlyRootFilesystem: false
```

For the example PSP to be effective, we need to create a ClusterRole and a ClusterRoleBinding. We also need to include a "system unrestricted" policy for system-level pods that require additional privileges, and an additional policy that allows the sysctls necessary for ServiceLB to function fully.

```yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-psp
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
  - ALL
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  - 'csi'
  - 'persistentVolumeClaim'
  - 'ephemeral'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  readOnlyRootFilesystem: false
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: system-unrestricted-psp
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  fsGroup:
    rule: RunAsAny
  hostIPC: true
  hostNetwork: true
  hostPID: true
  hostPorts:
  - max: 65535
    min: 0
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: svclb-psp
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  allowPrivilegeEscalation: false
  allowedCapabilities:
  - NET_ADMIN
  allowedUnsafeSysctls:
  - net.ipv4.ip_forward
  - net.ipv6.conf.all.forwarding
  fsGroup:
    rule: RunAsAny
  hostPorts:
  - max: 65535
    min: 0
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:restricted-psp
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  verbs:
  - use
  resourceNames:
  - restricted-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:system-unrestricted-psp
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - system-unrestricted-psp
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:svclb-psp
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - svclb-psp
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:svc-local-path-provisioner-psp
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - system-unrestricted-psp
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:svc-coredns-psp
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - system-unrestricted-psp
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:svc-cis-operator-psp
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - system-unrestricted-psp
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default:restricted-psp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:restricted-psp
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system-unrestricted-node-psp-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:system-unrestricted-psp
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system-unrestricted-svc-acct-psp-rolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:system-unrestricted-psp
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: svclb-psp-rolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:svclb-psp
subjects:
- kind: ServiceAccount
  name: svclb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: svc-local-path-provisioner-psp-rolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:svc-local-path-provisioner-psp
subjects:
- kind: ServiceAccount
  name: local-path-provisioner-service-account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: svc-coredns-psp-rolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:svc-coredns-psp
subjects:
- kind: ServiceAccount
  name: coredns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: svc-cis-operator-psp-rolebinding
  namespace: cis-operator-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:svc-cis-operator-psp
subjects:
- kind: ServiceAccount
  name: cis-operator-serviceaccount
```

The policies presented above can be placed in a file named policy.yaml in the /var/lib/rancher/k3s/server/manifests directory. Both the policy file and its directory hierarchy must be created before K3s starts. Restrictive access permissions are recommended to avoid leaking potentially sensitive information.

```bash
sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/manifests
```

Note:

Critical Kubernetes add-ons, such as CNI, DNS, and Ingress, run as pods in the kube-system namespace. Therefore, this namespace has a less restrictive policy, so that these components can run properly.

Network Policies

CIS requires that all namespaces apply a network policy that reasonably limits traffic into namespaces and pods.

Note:

This is a manual check in the CIS Benchmark. The CIS scan flags the result as a warning, because the cluster operator must inspect it manually.

The network policies can be placed in the policy.yaml file in the /var/lib/rancher/k3s/server/manifests directory. If the directory was not already created as part of the PSP setup above, create it first:

```bash
sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/manifests
```

Here is an example of a compliant network policy:

```yaml
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: intra-namespace
  namespace: kube-system
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-system
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: intra-namespace
  namespace: default
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: default
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: intra-namespace
  namespace: kube-public
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-public
```

These restrictions block DNS traffic unless it is purposely allowed. Below is a network policy that allows DNS-related traffic:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-network-dns-policy
  namespace: <NAMESPACE>
spec:
  ingress:
  - ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
  podSelector:
    matchLabels:
      k8s-app: kube-dns
  policyTypes:
  - Ingress
```

The metrics-server and the Traefik ingress controller are likewise blocked by default unless network policies are created to allow access:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-metrics-server
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      k8s-app: metrics-server
  ingress:
  - {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-svclbtraefik-ingress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      svccontroller.k3s.cattle.io/svcname: traefik
  ingress:
  - {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-traefik-v121-ingress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: traefik
  ingress:
  - {}
  policyTypes:
  - Ingress
```

Note:

You must manage network policies as normal for any additional namespaces you create.
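For example, a common baseline for a newly created namespace is a default-deny ingress policy, on top of which specific traffic can be re-allowed. The namespace name my-app below is a hypothetical placeholder:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app # hypothetical namespace; replace with your own
spec:
  podSelector: {} # selects every pod in the namespace
  policyTypes:
  - Ingress # no ingress rules are listed, so all ingress traffic is denied
```

Additional policies, such as the intra-namespace and DNS policies shown earlier, can then re-allow the traffic the namespace actually needs.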

API Server audit configuration

CIS requirements 1.2.22 to 1.2.25 relate to configuring audit logs for the API Server. K3s does not create the log directory or an audit policy by default, as auditing requirements are specific to each user's policies and environment.

If you need a log directory, create it before starting K3s. Restrictive access permissions are recommended to avoid leaking sensitive information:

```bash
sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/logs
```

The following is a starter audit policy that logs request metadata. Write this policy to a file named audit.yaml in the /var/lib/rancher/k3s/server directory. Detailed information about audit policy configuration for the API Server can be found in the official Kubernetes documentation.

```yaml
---
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
```
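The starter policy can be tuned to your environment. As one hedged example (not required by the benchmark), the following variant drops high-volume read-only requests while keeping Metadata-level logging for everything else:

```yaml
---
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Optional: skip read-only requests to reduce log volume.
- level: None
  verbs: ["get", "list", "watch"]
# Log request metadata for all other requests.
- level: Metadata
```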

Further configurations are also needed to pass CIS checks. These are not configured by default in K3s, because they vary based on your environment and needs:

  • Ensure that the --audit-log-path argument is set.
  • Ensure that the --audit-log-maxage argument is set to 30 or as appropriate.
  • Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate.
  • Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate.

Combined, to enable and configure audit logs, add the following lines to the K3s cluster configuration file in Rancher:

```yaml
spec:
  rkeConfig:
    machineGlobalConfig:
      kube-apiserver-arg:
      - audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml # CIS 3.2.1
      - audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log # CIS 1.2.18
      - audit-log-maxage=30 # CIS 1.2.19
      - audit-log-maxbackup=10 # CIS 1.2.20
      - audit-log-maxsize=100 # CIS 1.2.21
```

Controller Manager Requirements

CIS requirement 1.3.1 checks for garbage collection settings in the Controller Manager. Garbage collection is important to ensure sufficient resource availability and avoid degraded performance and availability. Based on your system resources and tests, choose an appropriate threshold value to activate garbage collection.

This can be remediated by setting the following configuration in the K3s cluster file in Rancher. The value below is only an example. The appropriate threshold value is specific to each user’s environment.

```yaml
spec:
  rkeConfig:
    machineGlobalConfig:
      kube-controller-manager-arg:
      - terminated-pod-gc-threshold=10 # CIS 1.3.1
```

Configure default Service Account

Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account.

For CIS requirement 5.1.5, the default service account must be configured so that it does not provide a service account token and does not have any explicit rights assignments.

This can be remediated by updating the automountServiceAccountToken field to false for the default service account in each namespace.

K3s does not do this automatically for the default service accounts in the built-in namespaces (kube-system, kube-public, kube-node-lease, and default).

Save the following configuration to a file called account_update.yaml.

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
automountServiceAccountToken: false
```

Create a bash script file called account_update.sh. Be sure to chmod +x account_update.sh so the script has execute permissions.

```bash
#!/bin/bash -e
for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
done
```

Run the script every time a new namespace (and therefore a new default service account) is added to your cluster.

Reference Hardened K3s Template Configuration

The following reference template configuration is used in Rancher to create a hardened K3s custom cluster based on each CIS control in this guide. This reference does not include other required cluster configuration directives, which vary based on your environment.

v1.25 and Newer

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: # Define cluster name
spec:
  defaultPodSecurityAdmissionConfigurationTemplateName: rancher-restricted
  enableNetworkPolicy: true
  kubernetesVersion: # Define K3s version
  rkeConfig:
    machineGlobalConfig:
      kube-apiserver-arg:
      - enable-admission-plugins=NodeRestriction,ServiceAccount # CIS 1.2.15, 1.2.13
      - audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml # CIS 3.2.1
      - audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log # CIS 1.2.18
      - audit-log-maxage=30 # CIS 1.2.19
      - audit-log-maxbackup=10 # CIS 1.2.20
      - audit-log-maxsize=100 # CIS 1.2.21
      - request-timeout=300s # CIS 1.2.22
      - service-account-lookup=true # CIS 1.2.24
      kube-controller-manager-arg:
      - terminated-pod-gc-threshold=10 # CIS 1.3.1
      secrets-encryption: true
    machineSelectorConfig:
    - config:
        kubelet-arg:
        - make-iptables-util-chains=true # CIS 4.2.7
```
v1.24 and Older

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: # Define cluster name
spec:
  enableNetworkPolicy: true
  kubernetesVersion: # Define K3s version
  rkeConfig:
    machineGlobalConfig:
      kube-apiserver-arg:
      - enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount # CIS 1.2.15, 5.2, 1.2.13
      - audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml # CIS 3.2.1
      - audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log # CIS 1.2.18
      - audit-log-maxage=30 # CIS 1.2.19
      - audit-log-maxbackup=10 # CIS 1.2.20
      - audit-log-maxsize=100 # CIS 1.2.21
      - request-timeout=300s # CIS 1.2.22
      - service-account-lookup=true # CIS 1.2.24
      kube-controller-manager-arg:
      - terminated-pod-gc-threshold=10 # CIS 1.3.1
      secrets-encryption: true
    machineSelectorConfig:
    - config:
        kubelet-arg:
        - make-iptables-util-chains=true # CIS 4.2.7
        protect-kernel-defaults: true # CIS 4.2.6
```

Conclusion

If you have followed this guide, your K3s custom cluster provisioned by Rancher will be configured to pass the CIS Kubernetes Benchmark. You can review the K3s self-assessment guides to understand how each benchmark control was verified, and how you can do the same on your cluster.