K3s Self-Assessment Guide - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27

This document is a companion to the K3s Hardening Guide, which provides prescriptive guidance on how to harden K3s clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.

This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:

Rancher Version   CIS Benchmark Version   Kubernetes Version
Rancher v2.7      Benchmark v1.7          Kubernetes v1.25/v1.26/v1.27

This document is intended for Rancher operators, security teams, auditors, and decision-makers.

For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7. You can download the benchmark from the Center for Internet Security (CIS) after creating a free account.

Testing Methodology

Each control in the CIS Kubernetes Benchmark was evaluated against a K3s cluster that was configured according to the accompanying hardening guide.

Where control audits differ from the original CIS benchmark, the audit commands specific to K3s are provided for testing.

These are the possible results for each control:

  • Pass - The K3s cluster passes the audit outlined in the benchmark.
  • Not Applicable - The control is not applicable to K3s because of how it is designed to operate. The remediation section explains why.
  • Warn - The control is manual in the CIS benchmark, and its applicability depends on the cluster's use case or some other factor that the cluster operator must determine. These controls have been evaluated to ensure K3s does not prevent their implementation, but no further configuration or auditing of the cluster has been performed.

This guide assumes that K3s is running as a systemd unit. Your installation may vary. Adjust the audit commands to fit your scenario.
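
A quick way to confirm this assumption before running the audits (a sketch; it assumes the default service name k3s, so use k3s-agent on agent nodes or adjust for custom unit names):

# Sketch: verify K3s runs as a systemd unit and that journald holds its logs,
# since many of the audit commands below read the journal.
systemctl is-active k3s
journalctl -D /var/log/journal -u k3s --no-pager | tail -n 5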

Note: This guide only covers automated (previously called scored) tests.

Controls

1.1 Control Plane Node Configuration Files

1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)

Result: Not Applicable

Remediation: Not applicable. K3s embeds the API server in the k3s process rather than running it as a static pod, so the pod specification file does not exist and the benchmark remediation (chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml) does not apply.

1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)

Result: Not Applicable

Remediation: Not applicable. K3s embeds the API server in the k3s process rather than running it as a static pod, so the pod specification file does not exist and the benchmark remediation (chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml) does not apply.

1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)

Result: Not Applicable

Remediation: Not applicable. K3s embeds the controller manager in the k3s process rather than running it as a static pod, so the pod specification file does not exist and the benchmark remediation (chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml) does not apply.

1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)

Result: Not Applicable

Remediation: Not applicable. K3s embeds the controller manager in the k3s process rather than running it as a static pod, so the pod specification file does not exist and the benchmark remediation (chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml) does not apply.

1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)

Result: Not Applicable

Remediation: Not applicable. K3s embeds the scheduler in the k3s process rather than running it as a static pod, so the pod specification file does not exist and the benchmark remediation (chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml) does not apply.

1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)

Result: Not Applicable

Remediation: Not applicable. K3s embeds the scheduler in the k3s process rather than running it as a static pod, so the pod specification file does not exist and the benchmark remediation (chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml) does not apply.

1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)

Result: Not Applicable

Remediation: Not applicable. K3s runs etcd embedded in the k3s process rather than as a static pod, so the pod specification file does not exist and the benchmark remediation (chmod 600 /etc/kubernetes/manifests/etcd.yaml) does not apply.

1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)

Result: Not Applicable

Remediation: Not applicable. K3s runs etcd embedded in the k3s process rather than as a static pod, so the pod specification file does not exist and the benchmark remediation (chown root:root /etc/kubernetes/manifests/etcd.yaml) does not apply.

1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Automated)

Result: Not Applicable

Remediation: Not applicable. K3s deploys and manages its own CNI configuration files, so the generic remediation (chmod 600 <path/to/cni/files>) does not apply.

1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)

Result: Not Applicable

Remediation: Not applicable. K3s deploys and manages its own CNI configuration files, so the generic remediation (chown root:root <path/to/cni/files>) does not apply.

1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)

Result: Pass

Remediation: On the etcd server node, get the etcd data directory, passed as an argument --data-dir, from the command 'ps -ef | grep etcd'. Run the below command (based on the etcd data directory found above). For example, chmod 700 /var/lib/etcd
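
On K3s the etcd data directory differs from the upstream default. A minimal remediation sketch, assuming the default K3s location (adjust if --data-dir was overridden):

# Sketch: tighten and verify permissions on the default K3s etcd data directory.
ETCD_DIR=/var/lib/rancher/k3s/server/db/etcd
[ -d "$ETCD_DIR" ] && chmod 700 "$ETCD_DIR"
stat -c %a "$ETCD_DIR"   # expect 700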

Audit Script: check_for_k3s_etcd.sh

#!/bin/bash
# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
# before it checks the requirement
set -eE
handle_error() {
  echo "false"
}
trap 'handle_error' ERR
if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
  case $1 in
    "1.1.11")
      echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
    "1.2.29")
      echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
    "2.1")
      echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
    "2.2")
      echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
    "2.3")
      echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
    "2.4")
      echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
    "2.5")
      echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
    "2.6")
      echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
    "2.7")
      echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
  esac
else
  # If another database is running, return whatever is required to pass the scan
  case $1 in
    "1.1.11")
      echo "700";;
    "1.2.29")
      echo "--etcd-certfile AND --etcd-keyfile";;
    "2.1")
      echo "cert-file AND key-file";;
    "2.2")
      echo "--client-cert-auth=true";;
    "2.3")
      echo "false";;
    "2.4")
      echo "peer-cert-file AND peer-key-file";;
    "2.5")
      echo "--client-cert-auth=true";;
    "2.6")
      echo "--peer-auto-tls=false";;
    "2.7")
      echo "--trusted-ca-file";;
  esac
fi

Audit Execution:

./check_for_k3s_etcd.sh 1.1.11

Expected Result:

'700' is equal to '700'

Returned Value:

700

1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)

Result: Not Applicable

Remediation: Not applicable. K3s runs etcd embedded in the k3s process and does not create a dedicated etcd user or group, so the benchmark remediation (chown etcd:etcd /var/lib/etcd) does not apply.

1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)

Result: Not Applicable

Remediation: Run the below command (based on the file location on your system) on the control plane node. For example, chmod 600 /var/lib/rancher/k3s/server/cred/admin.kubeconfig

1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)

Result: Pass

Remediation: Run the below command (based on the file location on your system) on the control plane node. For example, chown root:root /etc/kubernetes/admin.conf

Audit:

/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi'

Expected Result:

'root:root' is equal to 'root:root'

Returned Value:

root:root
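
Since K3s keeps its credential kubeconfigs in a single directory, a sketch (assuming the default data directory) that checks ownership and permissions for controls 1.1.14 through 1.1.18 in one pass:

# Sketch: report name, owner:group, and mode for every K3s credential kubeconfig.
for f in /var/lib/rancher/k3s/server/cred/*.kubeconfig; do
  stat -c '%n %U:%G %a' "$f"   # expect root:root and 600 for each file
done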

1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)

Result: Pass

Remediation: Run the below command (based on the file location on your system) on the control plane node. For example, chmod 600 /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig

Audit:

/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'

Expected Result:

permissions has permissions 600, expected 600 or more restrictive

Returned Value:

permissions=600

1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)

Result: Pass

Remediation: Run the below command (based on the file location on your system) on the control plane node. For example, chown root:root /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig

Audit:

/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'

Expected Result:

'root:root' is present

Returned Value:

root:root

1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)

Result: Pass

Remediation: Run the below command (based on the file location on your system) on the control plane node. For example, chmod 600 /var/lib/rancher/k3s/server/cred/controller.kubeconfig

Audit:

/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/controller.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/controller.kubeconfig; fi'

Expected Result:

permissions has permissions 600, expected 600 or more restrictive

Returned Value:

permissions=600

1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)

Result: Pass

Remediation: Run the below command (based on the file location on your system) on the control plane node. For example, chown root:root /var/lib/rancher/k3s/server/cred/controller.kubeconfig

Audit:

stat -c %U:%G /var/lib/rancher/k3s/server/cred/controller.kubeconfig

Expected Result:

'root:root' is equal to 'root:root'

Returned Value:

root:root

1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)

Result: Pass

Remediation: Run the below command (based on the file location on your system) on the control plane node. For example, chown -R root:root /var/lib/rancher/k3s/server/tls

Audit:

stat -c %U:%G /var/lib/rancher/k3s/server/tls

Expected Result:

'root:root' is present

Returned Value:

root:root

1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)

Result: Warn

Remediation: Run the below command (based on the file location on your system) on the control plane node. For example, chmod -R 600 /etc/kubernetes/pki/*.crt

Audit:

stat -c %n %a /var/lib/rancher/k3s/server/tls/*.crt

1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)

Result: Warn

Remediation: Run the below command (based on the file location on your system) on the control plane node. For example, chmod -R 600 /etc/kubernetes/pki/*.key

Audit:

stat -c %n %a /var/lib/rancher/k3s/server/tls/*.key
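
A remediation sketch for 1.1.20 and 1.1.21 adapted to the K3s TLS directory used by the two audits above (the upstream /etc/kubernetes/pki paths do not exist on K3s); verify that cluster components still start after tightening:

# Sketch: restrict K3s-managed certificate and key files, then re-run the audits.
find /var/lib/rancher/k3s/server/tls -maxdepth 1 \( -name '*.crt' -o -name '*.key' \) -exec chmod 600 {} +
stat -c '%n %a' /var/lib/rancher/k3s/server/tls/*.crt /var/lib/rancher/k3s/server/tls/*.key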

1.2 API Server

1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)

Result: Pass

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the below parameter. --anonymous-auth=false
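
K3s has no kube-apiserver.yaml manifest; flags reach the embedded API server through the K3s configuration file instead. A sketch using the kube-apiserver-arg key (the flag shown is the one this control requires; treat the file edit itself as illustrative):

# Sketch: pass the flag to the embedded apiserver via the K3s config file.
cat >> /etc/rancher/k3s/config.yaml <<'EOF'
kube-apiserver-arg:
  - anonymous-auth=false
EOF
systemctl restart k3s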

Audit:

journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth'

Expected Result:

'--anonymous-auth' is equal to 'false'

Returned Value:

Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
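
Because the returned value is one long log line, a small helper (hypothetical, not part of the benchmark tooling) makes a single flag easier to inspect; the same pattern applies to the other journalctl-based audits below:

# Sketch: print one apiserver flag from the most recent k3s startup log line.
check_apiserver_flag() {
  journalctl -D /var/log/journal -u k3s \
    | grep 'Running kube-apiserver' | tail -n1 \
    | tr ' ' '\n' | grep -- "^--${1}=" || echo "--${1} not set"
}
check_apiserver_flag anonymous-auth   # expect --anonymous-auth=false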

1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)

Result: Pass

Remediation: Follow the documentation and configure alternate mechanisms for authentication. Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and remove the --token-auth-file=<filename> parameter.

Audit:

/bin/ps -ef | grep containerd | grep -v grep

Expected Result:

'--token-auth-file' is not present

Returned Value:

root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 663 1 0 Sep11 ? 00:00:08 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd root 3021 1 0 Sep11 ? 00:00:29 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4790d392966915d995e666002c56ed4cce6dd86f305ce2ee390547a1fcbf6c82 -address /run/k3s/containerd/containerd.sock root 3035 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9d73e6a160ccde6c7c7d4ba0df3b9f696e3de2ebfc0f19a1dcbdf13aea496427 -address /run/k3s/containerd/containerd.sock root 3235 1 0 Sep11 ? 00:00:31 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 587f6221ee9f36c877231ada8d816a799dfda186332c33378d1eb16c72cdc87d -address /run/k3s/containerd/containerd.sock root 4435 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a74c9ec7d99785c2f2d4e6826aa80c22eb8b38249e8f99679ece00a818e9b7b3 -address /run/k3s/containerd/containerd.sock root 4985 1 0 Sep11 ? 00:00:53 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b0c9784dbe0fcbe8be10c857976e70ec84a208cb814e87b5ca085a02d434f8c -address /run/k3s/containerd/containerd.sock root 5056 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id f3231ff35f18056e74eda14907f296be15f7ea1c6ae5ab7904e27d4d18183301 -address /run/k3s/containerd/containerd.sock root 5868 1 0 Sep11 ? 00:00:27 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3e908c4d0b10df275bdef6f72fbcfa09517d11cf749236ad020364e14d77bc93 -address /run/k3s/containerd/containerd.sock root 6158 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id bec5780a5c73fa3154aa3b5ee26cdf23202db821205893e7e66ae17e6103e97b -address /run/k3s/containerd/containerd.sock root 7366 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a81b78845bdcaef710314c93e5ea0d0617f37a8929472f7b570ab90c6667f57f -address /run/k3s/containerd/containerd.sock root 97274 1 0 16:13 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id c90652c935d9e79af45b9a9ac2b4fe315e2a761a0525737fdb95c680123a164c -address /run/k3s/containerd/containerd.sock root 98309 1 0 16:16 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 431a2763636488efd7d104c228d41f4f2ecd8c06a7fc375d8977ab0d238936a8 -address /run/k3s/containerd/containerd.sock root 98493 1 0 16:16 ?
00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b480d2fa55c8cc0105ec1902887040c017f03d5b1eb6c73c24bb9d523ad9b37 -address /run/k3s/containerd/containerd.sock
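
The benchmark audit above lists the containerd processes; since the API server runs inside the k3s process itself, a complementary sketch (not part of the benchmark) is to confirm the flag is also absent from the k3s server invocation:

# Sketch: count occurrences of the flag in the k3s server command line; expect 0.
ps -ef | grep 'k3s server' | grep -v grep | grep -c 'token-auth-file' || true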

1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)

Result: Pass

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and remove the DenyServiceExternalIPs from enabled admission plugins.

Audit:

/bin/ps -ef | grep containerd | grep -v grep

Expected Result:

'--enable-admission-plugins' is present OR '--enable-admission-plugins' is not present

Returned Value:

root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 663 1 0 Sep11 ? 00:00:08 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd root 3021 1 0 Sep11 ? 00:00:29 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4790d392966915d995e666002c56ed4cce6dd86f305ce2ee390547a1fcbf6c82 -address /run/k3s/containerd/containerd.sock root 3035 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9d73e6a160ccde6c7c7d4ba0df3b9f696e3de2ebfc0f19a1dcbdf13aea496427 -address /run/k3s/containerd/containerd.sock root 3235 1 0 Sep11 ? 00:00:31 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 587f6221ee9f36c877231ada8d816a799dfda186332c33378d1eb16c72cdc87d -address /run/k3s/containerd/containerd.sock root 4435 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a74c9ec7d99785c2f2d4e6826aa80c22eb8b38249e8f99679ece00a818e9b7b3 -address /run/k3s/containerd/containerd.sock root 4985 1 0 Sep11 ? 00:00:53 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b0c9784dbe0fcbe8be10c857976e70ec84a208cb814e87b5ca085a02d434f8c -address /run/k3s/containerd/containerd.sock root 5056 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id f3231ff35f18056e74eda14907f296be15f7ea1c6ae5ab7904e27d4d18183301 -address /run/k3s/containerd/containerd.sock root 5868 1 0 Sep11 ? 00:00:27 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3e908c4d0b10df275bdef6f72fbcfa09517d11cf749236ad020364e14d77bc93 -address /run/k3s/containerd/containerd.sock root 6158 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id bec5780a5c73fa3154aa3b5ee26cdf23202db821205893e7e66ae17e6103e97b -address /run/k3s/containerd/containerd.sock root 7366 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a81b78845bdcaef710314c93e5ea0d0617f37a8929472f7b570ab90c6667f57f -address /run/k3s/containerd/containerd.sock root 97274 1 0 16:13 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id c90652c935d9e79af45b9a9ac2b4fe315e2a761a0525737fdb95c680123a164c -address /run/k3s/containerd/containerd.sock root 98309 1 0 16:16 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 431a2763636488efd7d104c228d41f4f2ecd8c06a7fc375d8977ab0d238936a8 -address /run/k3s/containerd/containerd.sock root 98493 1 0 16:16 ?
00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b480d2fa55c8cc0105ec1902887040c017f03d5b1eb6c73c24bb9d523ad9b37 -address /run/k3s/containerd/containerd.sock

1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)

Result: Pass

Remediation: Follow the Kubernetes documentation and set up the TLS connection between the apiserver and kubelets. Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the kubelet client certificate and key parameters as below. --kubelet-client-certificate=<path/to/client-certificate-file> --kubelet-client-key=<path/to/client-key-file>

Audit:

journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'

Expected Result:

'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present

Returned Value:

Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)

Result: Not Applicable

Remediation: Follow the Kubernetes documentation and set up the TLS connection between the apiserver and kubelets. Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the --kubelet-certificate-authority parameter to the path to the cert file for the certificate authority: --kubelet-certificate-authority=<ca-string>. Permissive - when generating serving certificates, this functionality could break in conjunction with hostname overrides, which are required for certain cloud providers.

1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)

Result: Pass

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow. One such example could be as below. --authorization-mode=RBAC

Audit:

journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'

Expected Result:

'--authorization-mode' does not have 'AlwaysAllow'

Returned Value:

Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.2.7 Ensure that the --authorization-mode argument includes Node (Automated)

Result: Pass

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the --authorization-mode parameter to a value that includes Node. --authorization-mode=Node,RBAC

Audit:

journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'

Expected Result:

'--authorization-mode' has 'Node'

Returned Value:

Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated)

Result: Pass

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the —authorization-mode parameter to a value that includes RBAC, for example --authorization-mode=Node,RBAC.

Audit:

journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'

Expected Result:

'--authorization-mode' has 'RBAC'

Returned Value:

Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual)

Result: Warn

Remediation: Follow the Kubernetes documentation and set the desired limits in a configuration file. Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml and set the below parameters. --enable-admission-plugins=…,EventRateLimit,… --admission-control-config-file=<path/to/configuration/file>
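
A configuration sketch for K3s (file paths and limit values are illustrative assumptions; the AdmissionConfiguration and EventRateLimit formats come from the Kubernetes documentation):

# Sketch: write an admission configuration that enables EventRateLimit.
mkdir -p /etc/rancher/k3s/config
cat > /etc/rancher/k3s/config/eventratelimit.yaml <<'EOF'
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
  - type: Server
    qps: 5000
    burst: 20000
EOF
cat > /etc/rancher/k3s/config/admission.yaml <<'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: EventRateLimit
    path: /etc/rancher/k3s/config/eventratelimit.yaml
EOF
# Then point the embedded apiserver at it via kube-apiserver-arg in /etc/rancher/k3s/config.yaml:
#   enable-admission-plugins=...,EventRateLimit,...
#   admission-control-config-file=/etc/rancher/k3s/config/admission.yaml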

Audit:

journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'

Expected Result:

'--enable-admission-plugins' has 'EventRateLimit'

Returned Value:

Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)

Result: Pass

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a value that does not include AlwaysAdmit.

Audit:

journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'

Expected Result:

'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present

Returned Value:

Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual)

Result: Warn

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the --enable-admission-plugins parameter to include AlwaysPullImages. --enable-admission-plugins=…,AlwaysPullImages,…
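
After adding AlwaysPullImages through the same kube-apiserver-arg mechanism shown under 1.2.1, a verification sketch:

# Sketch: confirm the enabled plugin list now includes AlwaysPullImages.
journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 \
  | tr ' ' '\n' | grep -- '^--enable-admission-plugins='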

Audit:

/bin/ps -ef | grep containerd | grep -v grep

Expected Result:

'--enable-admission-plugins' is present

Returned Value:

root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 663 1 0 Sep11 ? 00:00:08 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd root 3021 1 0 Sep11 ? 00:00:29 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4790d392966915d995e666002c56ed4cce6dd86f305ce2ee390547a1fcbf6c82 -address /run/k3s/containerd/containerd.sock root 3035 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9d73e6a160ccde6c7c7d4ba0df3b9f696e3de2ebfc0f19a1dcbdf13aea496427 -address /run/k3s/containerd/containerd.sock root 3235 1 0 Sep11 ? 00:00:31 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 587f6221ee9f36c877231ada8d816a799dfda186332c33378d1eb16c72cdc87d -address /run/k3s/containerd/containerd.sock root 4435 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a74c9ec7d99785c2f2d4e6826aa80c22eb8b38249e8f99679ece00a818e9b7b3 -address /run/k3s/containerd/containerd.sock root 4985 1 0 Sep11 ? 00:00:53 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b0c9784dbe0fcbe8be10c857976e70ec84a208cb814e87b5ca085a02d434f8c -address /run/k3s/containerd/containerd.sock root 5056 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id f3231ff35f18056e74eda14907f296be15f7ea1c6ae5ab7904e27d4d18183301 -address /run/k3s/containerd/containerd.sock root 5868 1 0 Sep11 ? 00:00:27 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3e908c4d0b10df275bdef6f72fbcfa09517d11cf749236ad020364e14d77bc93 -address /run/k3s/containerd/containerd.sock root 6158 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id bec5780a5c73fa3154aa3b5ee26cdf23202db821205893e7e66ae17e6103e97b -address /run/k3s/containerd/containerd.sock root 7366 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a81b78845bdcaef710314c93e5ea0d0617f37a8929472f7b570ab90c6667f57f -address /run/k3s/containerd/containerd.sock root 97274 1 0 16:13 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id c90652c935d9e79af45b9a9ac2b4fe315e2a761a0525737fdb95c680123a164c -address /run/k3s/containerd/containerd.sock root 98309 1 0 16:16 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 431a2763636488efd7d104c228d41f4f2ecd8c06a7fc375d8977ab0d238936a8 -address /run/k3s/containerd/containerd.sock root 98493 1 0 16:16 ?
00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b480d2fa55c8cc0105ec1902887040c017f03d5b1eb6c73c24bb9d523ad9b37 -address /run/k3s/containerd/containerd.sock

1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)

Result: Not Applicable

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the --enable-admission-plugins parameter to include SecurityContextDeny, unless PodSecurityPolicy is already in place. --enable-admission-plugins=…,SecurityContextDeny,… Permissive - Enabling Pod Security Policy can cause applications to unexpectedly fail.

1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated)

Result: Pass

Remediation: Follow the documentation and create ServiceAccount objects as per your environment. Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and ensure that the --disable-admission-plugins parameter is set to a value that does not include ServiceAccount.

Audit:

journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep

Expected Result:

'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present

Returned Value:

Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)

Result: Pass

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the --disable-admission-plugins parameter to ensure it does not include NamespaceLifecycle.

Audit:

journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep

Expected Result:

'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present

Returned Value:

Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated)

Result: pass

Remediation: Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets. Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the --enable-admission-plugins parameter to a value that includes NodeRestriction. --enable-admission-plugins=...,NodeRestriction,...
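
K3s has no kube-apiserver.yaml manifest to edit; API server flags are instead passed through the K3s server configuration. As a minimal sketch, assuming the default config file location /etc/rancher/k3s/config.yaml and the kube-apiserver-arg passthrough option:

    # /etc/rancher/k3s/config.yaml (sketch -- merge with any existing entries)
    kube-apiserver-arg:
      - "enable-admission-plugins=NodeRestriction,ServiceAccount"

After changing the file, restart the service (for example, systemctl restart k3s) and re-run the audit below.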

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'

Expected Result:

  1. '--enable-admission-plugins' has 'NodeRestriction'

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.2.16 Ensure that the --secure-port argument is not set to 0 (Note: this recommendation is obsolete and will be deleted per the consensus process) (Automated)

Result: pass

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and either remove the --secure-port parameter or set it to a different (non-zero) desired port.

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'secure-port'

Expected Result:

  1. '--secure-port' is greater than 0 OR '--secure-port' is not present

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.2.17 Ensure that the --profiling argument is set to false (Automated)

Result: pass

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the below parameter. --profiling=false

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'profiling'

Expected Result:

  1. '--profiling' is equal to 'false'

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.2.18 Ensure that the --audit-log-path argument is set (Automated)

Result: Not Applicable

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the --audit-log-path parameter to a suitable path and file where you would like audit logs to be written, for example, --audit-log-path=/var/log/apiserver/audit.log Permissive.

1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)

Result: Not Applicable

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the --audit-log-maxage parameter to 30 or as an appropriate number of days, for example, --audit-log-maxage=30 Permissive.

1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)

Result: Not Applicable

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate value. For example, --audit-log-maxbackup=10 Permissive.

1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)

Result: Not Applicable

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB. For example, to set it as 100 MB, --audit-log-maxsize=100 Permissive.
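
Controls 1.2.18 through 1.2.21 are rated Not Applicable/Permissive here, yet the hardened configuration evaluated in this guide does set all four flags, as the "Running kube-apiserver" journal lines above show. A sketch of the equivalent K3s passthrough configuration, using the same assumed config file and mechanism as earlier:

    # /etc/rancher/k3s/config.yaml (sketch of the audit-log settings visible in
    # the journal output above)
    kube-apiserver-arg:
      - "audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log"
      - "audit-log-maxage=30"
      - "audit-log-maxbackup=10"
      - "audit-log-maxsize=100"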

1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)

Result: Not Applicable

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml and set the below parameter as appropriate and if needed. For example, --request-timeout=300s Permissive.

1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)

Result: pass

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the below parameter. --service-account-lookup=true Alternatively, you can delete the --service-account-lookup parameter from this file so that the default takes effect.
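
The audit below prints the entire kube-apiserver command line. To isolate just this flag, a narrower variant (a convenience, not part of the benchmark audit) is:

    journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -o 'service-account-lookup=[^ "]*'

On a hardened cluster this should print service-account-lookup=true.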

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep

Expected Result:

  1. '--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)

Result: Not Applicable

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the --service-account-key-file parameter to the public key file for service accounts. For example, --service-account-key-file=<filename>

1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)

Result: pass

Remediation: Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the etcd certificate and key file parameters. --etcd-certfile=<path/to/client-certificate-file> --etcd-keyfile=<path/to/client-key-file>

Audit Script: check_for_k3s_etcd.sh

    #!/bin/bash
    # This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
    # before it checks the requirement
    set -eE
    handle_error() {
      echo "false"
    }
    trap 'handle_error' ERR
    if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
      case $1 in
        "1.1.11")
          echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
        "1.2.29")
          echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
        "2.1")
          echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
        "2.2")
          echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
        "2.3")
          echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
        "2.4")
          echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
        "2.5")
          echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
        "2.6")
          echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
        "2.7")
          echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
      esac
    else
      # If another database is running, return whatever is required to pass the scan
      case $1 in
        "1.1.11")
          echo "700";;
        "1.2.29")
          echo "--etcd-certfile AND --etcd-keyfile";;
        "2.1")
          echo "cert-file AND key-file";;
        "2.2")
          echo "--client-cert-auth=true";;
        "2.3")
          echo "false";;
        "2.4")
          echo "peer-cert-file AND peer-key-file";;
        "2.5")
          echo "--client-cert-auth=true";;
        "2.6")
          echo "--peer-auto-tls=false";;
        "2.7")
          echo "--trusted-ca-file";;
      esac
    fi

Audit Execution:

  1. ./check_for_k3s_etcd.sh 1.2.29

Expected Result:

  1. '--etcd-certfile' is present AND '--etcd-keyfile' is present

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)

Result: pass

Remediation: Follow the Kubernetes documentation and set up the TLS connection on the apiserver. Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the TLS certificate and private key file parameters. --tls-cert-file=<path/to/tls-certificate-file> --tls-private-key-file=<path/to/tls-key-file>

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep -A1 'Running kube-apiserver' | tail -n2

Expected Result:

  1. '--tls-cert-file' is present AND '--tls-private-key-file' is present

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"

1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated)

Result: pass

Remediation: Follow the Kubernetes documentation and set up the TLS connection on the apiserver. Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the client certificate authority file. --client-ca-file=<path/to/client-ca-file>

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'client-ca-file'

Expected Result:

  1. '--client-ca-file' is present

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated)

Result: pass

Remediation: Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the etcd certificate authority file parameter. --etcd-cafile=<path/to/ca-file>

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-cafile'

Expected Result:

  1. '--etcd-cafile' is present

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)

Result: Not Applicable

Remediation: Follow the Kubernetes documentation and configure an EncryptionConfig file. Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the --encryption-provider-config parameter to the path of that file. For example, --encryption-provider-config=</path/to/EncryptionConfig/File> Permissive - Enabling encryption changes how data can be recovered as data is encrypted.

1.2.30 Ensure that encryption providers are appropriately configured (Manual)

Result: Not Applicable

Remediation: Follow the Kubernetes documentation and configure an EncryptionConfig file. In this file, choose aescbc, kms or secretbox as the encryption provider. Permissive - Enabling encryption changes how data can be recovered as data is encrypted.
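
Although 1.2.29 and 1.2.30 are rated Permissive, the cluster evaluated in this guide does run with --encryption-provider-config, as the kube-apiserver journal lines above show. In K3s, secrets encryption is enabled with a single server option rather than a hand-written EncryptionConfig file; a sketch assuming the documented secrets-encryption option:

    # /etc/rancher/k3s/config.yaml (sketch -- K3s then generates
    # /var/lib/rancher/k3s/server/cred/encryption-config.json itself)
    secrets-encryption: true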

1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)

Result: pass

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the below parameter. --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
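
In K3s the cipher list is again passed through the server configuration rather than a manifest. A sketch matching the suite list that appears in the returned value below:

    # /etc/rancher/k3s/config.yaml (sketch)
    kube-apiserver-arg:
      - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"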

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'tls-cipher-suites'

Expected Result:

  1. '--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

1.3 Controller Manager

1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)

Result: pass

Remediation: Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold, for example, --terminated-pod-gc-threshold=10
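
For K3s, controller manager flags use the analogous kube-controller-manager-arg passthrough; a sketch with the threshold value shown in the returned value below (tune it to your workload):

    # /etc/rancher/k3s/config.yaml (sketch)
    kube-controller-manager-arg:
      - "terminated-pod-gc-threshold=10"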

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'terminated-pod-gc-threshold'

Expected Result:

  1. '--terminated-pod-gc-threshold' is present

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"

1.3.2 Ensure that the --profiling argument is set to false (Automated)

Result: pass

Remediation: Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml on the control plane node and set the below parameter. --profiling=false

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'profiling'

Expected Result:

  1. '--profiling' is equal to 'false'

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"

1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)

Result: pass

Remediation: Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml on the control plane node to set the below parameter. --use-service-account-credentials=true

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'use-service-account-credentials'

Expected Result:

  1. '--use-service-account-credentials' is not equal to 'false'

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"

1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)

Result: pass

Remediation: Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml on the control plane node and set the --service-account-private-key-file parameter to the private key file for service accounts. --service-account-private-key-file=<filename>

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'service-account-private-key-file'

Expected Result:

  1. '--service-account-private-key-file' is present

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"

1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)

Result: pass

Remediation: Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml on the control plane node and set the --root-ca-file parameter to the certificate bundle file. --root-ca-file=<path/to/file>

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'root-ca-file'

Expected Result:

  1. '--root-ca-file' is present

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"

1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)

Result: Not Applicable

Remediation: Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. --feature-gates=RotateKubeletServerCertificate=true Not Applicable.

1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)

Result: pass

Remediation: Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml on the control plane node and ensure the correct value for the --bind-address parameter.

Audit:

  1. /bin/ps -ef | grep containerd | grep -v grep

Expected Result:

  1. '--bind-address' is present OR '--bind-address' is not present

Returned Value:

  1. root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 663 1 0 Sep11 ? 00:00:08 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd root 3021 1 0 Sep11 ? 00:00:29 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4790d392966915d995e666002c56ed4cce6dd86f305ce2ee390547a1fcbf6c82 -address /run/k3s/containerd/containerd.sock root 3035 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9d73e6a160ccde6c7c7d4ba0df3b9f696e3de2ebfc0f19a1dcbdf13aea496427 -address /run/k3s/containerd/containerd.sock root 3235 1 0 Sep11 ? 00:00:31 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 587f6221ee9f36c877231ada8d816a799dfda186332c33378d1eb16c72cdc87d -address /run/k3s/containerd/containerd.sock root 4435 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a74c9ec7d99785c2f2d4e6826aa80c22eb8b38249e8f99679ece00a818e9b7b3 -address /run/k3s/containerd/containerd.sock root 4985 1 0 Sep11 ? 00:00:53 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b0c9784dbe0fcbe8be10c857976e70ec84a208cb814e87b5ca085a02d434f8c -address /run/k3s/containerd/containerd.sock root 5056 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id f3231ff35f18056e74eda14907f296be15f7ea1c6ae5ab7904e27d4d18183301 -address /run/k3s/containerd/containerd.sock root 5868 1 0 Sep11 ? 00:00:27 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3e908c4d0b10df275bdef6f72fbcfa09517d11cf749236ad020364e14d77bc93 -address /run/k3s/containerd/containerd.sock root 6158 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id bec5780a5c73fa3154aa3b5ee26cdf23202db821205893e7e66ae17e6103e97b -address /run/k3s/containerd/containerd.sock root 7366 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a81b78845bdcaef710314c93e5ea0d0617f37a8929472f7b570ab90c6667f57f -address /run/k3s/containerd/containerd.sock root 97274 1 0 16:13 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id c90652c935d9e79af45b9a9ac2b4fe315e2a761a0525737fdb95c680123a164c -address /run/k3s/containerd/containerd.sock root 98309 1 0 16:16 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 431a2763636488efd7d104c228d41f4f2ecd8c06a7fc375d8977ab0d238936a8 -address /run/k3s/containerd/containerd.sock root 98493 1 0 16:16 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b480d2fa55c8cc0105ec1902887040c017f03d5b1eb6c73c24bb9d523ad9b37 -address /run/k3s/containerd/containerd.sock
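
The benchmark audit above inspects the containerd process list. As a supplementary check (not part of the benchmark), the flag itself can be confirmed with the same journal pattern used for the other controller manager controls:

    journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'bind-address'

The controller manager command lines shown in the earlier returned values include --bind-address=127.0.0.1.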

1.4 Scheduler

1.4.1 Ensure that the --profiling argument is set to false (Automated)

Result: pass

Remediation: Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file on the control plane node and set the below parameter. --profiling=false

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1

Expected Result:

  1. '--profiling' is equal to 'false'

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"

1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)

Result: pass

Remediation: Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml on the control plane node and ensure the correct value for the --bind-address parameter.

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'bind-address'

Expected Result:

  1. '--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"

2 Etcd Node Configuration

2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)

Result: pass

Remediation: Follow the etcd service documentation and configure TLS encryption. Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master node and set the below parameters. --cert-file=</path/to/ca-file> --key-file=</path/to/key-file>
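
K3s generates and manages the etcd TLS material itself and records these settings in the managed-etcd config file. For a quick manual check outside the audit script, you can run the same grep the script uses for this control:

    grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config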

Audit Script: check_for_k3s_etcd.sh

  1. #!/bin/bash
  2. # This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
  3. # before it checks the requirement
  4. set -eE
  5. handle_error() {
  6. echo "false"
  7. }
  8. trap 'handle_error' ERR
  9. if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
  10. case $1 in
  11. "1.1.11")
  12. echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
  13. "1.2.29")
  14. echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
  15. "2.1")
  16. echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
  17. "2.2")
  18. echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
  19. "2.3")
  20. echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
  21. "2.4")
  22. echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
  23. "2.5")
  24. echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
  25. "2.6")
  26. echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
  27. "2.7")
  28. echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
  29. esac
  30. else
  31. # If another database is running, return whatever is required to pass the scan
  32. case $1 in
  33. "1.1.11")
  34. echo "700";;
  35. "1.2.29")
  36. echo "--etcd-certfile AND --etcd-keyfile";;
  37. "2.1")
  38. echo "cert-file AND key-file";;
  39. "2.2")
  40. echo "--client-cert-auth=true";;
  41. "2.3")
  42. echo "false";;
  43. "2.4")
  44. echo "peer-cert-file AND peer-key-file";;
  45. "2.5")
  46. echo "--client-cert-auth=true";;
  47. "2.6")
  48. echo "--peer-auto-tls=false";;
  49. "2.7")
  50. echo "--trusted-ca-file";;
  51. esac
  52. fi
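
Save the script locally and make it executable before running the audit executions in this section, for example:

  1. chmod +x ./check_for_k3s_etcd.sh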

Audit Execution:

  1. ./check_for_k3s_etcd.sh 2.1

Expected Result:

  1. 'cert-file' is present AND 'key-file' is present

Returned Value:

  1. cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key

2.2 Ensure that the --client-cert-auth argument is set to true (Automated)

Result: pass

Remediation: Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master node and set the below parameter. --client-cert-auth="true"

Audit Script: check_for_k3s_etcd.sh

  1. #!/bin/bash
  2. # This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
  3. # before it checks the requirement
  4. set -eE
  5. handle_error() {
  6. echo "false"
  7. }
  8. trap 'handle_error' ERR
  9. if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
  10. case $1 in
  11. "1.1.11")
  12. echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
  13. "1.2.29")
  14. echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
  15. "2.1")
  16. echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
  17. "2.2")
  18. echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
  19. "2.3")
  20. echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
  21. "2.4")
  22. echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
  23. "2.5")
  24. echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
  25. "2.6")
  26. echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
  27. "2.7")
  28. echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
  29. esac
  30. else
  31. # If another database is running, return whatever is required to pass the scan
  32. case $1 in
  33. "1.1.11")
  34. echo "700";;
  35. "1.2.29")
  36. echo "--etcd-certfile AND --etcd-keyfile";;
  37. "2.1")
  38. echo "cert-file AND key-file";;
  39. "2.2")
  40. echo "--client-cert-auth=true";;
  41. "2.3")
  42. echo "false";;
  43. "2.4")
  44. echo "peer-cert-file AND peer-key-file";;
  45. "2.5")
  46. echo "--client-cert-auth=true";;
  47. "2.6")
  48. echo "--peer-auto-tls=false";;
  49. "2.7")
  50. echo "--trusted-ca-file";;
  51. esac
  52. fi

Audit Execution:

  1. ./check_for_k3s_etcd.sh 2.2

Expected Result:

  1. '--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true'

Returned Value:

  1. client-cert-auth: true

2.3 Ensure that the --auto-tls argument is not set to true (Automated)

Result: pass

Remediation: Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master node and either remove the --auto-tls parameter or set it to false. --auto-tls=false

Audit Script: check_for_k3s_etcd.sh

  1. #!/bin/bash
  2. # This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
  3. # before it checks the requirement
  4. set -eE
  5. handle_error() {
  6. echo "false"
  7. }
  8. trap 'handle_error' ERR
  9. if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
  10. case $1 in
  11. "1.1.11")
  12. echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
  13. "1.2.29")
  14. echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
  15. "2.1")
  16. echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
  17. "2.2")
  18. echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
  19. "2.3")
  20. echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
  21. "2.4")
  22. echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
  23. "2.5")
  24. echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
  25. "2.6")
  26. echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
  27. "2.7")
  28. echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
  29. esac
  30. else
  31. # If another database is running, return whatever is required to pass the scan
  32. case $1 in
  33. "1.1.11")
  34. echo "700";;
  35. "1.2.29")
  36. echo "--etcd-certfile AND --etcd-keyfile";;
  37. "2.1")
  38. echo "cert-file AND key-file";;
  39. "2.2")
  40. echo "--client-cert-auth=true";;
  41. "2.3")
  42. echo "false";;
  43. "2.4")
  44. echo "peer-cert-file AND peer-key-file";;
  45. "2.5")
  46. echo "--client-cert-auth=true";;
  47. "2.6")
  48. echo "--peer-auto-tls=false";;
  49. "2.7")
  50. echo "--trusted-ca-file";;
  51. esac
  52. fi

Audit Execution:

  1. ./check_for_k3s_etcd.sh 2.3

Expected Result:

  1. 'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present

Returned Value:

  1. error: process ID list syntax error Usage: ps [options] Try 'ps --help <simple|list|output|threads|misc|all\>' or 'ps --help <s|l|o|t|m|a\>' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory

2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)

Result: pass

Remediation: Follow the etcd service documentation and configure peer TLS encryption as appropriate for your etcd cluster. Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master node and set the below parameters. --peer-cert-file=</path/to/peer-cert-file> --peer-key-file=</path/to/peer-key-file>

Audit Script: check_for_k3s_etcd.sh

  1. #!/bin/bash
  2. # This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
  3. # before it checks the requirement
  4. set -eE
  5. handle_error() {
  6. echo "false"
  7. }
  8. trap 'handle_error' ERR
  9. if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
  10. case $1 in
  11. "1.1.11")
  12. echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
  13. "1.2.29")
  14. echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
  15. "2.1")
  16. echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
  17. "2.2")
  18. echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
  19. "2.3")
  20. echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
  21. "2.4")
  22. echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
  23. "2.5")
  24. echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
  25. "2.6")
  26. echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
  27. "2.7")
  28. echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
  29. esac
  30. else
  31. # If another database is running, return whatever is required to pass the scan
  32. case $1 in
  33. "1.1.11")
  34. echo "700";;
  35. "1.2.29")
  36. echo "--etcd-certfile AND --etcd-keyfile";;
  37. "2.1")
  38. echo "cert-file AND key-file";;
  39. "2.2")
  40. echo "--client-cert-auth=true";;
  41. "2.3")
  42. echo "false";;
  43. "2.4")
  44. echo "peer-cert-file AND peer-key-file";;
  45. "2.5")
  46. echo "--client-cert-auth=true";;
  47. "2.6")
  48. echo "--peer-auto-tls=false";;
  49. "2.7")
  50. echo "--trusted-ca-file";;
  51. esac
  52. fi

Audit Execution:

  1. ./check_for_k3s_etcd.sh 2.4

Expected Result:

  1. 'cert-file' is present AND 'key-file' is present

Returned Value:

  1. cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key

2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)

Result: pass

Remediation: Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master node and set the below parameter. --peer-client-cert-auth=true

Audit Script: check_for_k3s_etcd.sh

  1. #!/bin/bash
  2. # This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
  3. # before it checks the requirement
  4. set -eE
  5. handle_error() {
  6. echo "false"
  7. }
  8. trap 'handle_error' ERR
  9. if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
  10. case $1 in
  11. "1.1.11")
  12. echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
  13. "1.2.29")
  14. echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
  15. "2.1")
  16. echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
  17. "2.2")
  18. echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
  19. "2.3")
  20. echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
  21. "2.4")
  22. echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
  23. "2.5")
  24. echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
  25. "2.6")
  26. echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
  27. "2.7")
  28. echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
  29. esac
  30. else
  31. # If another database is running, return whatever is required to pass the scan
  32. case $1 in
  33. "1.1.11")
  34. echo "700";;
  35. "1.2.29")
  36. echo "--etcd-certfile AND --etcd-keyfile";;
  37. "2.1")
  38. echo "cert-file AND key-file";;
  39. "2.2")
  40. echo "--client-cert-auth=true";;
  41. "2.3")
  42. echo "false";;
  43. "2.4")
  44. echo "peer-cert-file AND peer-key-file";;
  45. "2.5")
  46. echo "--client-cert-auth=true";;
  47. "2.6")
  48. echo "--peer-auto-tls=false";;
  49. "2.7")
  50. echo "--trusted-ca-file";;
  51. esac
  52. fi

Audit Execution:

  1. ./check_for_k3s_etcd.sh 2.5

Expected Result:

  1. '--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true'

Returned Value:

  1. client-cert-auth: true

2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)

Result: pass

Remediation: Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master node and either remove the --peer-auto-tls parameter or set it to false. --peer-auto-tls=false

Audit Script: check_for_k3s_etcd.sh

  1. #!/bin/bash
  2. # This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
  3. # before it checks the requirement
  4. set -eE
  5. handle_error() {
  6. echo "false"
  7. }
  8. trap 'handle_error' ERR
  9. if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
  10. case $1 in
  11. "1.1.11")
  12. echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
  13. "1.2.29")
  14. echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
  15. "2.1")
  16. echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
  17. "2.2")
  18. echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
  19. "2.3")
  20. echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
  21. "2.4")
  22. echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
  23. "2.5")
  24. echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
  25. "2.6")
  26. echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
  27. "2.7")
  28. echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
  29. esac
  30. else
  31. # If another database is running, return whatever is required to pass the scan
  32. case $1 in
  33. "1.1.11")
  34. echo "700";;
  35. "1.2.29")
  36. echo "--etcd-certfile AND --etcd-keyfile";;
  37. "2.1")
  38. echo "cert-file AND key-file";;
  39. "2.2")
  40. echo "--client-cert-auth=true";;
  41. "2.3")
  42. echo "false";;
  43. "2.4")
  44. echo "peer-cert-file AND peer-key-file";;
  45. "2.5")
  46. echo "--client-cert-auth=true";;
  47. "2.6")
  48. echo "--peer-auto-tls=false";;
  49. "2.7")
  50. echo "--trusted-ca-file";;
  51. esac
  52. fi

Audit Execution:

  1. ./check_for_k3s_etcd.sh 2.6

Expected Result:

  1. 'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present

Returned Value:

  1. error: process ID list syntax error Usage: ps [options] Try 'ps --help <simple|list|output|threads|misc|all\>' or 'ps --help <s|l|o|t|m|a\>' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory

2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)

Result: pass

Remediation: [Manual test] Follow the etcd documentation and create a dedicated certificate authority setup for the etcd service. Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master node and set the below parameter. --trusted-ca-file=</path/to/ca-file>

Audit Script: check_for_k3s_etcd.sh

  1. #!/bin/bash
  2. # This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
  3. # before it checks the requirement
  4. set -eE
  5. handle_error() {
  6. echo "false"
  7. }
  8. trap 'handle_error' ERR
  9. if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
  10. case $1 in
  11. "1.1.11")
  12. echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
  13. "1.2.29")
  14. echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
  15. "2.1")
  16. echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
  17. "2.2")
  18. echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
  19. "2.3")
  20. echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
  21. "2.4")
  22. echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
  23. "2.5")
  24. echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
  25. "2.6")
  26. echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
  27. "2.7")
  28. echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
  29. esac
  30. else
  31. # If another database is running, return whatever is required to pass the scan
  32. case $1 in
  33. "1.1.11")
  34. echo "700";;
  35. "1.2.29")
  36. echo "--etcd-certfile AND --etcd-keyfile";;
  37. "2.1")
  38. echo "cert-file AND key-file";;
  39. "2.2")
  40. echo "--client-cert-auth=true";;
  41. "2.3")
  42. echo "false";;
  43. "2.4")
  44. echo "peer-cert-file AND peer-key-file";;
  45. "2.5")
  46. echo "--client-cert-auth=true";;
  47. "2.6")
  48. echo "--peer-auto-tls=false";;
  49. "2.7")
  50. echo "--trusted-ca-file";;
  51. esac
  52. fi

Audit Execution:

  1. ./check_for_k3s_etcd.sh 2.7

Expected Result:

  1. 'trusted-ca-file' is present

Returned Value:

  1. trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt

3.1 Authentication and Authorization

3.1.1 Client certificate authentication should not be used for users (Manual)

Result: warn

Remediation: Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented in place of client certificates.

3.1.2 Service account token authentication should not be used for users (Manual)

Result: warn

Remediation: Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented in place of service account tokens.

3.1.3 Bootstrap token authentication should not be used for users (Manual)

Result: warn

Remediation: Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented in place of bootstrap tokens.

3.2 Logging

3.2.1 Ensure that a minimal audit policy is created (Manual)

Result: pass

Remediation: Create an audit policy file for your cluster.

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file'

Expected Result:

  1. '--audit-policy-file' is present

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

3.2.2 Ensure that the audit policy covers key security concerns (Manual)

Result: warn

Remediation: Review the audit policy provided for the cluster and ensure that it covers at least the following areas (a hedged example policy is sketched after this list):

  • Access to Secrets managed by the cluster. Care should be taken to only log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in order to avoid risk of logging sensitive data.
  • Modification of Pod and Deployment objects.
  • Use of pods/exec, pods/portforward, pods/proxy and services/proxy. For most requests, minimally logging at the Metadata level is recommended (the most basic level of logging).
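
As one hedged illustration (a sketch, not the policy shipped by the hardening guide), a policy covering these areas could look like the following, written to the --audit-policy-file path shown under control 3.2.1:

  1. # Sketch: minimal audit policy covering the areas listed above
  2. cat <<'EOF' > /var/lib/rancher/k3s/server/audit.yaml
  3. apiVersion: audit.k8s.io/v1
  4. kind: Policy
  5. rules:
  6. # Metadata only for sensitive objects, so payloads are never logged
  7. - level: Metadata
  8.   resources:
  9.   - group: ""
  10.     resources: ["secrets", "configmaps"]
  11.   - group: "authentication.k8s.io"
  12.     resources: ["tokenreviews"]
  13. # Record use of exec, port-forward, and proxy sub-resources
  14. - level: Metadata
  15.   resources:
  16.   - group: ""
  17.     resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
  18. # Capture modifications to Pods and Deployments at Request level
  19. - level: Request
  20.   verbs: ["create", "update", "patch", "delete"]
  21.   resources:
  22.   - group: ""
  23.     resources: ["pods"]
  24.   - group: "apps"
  25.     resources: ["deployments"]
  26. # Everything else at Metadata level
  27. - level: Metadata
  28. EOF
  29. # Restart so the API server re-reads the policy
  30. systemctl restart k3s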

4.1 Worker Node Configuration Files

4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)

Result: Not Applicable

Remediation: Run the below command (based on the file location on your system) on each worker node. For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf Not Applicable - All configuration is passed in as arguments at container run time.

4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)

Result: Not Applicable

Remediation: Run the below command (based on the file location on your system) on each worker node. For example, chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf Not Applicable. All configuration is passed in as arguments at container run time.

4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Manual)

Result: warn

Remediation: Run the below command (based on the file location on your system) on each worker node. For example, chmod 600 /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig

Audit:

  1. stat -c %a /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig

Expected Result:

  1. 'permissions' is present

Returned Value:

  1. 600

4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)

Result: pass

Remediation: Run the below command (based on the file location on your system) on each worker node. For example, chown root:root /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig

Audit:

  1. /bin/sh -c 'if test -e /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig; fi'

Expected Result:

  1. 'root:root' is present

Returned Value:

  1. root:root

4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)

Result: pass

Remediation: Run the below command (based on the file location on your system) on each worker node. For example, chmod 600 /var/lib/rancher/k3s/server/cred/admin.kubeconfig

Audit:

  1. /bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi'

Expected Result:

  1. permissions has permissions 600, expected 600 or more restrictive

Returned Value:

  1. permissions=600

4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)

Result: pass

Remediation: Run the below command (based on the file location on your system) on each worker node. For example, chown root:root /var/lib/rancher/k3s/server/cred/admin.kubeconfig

Audit:

  1. stat -c %U:%G /var/lib/rancher/k3s/agent/kubelet.kubeconfig

Expected Result:

  1. 'root:root' is present

Returned Value:

  1. root:root

4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Manual)

Result: warn

Remediation: Run the following command to modify the file permissions of the --client-ca-file: chmod 600 <filename>

Audit:

  1. stat -c %a /var/lib/rancher/k3s/server/tls/server-ca.crt

Expected Result:

  1. 'permissions' is present

Returned Value:

  1. 644

4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)

Result: pass

Remediation: Run the following command to modify the ownership of the --client-ca-file: chown root:root <filename>

Audit:

  1. stat -c %U:%G /var/lib/rancher/k3s/server/tls/client-ca.crt

Expected Result:

  1. 'root:root' is equal to 'root:root'

Returned Value:

  1. root:root

4.1.9 Ensure that the kubelet --config configuration file has permissions set to 600 or more restrictive (Automated)

Result: Not Applicable

Remediation: Run the following command (using the config file location identified in the Audit step) chmod 600 /var/lib/kubelet/config.yaml Not Applicable. All configuration is passed in as arguments at container run time.

4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)

Result: Not Applicable

Remediation: Run the following command (using the config file location identified in the Audit step) chown root:root /var/lib/kubelet/config.yaml Not Applicable. All configuration is passed in as arguments at container run time.

4.2 Kubelet

4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)

Result: pass

Remediation: If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to false. If using executable arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. --anonymous-auth=false Based on your system, restart the kubelet service. For example, systemctl daemon-reload systemctl restart kubelet.service

Audit:

  1. /bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" | grep -v grep; else echo "--anonymous-auth=false"; fi'

Expected Result:

  1. '--anonymous-auth' is equal to 'false'

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)

Result: pass

Remediation: If using a Kubelet config file, edit the file to set authorization.mode to Webhook. If using executable arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameter in KUBELET_AUTHZ_ARGS variable. --authorization-mode=Webhook Based on your system, restart the kubelet service. For example, systemctl daemon-reload systemctl restart kubelet.service

Audit:

  1. /bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode" | grep -v grep; else echo "--authorization-mode=Webhook"; fi'

Audit Config:

  1. /bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'

Expected Result:

  1. '--authorization-mode' does not have 'AlwaysAllow'

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)

Result: pass

Remediation: If using a Kubelet config file, edit the file to set authentication.x509.clientCAFile to the location of the client CA file. If using command line arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameter in KUBELET_AUTHZ_ARGS variable. --client-ca-file=<path/to/client-ca-file> Based on your system, restart the kubelet service. For example, systemctl daemon-reload systemctl restart kubelet.service

Audit:

  1. /bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "client-ca-file" | grep -v grep; else echo "--client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt"; fi'

Expected Result:

  1. '--client-ca-file' is present

Returned Value:

  1. Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

4.2.4 Verify that the --read-only-port argument is set to 0 (Manual)

Result: pass

Remediation: If using a Kubelet config file, edit the file to set readOnlyPort to 0. If using command line arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. --read-only-port=0 Based on your system, restart the kubelet service. For example, systemctl daemon-reload systemctl restart kubelet.service

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'read-only-port'

Audit Config:

  1. /bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'

Expected Result:

  1. '--read-only-port' is equal to '0' OR '--read-only-port' is not present

Returned Value:

  1. Sep 11 20:52:15 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:15Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-12-34 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=af02ecbc-1e4e-422e-8b4d-4b2aa24a9d46 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"

4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)

Result: warn

Remediation: If using a Kubelet config file, edit the file to set streamingConnectionIdleTimeout to a value other than 0. If using command line arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. --streaming-connection-idle-timeout=5m Based on your system, restart the kubelet service. For example, systemctl daemon-reload systemctl restart kubelet.service

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'streaming-connection-idle-timeout'
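
K3s has no kubeadm drop-in file; kubelet flags are supplied through the K3s configuration instead. A minimal sketch, assuming the default /etc/rancher/k3s/config.yaml location:

  1. # Sketch: set the kubelet streaming idle timeout via the K3s configuration file
  2. # (merge with any existing kubelet-arg entries rather than appending a duplicate key)
  3. cat <<'EOF' >> /etc/rancher/k3s/config.yaml
  4. kubelet-arg:
  5.   - "streaming-connection-idle-timeout=5m"
  6. EOF
  7. systemctl restart k3s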

4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated)

Result: pass

Remediation: If using a Kubelet config file, edit the file to set makeIPTablesUtilChains to true. If using command line arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and remove the --make-iptables-util-chains argument from the KUBELET_SYSTEM_PODS_ARGS variable. Based on your system, restart the kubelet service. For example: systemctl daemon-reload systemctl restart kubelet.service

Audit:

  1. journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'make-iptables-util-chains'

Expected Result:

  1. '--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present

Returned Value:

  1. Sep 11 20:52:15 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:15Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-12-34 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=af02ecbc-1e4e-422e-8b4d-4b2aa24a9d46 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"

4.2.7 Ensure that the --hostname-override argument is not set (Manual)

Result: Not Applicable

Remediation: Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and remove the --hostname-override argument from the KUBELET_SYSTEM_PODS_ARGS variable. Based on your system, restart the kubelet service. For example, systemctl daemon-reload systemctl restart kubelet.service Not Applicable.

4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)

Result: pass

Remediation: If using a Kubelet config file, edit the file to set eventRecordQPS to an appropriate level. If using command line arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. Based on your system, restart the kubelet service. For example, systemctl daemon-reload systemctl restart kubelet.service

Audit:

  1. /bin/ps -fC containerd

Audit Config:

  1. /bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'

Expected Result:

  1. '--event-qps' is present OR '--event-qps' is not present

Returned Value:

  1. UID PID PPID C STIME TTY TIME CMD root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd

4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)

Result: Not Applicable

Remediation: If using a Kubelet config file, edit the file to set tlsCertFile to the location of the certificate file to use to identify this Kubelet, and tlsPrivateKeyFile to the location of the corresponding private key file. If using command line arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameters in KUBELET_CERTIFICATE_ARGS variable. --tls-cert-file=<path/to/tls-certificate-file> --tls-private-key-file=<path/to/tls-key-file> Based on your system, restart the kubelet service. For example, systemctl daemon-reload systemctl restart kubelet.service Permissive - When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.

4.2.10 Ensure that the --rotate-certificates argument is not set to false (Manual)

Result: pass

Remediation: If using a Kubelet config file, edit the file to add the line rotateCertificates to true or remove it altogether to use the default value. If using command line arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and remove the --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS variable. Based on your system, restart the kubelet service. For example, systemctl daemon-reload systemctl restart kubelet.service

Audit:

  1. /bin/ps -fC containerd

Audit Config:

  1. /bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'

Expected Result:

  1. '--rotate-certificates' is present OR '--rotate-certificates' is not present

Returned Value:

  1. UID PID PPID C STIME TTY TIME CMD root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd

4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)

Result: pass

Remediation: Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable. --feature-gates=RotateKubeletServerCertificate=true Based on your system, restart the kubelet service. For example: systemctl daemon-reload systemctl restart kubelet.service

Audit:

  1. /bin/ps -fC containerd

Audit Config:

  1. /bin/cat /var/lib/kubelet/config.yaml

Expected Result:

  1. 'RotateKubeletServerCertificate' is present OR 'RotateKubeletServerCertificate' is not present

Returned Value:

  1. UID PID PPID C STIME TTY TIME CMD root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd

4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)

Result: warn

Remediation: If using a Kubelet config file, edit the file to set TLSCipherSuites to TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 or to a subset of these values. If using executable arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the --tls-cipher-suites parameter as follows, or to a subset of these values. --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 Based on your system, restart the kubelet service. For example: systemctl daemon-reload systemctl restart kubelet.service

Audit:

  1. /bin/ps -fC containerd

Audit Config:

  1. /bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'

Expected Result:

  1. '--tls-cipher-suites' is present

Returned Value:

  1. UID PID PPID C STIME TTY TIME CMD root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
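
The same kubelet-arg mechanism sketched under 4.2.5 applies here; for example, to pin a subset of the strong ciphers listed in the remediation:

  1. # Sketch: restrict kubelet TLS ciphers via the K3s configuration file
  2. cat <<'EOF' >> /etc/rancher/k3s/config.yaml
  3. kubelet-arg:
  4.   - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
  5. EOF
  6. systemctl restart k3s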

4.2.13 Ensure that a limit is set on pod PIDs (Manual)

Result: warn

Remediation: Decide on an appropriate level for this parameter and set it, either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting.

Audit:

  1. /bin/ps -fC containerd

Audit Config:

  1. /bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'

Expected Result:

  1. '--pod-max-pids' is present

Returned Value:

  1. UID PID PPID C STIME TTY TIME CMD root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
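
Again via kubelet-arg, a hedged sketch (4096 is only an example value; size it for your workloads):

  1. # Sketch: cap the number of PIDs per pod via the K3s configuration file
  2. cat <<'EOF' >> /etc/rancher/k3s/config.yaml
  3. kubelet-arg:
  4.   - "pod-max-pids=4096"
  5. EOF
  6. systemctl restart k3s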

5.1 RBAC and Service Accounts

5.1.1 Ensure that the cluster-admin role is only used where required (Manual)

Result: warn

Remediation: Identify all clusterrolebindings to the cluster-admin role. Check if they are used and if they need this role or if they could use a role with fewer privileges. Where possible, first bind users to a lower privileged role and then remove the clusterrolebinding to the cluster-admin role: kubectl delete clusterrolebinding [name]
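
One way to enumerate the bindings to review (a sketch assuming kubectl and jq are available):

  1. # List every ClusterRoleBinding that grants cluster-admin, with its subjects
  2. kubectl get clusterrolebindings -o json | jq -r '.items[] | select(.roleRef.name == "cluster-admin") | "\(.metadata.name): \((.subjects // []) | map(.kind + "/" + .name) | join(", "))"'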

5.1.2 Minimize access to secrets (Manual)

Result: warn

Remediation: Where possible, remove get, list and watch access to Secret objects in the cluster.
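
A starting point for the review (a sketch; wildcard verbs and resources are not caught by this filter):

  1. # Flag ClusterRoles whose rules allow get, list, or watch on secrets
  2. kubectl get clusterroles -o json | jq -r '.items[] | select(.rules[]? | ((.resources // []) | index("secrets")) and ((.verbs // []) | map(select(. == "get" or . == "list" or . == "watch")) | length > 0)) | .metadata.name' | sort -u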

5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)

Result: warn

Remediation: Where possible replace any use of wildcards in clusterroles and roles with specific objects or actions.

5.1.4 Minimize access to create pods (Manual)

Result: warn

Remediation: Where possible, remove create access to pod objects in the cluster.

5.1.5 Ensure that default service accounts are not actively used (Manual)

Result: pass

Remediation: Create explicit service accounts wherever a Kubernetes workload requires specific access to the Kubernetes API server. Modify the configuration of each default service account to include this value automountServiceAccountToken: false
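
For example, the default service account of a single namespace can be patched as follows (my-namespace is a placeholder; repeat per namespace):

  1. kubectl patch serviceaccount default -n my-namespace -p '{"automountServiceAccountToken": false}'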

Audit Script: check_for_default_sa.sh

  1. #!/bin/bash
  2. set -eE
  3. handle_error() {
  4. echo "false"
  5. }
  6. trap 'handle_error' ERR
  7. count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l)
  8. if [[ ${count_sa} -gt 0 ]]; then
  9. echo "false"
  10. exit
  11. fi
  12. for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name")
  13. do
  14. for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"')
  15. do
  16. read kind name <<<$(IFS=","; echo $result)
  17. resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l)
  18. if [[ ${resource_count} -gt 0 ]]; then
  19. echo "false"
  20. exit
  21. fi
  22. done
  23. done
  24. echo "true"

Audit Execution:

  1. ./check_for_default_sa.sh

Expected Result:

  1. 'true' is equal to 'true'

Returned Value:

  1. true

5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)

Result: warn

Remediation: Modify the definition of pods and service accounts which do not need to mount service account tokens to disable it.

5.1.7 Avoid use of system:masters group (Manual)

Result: warn

Remediation: Remove the system:masters group from all users in the cluster.

5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)

Result: warn

Remediation: Where possible, remove the impersonate, bind and escalate rights from subjects.

5.1.9 Minimize access to create persistent volumes (Manual)

Result: warn

Remediation: Where possible, remove create access to PersistentVolume objects in the cluster.

5.1.10 Minimize access to the proxy sub-resource of nodes (Manual)

Result: warn

Remediation: Where possible, remove access to the proxy sub-resource of node objects.

5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)

Result: warn

Remediation: Where possible, remove access to the approval sub-resource of certificatesigningrequest objects.

5.1.12 Minimize access to webhook configuration objects (Manual)

Result: warn

Remediation: Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.

5.1.13 Minimize access to the service account token creation (Manual)

Result: warn

Remediation: Where possible, remove access to the token sub-resource of serviceaccount objects.

5.2 Pod Security Standards

5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)

Result: warn

Remediation: Ensure that either Pod Security Admission or an external policy control system is in place for every namespace which contains user workloads.
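
With the built-in Pod Security Admission controller this is typically done per namespace via labels, for example (my-app-namespace is a placeholder):

  1. # Enforce the "restricted" Pod Security Standard on an example namespace
  2. kubectl label namespace my-app-namespace pod-security.kubernetes.io/enforce=restricted pod-security.kubernetes.io/enforce-version=latest

Enforcing the restricted profile in this way also largely covers the admission concerns in controls 5.2.2 through 5.2.13.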

5.2.2 Minimize the admission of privileged containers (Manual)

Result: warn

Remediation: Add policies to each namespace in the cluster which has user workloads to restrict the admission of privileged containers.

5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)

Result: warn

Remediation: Add policies to each namespace in the cluster which has user workloads to restrict the admission of hostPID containers.

5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)

Result: warn

Remediation: Add policies to each namespace in the cluster which has user workloads to restrict the admission of hostIPC containers.

5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)

Result: warn

Remediation: Add policies to each namespace in the cluster which has user workloads to restrict the admission of hostNetwork containers.

5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)

Result: warn

Remediation: Add policies to each namespace in the cluster which has user workloads to restrict the admission of containers with .spec.allowPrivilegeEscalation set to true.

5.2.7 Minimize the admission of root containers (Automated)

Result: warn

Remediation: Create a policy for each namespace in the cluster, ensuring that either MustRunAsNonRoot or MustRunAs, with a UID range that does not include 0, is set.

5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated)

Result: warn

Remediation: Add policies to each namespace in the cluster which has user workloads to restrict the admission of containers with the NET_RAW capability.

5.2.9 Minimize the admission of containers with added capabilities (Automated)

Result: warn

Remediation: Ensure that allowedCapabilities is not present in policies for the cluster unless it is set to an empty array.

5.2.10 Minimize the admission of containers with capabilities assigned (Manual)

Result: warn

Remediation: Review the use of capabilities in applications running on your cluster. Where a namespace contains applications which do not require any Linux capabilities to operate, consider adding a PSP which forbids the admission of containers which do not drop all capabilities.

5.2.11 Minimize the admission of Windows HostProcess containers (Manual)

Result: warn

Remediation: Add policies to each namespace in the cluster which has user workloads to restrict the admission of containers that have .securityContext.windowsOptions.hostProcess set to true.

5.2.12 Minimize the admission of HostPath volumes (Manual)

Result: warn

Remediation: Add policies to each namespace in the cluster which has user workloads to restrict the admission of containers with hostPath volumes.

5.2.13 Minimize the admission of containers which use HostPorts (Manual)

Result: warn

Remediation: Add policies to each namespace in the cluster which has user workloads to restrict the admission of containers which use hostPort sections.

5.3 Network Policies and CNI

5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)

Result: warn

Remediation: If the CNI plugin in use does not support network policies, consideration should be given to making use of a different plugin, or finding an alternate mechanism for restricting traffic in the Kubernetes cluster.

5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)

Result: warn

Remediation: Follow the documentation and create NetworkPolicy objects as you need them.
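
A common starting point is a default-deny policy per namespace, on top of which required traffic is explicitly allowed. A minimal sketch (my-app-namespace is a placeholder):

  1. # Sketch: deny all ingress and egress by default in an example namespace
  2. kubectl apply -n my-app-namespace -f - <<'EOF'
  3. apiVersion: networking.k8s.io/v1
  4. kind: NetworkPolicy
  5. metadata:
  6.   name: default-deny-all
  7. spec:
  8.   podSelector: {}
  9.   policyTypes:
  10.   - Ingress
  11.   - Egress
  12. EOF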

5.4 Secrets Management

5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)

Result: warn

Remediation: If possible, rewrite application code to read Secrets from mounted secret files, rather than from environment variables.

5.4.2 Consider external secret storage (Manual)

Result: warn

Remediation: Refer to the Secrets management options offered by your cloud provider or a third-party secrets management solution.

5.5 Extensible Admission Control

5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)

Result: warn

Remediation: Follow the Kubernetes documentation and setup image provenance.

5.7 General Policies

5.7.1 Create administrative boundaries between resources using namespaces (Manual)

Result: warn

Remediation: Follow the documentation and create namespaces for objects in your deployment as you need them.

5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)

Result: warn

Remediation: Use securityContext to enable the docker/default seccomp profile in your pod definitions. An example is as below: securityContext: seccompProfile: type: RuntimeDefault
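
Rendered as a complete (if minimal) pod, the fragment from the remediation looks like this sketch (the pod name and image are placeholders):

  1. # Sketch: minimal pod running under the runtime default seccomp profile
  2. kubectl apply -f - <<'EOF'
  3. apiVersion: v1
  4. kind: Pod
  5. metadata:
  6.   name: seccomp-example
  7. spec:
  8.   securityContext:
  9.     seccompProfile:
  10.       type: RuntimeDefault
  11.   containers:
  12.   - name: app
  13.     image: nginx:alpine
  14. EOF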

5.7.3 Apply SecurityContext to your Pods and Containers (Manual)

Result: warn

Remediation: Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker Containers.

5.7.4 The default namespace should not be used (Manual)

Result: warn

Remediation: Ensure that namespaces are created to allow for appropriate segregation of Kubernetes resources and that all new resources are created in a specific namespace.