Runtime

k0s supports any container runtime that implements the CRI specification.

k0s comes bundled with containerd as the default Container Runtime Interface (CRI) and runc as the default low-level runtime. In most cases they don’t require any configuration changes. However, if custom configuration is needed, this page provides some examples.

containerd configuration

By default, k0s manages the full containerd configuration. Users have the option of fully overriding, and thus also managing, the configuration themselves.

User managed containerd configuration

In the default k0s-generated configuration there’s a “magic” comment telling k0s that it is k0s-managed:

```toml
# k0s_managed=true
```

If you wish to take over the configuration management, remove this line.

To make changes to the containerd configuration, you must first generate a default containerd configuration into /etc/k0s/containerd.toml:

```shell
containerd config default > /etc/k0s/containerd.toml
```

k0s runs containerd with the following default values:

```shell
/var/lib/k0s/bin/containerd \
    --root=/var/lib/k0s/containerd \
    --state=/run/k0s/containerd \
    --address=/run/k0s/containerd.sock \
    --config=/etc/k0s/containerd.toml
```

Next, add the following default values to the configuration file:

```toml
version = 2
root = "/var/lib/k0s/containerd"
state = "/run/k0s/containerd"
...

[grpc]
  address = "/run/k0s/containerd.sock"
```
k0s managed dynamic runtime configuration

As of 1.27.1, k0s allows dynamic configuration of containerd CRI runtimes. This works by k0s creating a special directory in /etc/k0s/containerd.d/ where users can place partial containerd configuration files.

K0s will automatically pick up these files and add them as containerd configuration imports. If a partial configuration file contains a CRI plugin configuration section, k0s will instead treat such a file as a merge patch to k0s’s default containerd configuration. This is to mitigate containerd’s decision to replace rather than merge individual plugin configuration sections from imported configuration files. However, this behavior may change in future releases of containerd.
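For illustration, a partial configuration file that does not touch the CRI plugin section is taken in as a plain import. A minimal sketch (the file name and the debug level setting are illustrative, not part of any default setup):

```toml
# /etc/k0s/containerd.d/debug.toml (illustrative file name)
version = 2

[debug]
  level = "debug"
```

Because this file contains no `[plugins."io.containerd.grpc.v1.cri"]` section, k0s adds it to the containerd configuration as a regular import; a file that does touch that section, like the gVisor example below, is merged into k0s’s default CRI configuration instead.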

Examples

The following sections provide examples of how to configure different runtimes for containerd using k0s-managed drop-in configurations.

Using gVisor

gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system call interface. It provides an additional layer of isolation between running applications and the host operating system.

  1. Install the needed gVisor binaries into the host.

    ```shell
    (
      set -e
      ARCH=$(uname -m)
      URL=https://storage.googleapis.com/gvisor/releases/release/latest/${ARCH}
      wget ${URL}/runsc ${URL}/runsc.sha512 \
        ${URL}/containerd-shim-runsc-v1 ${URL}/containerd-shim-runsc-v1.sha512
      sha512sum -c runsc.sha512 \
        -c containerd-shim-runsc-v1.sha512
      rm -f *.sha512
      chmod a+rx runsc containerd-shim-runsc-v1
      sudo mv runsc containerd-shim-runsc-v1 /usr/local/bin
    )
    ```

    Refer to the gVisor install docs for more information.

  2. Prepare the configuration for k0s-managed containerd to use gVisor as an additional runtime:

    ```shell
    cat <<EOF | sudo tee /etc/k0s/containerd.d/gvisor.toml
    version = 2

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
      runtime_type = "io.containerd.runsc.v1"
    EOF
    ```
  3. Start and join the worker into the cluster, as normal:

    ```shell
    k0s worker $token
    ```
  4. Register the runtime on the Kubernetes side to make the gVisor runtime usable for workloads (by default, containerd uses plain runc as the runtime):

    ```shell
    cat <<EOF | kubectl apply -f -
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: gvisor
    handler: runsc
    EOF
    ```

    At this point, you can use the gVisor runtime for your workloads:

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-gvisor
    spec:
      runtimeClassName: gvisor
      containers:
      - name: nginx
        image: nginx
    ```
  5. (Optional) Verify that the created nginx pod is running under the gVisor runtime:

    ```console
    # kubectl exec nginx-gvisor -- dmesg | grep -i gvisor
    [ 0.000000] Starting gVisor...
    ```

Using nvidia-container-runtime

First, install the NVIDIA runtime components:

```shell
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
  && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
  && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-runtime
```

Next, drop the NVIDIA runtime’s configuration into /etc/k0s/containerd.d/nvidia.toml:

```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia]
  privileged_without_host_devices = false
  runtime_engine = ""
  runtime_root = ""
  runtime_type = "io.containerd.runc.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options]
  BinaryName = "/usr/bin/nvidia-container-runtime"
```

Create the needed RuntimeClass:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia
handler: nvidia
EOF
```
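With the RuntimeClass in place, a workload can opt into the NVIDIA runtime via `runtimeClassName`. A minimal sketch (the image tag is illustrative, and the `nvidia.com/gpu` resource request assumes the NVIDIA device plugin is installed, which is not covered here):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  runtimeClassName: nvidia
  containers:
  - name: cuda
    image: nvidia/cuda:12.4.1-base-ubuntu22.04  # illustrative tag
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
```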

Note: Detailed instructions on how to run nvidia-container-runtime on your node are available in NVIDIA’s documentation.

Using custom CRI runtimes

Warning: You can use your own CRI runtime with k0s (for example, docker). However, k0s will not start or manage the runtime, and configuration is solely your responsibility.

Use the --cri-socket option to run a k0s worker with a custom CRI runtime. The option takes input in the form of <type>:<url> (the only supported type is remote).
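For example, a worker could be pointed at a CRI-O socket as follows (the socket path is illustrative and depends on your runtime’s installation):

```shell
k0s worker --cri-socket=remote:unix:///var/run/crio/crio.sock "$token"
```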

Using Docker as the container runtime

As of Kubernetes 1.24, the use of Docker as a container runtime is no longer supported out of the box. However, Mirantis provides cri-dockerd, a shim that allows Docker to be controlled via CRI. It’s based on the dockershim that was previously part of upstream Kubernetes.

Configuration

In order to use Docker as the container runtime for k0s, the following steps need to be taken:

  1. Manually install required components.
    On each k0s worker and k0s controller --enable-worker node, both Docker Engine and cri-dockerd need to be installed manually. Follow the official Docker Engine installation guide and cri-dockerd installation instructions.

  2. Configure and restart affected k0s nodes.
    Once installations are complete, the nodes need to be restarted with the --cri-socket flag pointing to cri-dockerd’s socket, which is typically located at /var/run/cri-dockerd.sock. For instance, the commands to start a node would be as follows:

    ```shell
    k0s worker --cri-socket=remote:unix:///var/run/cri-dockerd.sock
    ```

    or, respectively

    ```shell
    k0s controller --enable-worker --cri-socket=remote:unix:///var/run/cri-dockerd.sock
    ```

    When running k0s as a service, consider reinstalling the service with the appropriate flags:

    ```shell
    sudo k0s install --force worker --cri-socket=remote:unix:///var/run/cri-dockerd.sock
    ```

    or, respectively

    ```shell
    sudo k0s install --force controller --enable-worker --cri-socket=remote:unix:///var/run/cri-dockerd.sock
    ```

In scenarios where Docker is managed via systemd, it is crucial that the cgroupDriver: systemd setting is included in the Kubelet configuration. It can be added to the workerProfiles section of the k0s configuration. An example of how the k0s configuration might look:

```yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  workerProfiles:
    - name: systemd-docker-cri
      values:
        cgroupDriver: systemd
```

Note that this is a cluster-wide configuration setting that must be added to the k0s controller’s configuration rather than directly to the workers, or to the cluster configuration if using dynamic configuration. See the worker profiles section of the documentation for more details.

When starting workers, both the --profile=systemd-docker-cri and --cri-socket flags are required. The profile name, such as systemd-docker-cri, is flexible. Alternatively, this setting can be applied to the default profile, which will apply to all nodes started without a specific profile. In this case, the --profile flag is not needed.
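For instance, with the example profile above, a worker would be started with both flags (token handling as in the earlier examples):

```shell
k0s worker --profile=systemd-docker-cri \
  --cri-socket=remote:unix:///var/run/cri-dockerd.sock $token
```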

Please note that there are currently some pitfalls around container metrics when using cri-dockerd.

Verification

The successful configuration can be verified by executing the following command:

```console
$ kubectl get nodes -o wide
NAME              STATUS   ROLES    AGE   VERSION       INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
docker-worker-0   Ready    <none>   15m   v1.30.3+k0s   172.27.77.155   <none>        Ubuntu 22.04.3 LTS   5.15.0-82-generic   docker://24.0.7
```

On the worker nodes, the Kubernetes containers should be listed as regular Docker containers:

```console
$ docker ps --format "table {{.ID}}\t{{.Names}}\t{{.State}}\t{{.Status}}"
CONTAINER ID   NAMES                                                                                                STATE     STATUS
9167a937af28   k8s_konnectivity-agent_konnectivity-agent-9rnj7_kube-system_430027b4-75c3-487c-b94d-efeb7204616d_1   running   Up 14 minutes
b6978162a05d   k8s_metrics-server_metrics-server-7556957bb7-wfg8k_kube-system_5f642105-78c8-450a-bfd2-2021b680b932_1   running   Up 14 minutes
d576abe86c92   k8s_coredns_coredns-85df575cdb-vmdq5_kube-system_6f26626e-d241-4f15-889a-bcae20d04e2c_1              running   Up 14 minutes
8f268b180c59   k8s_kube-proxy_kube-proxy-2x6jz_kube-system_34a7a8ba-e15d-4968-8a02-f5c0cb3c8361_1                   running   Up 14 minutes
ed0a665ec28e   k8s_POD_konnectivity-agent-9rnj7_kube-system_430027b4-75c3-487c-b94d-efeb7204616d_0                  running   Up 14 minutes
a9861a7beab5   k8s_POD_metrics-server-7556957bb7-wfg8k_kube-system_5f642105-78c8-450a-bfd2-2021b680b932_0           running   Up 14 minutes
898befa4840e   k8s_POD_kube-router-fftkt_kube-system_940ad783-055e-4fce-8ce1-093ca01625b9_0                         running   Up 14 minutes
e80dabc23ce7   k8s_POD_kube-proxy-2x6jz_kube-system_34a7a8ba-e15d-4968-8a02-f5c0cb3c8361_0                          running   Up 14 minutes
430a784b1bdd   k8s_POD_coredns-85df575cdb-vmdq5_kube-system_6f26626e-d241-4f15-889a-bcae20d04e2c_0                  running   Up 14 minutes
```