Configuration

This guide covers how to configure KIND cluster creation.

We know this is currently a bit lacking and will expand it over time - PRs welcome!

Getting Started

To configure kind cluster creation, you will need to create a YAML config file. This file follows Kubernetes conventions for versioning etc.

A minimal valid config is:

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4

This config merely specifies that we are configuring a KIND cluster (kind: Cluster) and that the version of KIND’s config we are using is v1alpha4 (apiVersion: kind.x-k8s.io/v1alpha4).

Any given version of kind may support different config API versions, which will have different options and behavior. This is why we must always specify the version.

This mechanism is inspired by Kubernetes resources and component config.

To use this config, place the contents in a file config.yaml and then run kind create cluster --config=config.yaml from the same directory.

You can also include a full file path like kind create cluster --config=/foo/bar/config.yaml.
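
For example, a minimal end-to-end run might look like the following shell sketch; the heredoc simply writes the two-line config shown above before passing it to kind:

  # write the minimal config and create a cluster from it
  cat <<EOF > config.yaml
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  EOF
  kind create cluster --config=config.yaml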

The structure of the Cluster type is defined by a Go struct, which is described here.

A Note On CLI Parameters and Configuration Files

Unless otherwise noted, parameters passed to the CLI take precedence over their equivalents in a config file. For example, if you invoke:

  kind create cluster --name my-cluster

The name my-cluster will be used regardless of the presence of that value in your config file.
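
For instance, assuming a config.yaml that sets name: app-1-cluster (as in the example later in this guide), the CLI flag still wins; a sketch:

  # the config file sets "name: app-1-cluster", but the CLI flag takes
  # precedence, so the resulting cluster is named "my-cluster"
  kind create cluster --name my-cluster --config=config.yaml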

Cluster-Wide Options

The following high level options are available.

NOTE: not all options are documented yet! We will fix this with time, PRs welcome!

Name Your Cluster

You can give your cluster a name by specifying it in your config:

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  name: app-1-cluster
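
After creating a cluster from this config, you can confirm the name with kind get clusters; a sketch of the expected result:

  kind create cluster --config=config.yaml
  kind get clusters
  # app-1-cluster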

Feature Gates

Kubernetes feature gates can be enabled cluster-wide across all Kubernetes components with the following config:

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  featureGates:
    # any feature gate can be enabled here with "Name": true
    # or disabled here with "Name": false
    # not all feature gates are tested, however
    "CSIMigration": true

Runtime Config

Kubernetes API server runtime-config can be toggled using the runtimeConfig key, which maps to the --runtime-config kube-apiserver flag. This may be used to e.g. disable beta / alpha APIs.

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  runtimeConfig:
    "api/alpha": "false"

Networking

Multiple details of the cluster’s networking can be customized under the networking field.

IP Family

KIND has support for IPv4, IPv6, and dual-stack clusters. You can switch from the default of IPv4 by setting the ipFamily field under networking, as shown in the examples below.

IPv6 clusters

You can run IPv6 single-stack clusters using kind if the host that runs the docker containers supports IPv6. Most operating systems / distros have IPv6 enabled by default, but you can check on Linux with the following command:

  sudo sysctl net.ipv6.conf.all.disable_ipv6

You should see:

  net.ipv6.conf.all.disable_ipv6 = 0
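
If the value is 1, IPv6 is disabled on the host. Assuming a Linux host where you can change sysctls, you could enable it for the running system with something like the following sketch (check your distro's documentation for making this persistent):

  # 0 = IPv6 enabled; this change is not persistent across reboots
  sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0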

If you are using Docker on Windows or Mac, you will need to use an IPv4 port forward for the API Server from the host, because IPv6 port forwards don’t work on these platforms. You can do this with the following config:

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  networking:
    ipFamily: ipv6
    apiServerAddress: 127.0.0.1

On Linux all you need is:

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  networking:
    ipFamily: ipv6

Dual Stack clusters

You can run dual-stack clusters using kind v0.11+ on Kubernetes versions 1.20+.

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  networking:
    ipFamily: dual

API Server

The API Server listen address and port can be customized with:

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  networking:
    # WARNING: It is _strongly_ recommended that you keep this the default
    # (127.0.0.1) for security reasons. However it is possible to change this.
    apiServerAddress: "127.0.0.1"
    # By default the API server listens on a random open port.
    # You may choose a specific port but probably don't need to in most cases.
    # Using a random port makes it easier to spin up multiple clusters.
    apiServerPort: 6443
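
Once a cluster is created with this config, you can check which address and port the API server is reachable on. A sketch, assuming the default cluster name kind (kind names its kubeconfig context kind-<cluster name>):

  # the kubeconfig context for the default cluster name "kind" is "kind-kind"
  kubectl cluster-info --context kind-kind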

Security Goose Says:

NOTE: You should really think thrice before exposing your kind cluster publicly! kind does not ship with state of the art security or any update strategy (other than disposing your cluster and creating a new one)! We strongly discourage exposing kind to anything other than loopback.

Pod Subnet

You can configure the subnet used for pod IPs by setting

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  networking:
    podSubnet: "10.244.0.0/16"

By default, kind uses 10.244.0.0/16 pod subnet for IPv4 and fd00:10:244::/56 pod subnet for IPv6.

Service Subnet

You can configure the Kubernetes service subnet used for service IPs by setting

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  networking:
    serviceSubnet: "10.96.0.0/12"

By default, kind uses 10.96.0.0/16 service subnet for IPv4 and fd00:10:96::/112 service subnet for IPv6.

Disable Default CNI

KIND ships with a simple networking implementation (“kindnetd”) based around standard CNI plugins (ptp, host-local, …) and simple netlink routes.

This CNI also handles IP masquerade.

You may disable the default to install a different CNI. This is a power user feature with limited support, but many common CNI manifests are known to work, e.g. Calico.

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  networking:
    # the default CNI will not be installed
    disableDefaultCNI: true
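
After creating a cluster with disableDefaultCNI: true, the nodes stay NotReady until you install a CNI yourself. A sketch with a placeholder manifest URL; use the install manifest documented by your chosen CNI (e.g. Calico):

  kind create cluster --config=config.yaml
  # install your chosen CNI; the URL below is a placeholder, not a real manifest
  kubectl apply -f https://example.com/path/to/your-cni-manifest.yaml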

kube-proxy mode

You can configure the kube-proxy mode that will be used, between iptables, nftables (Kubernetes v1.31+), and ipvs. By default, iptables is used.

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  networking:
    kubeProxyMode: "nftables"

To disable kube-proxy, set the mode to "none".
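
You can check which mode ended up in the kube-proxy configuration after the cluster is up; a sketch, assuming the standard kube-proxy ConfigMap created by kubeadm in kube-system:

  # the kube-proxy configuration is stored in a ConfigMap in kube-system
  kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"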

Nodes

The kind: Cluster object has a nodes field containing a list of node objects. If unset this defaults to:

  nodes:
  # one node hosting a control plane
  - role: control-plane

You can create a multi node cluster with the following config:

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  # One control plane node and three "workers".
  #
  # While these will not add more real compute capacity and
  # have limited isolation, this can be useful for testing
  # rolling updates etc.
  #
  # The API-server and other control plane components will be
  # on the control-plane node.
  #
  # You probably don't need this unless you are testing Kubernetes itself.
  nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker
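
With the default cluster name kind, the resulting nodes show up roughly as follows (a sketch; node names are derived from the cluster name and exact columns vary by version):

  kind create cluster --config=config.yaml
  kubectl get nodes
  # NAME                 STATUS   ROLES           AGE   VERSION
  # kind-control-plane   Ready    control-plane   ...   ...
  # kind-worker          Ready    <none>          ...   ...
  # kind-worker2         Ready    <none>          ...   ...
  # kind-worker3         Ready    <none>          ...   ...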

Per-Node Options

The following options are available for setting on each entry in nodes.

NOTE: not all options are documented yet! We will fix this with time, PRs welcome!

Kubernetes Version

You can set a specific Kubernetes version by setting the node’s container image. You can find available image tags on the releases page. Please include the @sha256: image digest from the image in the release notes, as seen in this example:

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  nodes:
  - role: control-plane
    image: kindest/node:v1.16.4@sha256:b91a2c2317a000f3a783489dfb755064177dbc3a0b2f4147d50f04825d016f55
  - role: worker
    image: kindest/node:v1.16.4@sha256:b91a2c2317a000f3a783489dfb755064177dbc3a0b2f4147d50f04825d016f55

Note: Kubernetes versions are expressed as x.y.z, where x is the major version, y is the minor version, and z is the patch version, following Semantic Versioning terminology. For more information, see Kubernetes Release Versioning.

Extra Mounts

Extra mounts can be used to pass through storage on the host to a kind node for persisting data, mounting through code etc.

examples/config-with-mounts.yaml
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  nodes:
  - role: control-plane
    # add a mount from /path/to/my/files on the host to /files on the node
    extraMounts:
    - hostPath: /path/to/my/files
      containerPath: /files
    #
    # add an additional mount leveraging all of the config fields
    #
    # generally you only need the two fields above ...
    #
    - hostPath: /path/to/my/other-files/
      containerPath: /other-files
      # optional: if set, the mount is read-only.
      # default false
      readOnly: true
      # optional: if set, the mount needs SELinux relabeling.
      # default false
      selinuxRelabel: false
      # optional: set propagation mode (None, HostToContainer or Bidirectional)
      # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
      # default None
      #
      # WARNING: You very likely do not need this field.
      #
      # This field controls propagation of additional mounts created
      # at runtime underneath this mount.
      #
      # On MacOS with Docker Desktop, if the mount is from macOS and not the
      # docker desktop VM, you cannot use this field. You can use it for
      # mounts to the linux VM.
      propagation: None

NOTE: If you are using Docker for Mac or Windows, check that the hostPath is included in Preferences -> Resources -> File Sharing.

For more information see the Docker file sharing guide.
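
Once the cluster is up, you can check that the mount is visible from inside the node container; a sketch assuming the default cluster name kind, whose control-plane node container is named kind-control-plane:

  # the kind "node" is a container, so the mount can be inspected with docker exec
  docker exec kind-control-plane ls /files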

Extra Port Mappings

Extra port mappings can be used to port forward to the kind nodes. This is a cross-platform option to get traffic into your kind cluster.

If you are running Docker on Linux without the Docker Desktop Application, you can simply send traffic to the node IPs from the host without extra port mappings. If you are using the Docker Desktop Application, whether on macOS, Windows, or Linux, you’ll want to use extra port mappings.

You may also want to see the Ingress Guide.

examples/config-with-port-mapping.yaml
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  nodes:
  - role: control-plane
    # port forward 80 on the host to 80 on this node
    extraPortMappings:
    - containerPort: 80
      hostPort: 80
      # optional: set the bind address on the host
      # 0.0.0.0 is the current default
      listenAddress: 127.0.0.1
      # optional: set the protocol to one of TCP, UDP, SCTP.
      # TCP is the default
      protocol: TCP

An example http pod mapping host ports to a container port.

  kind: Pod
  apiVersion: v1
  metadata:
    name: foo
  spec:
    containers:
    - name: foo
      image: hashicorp/http-echo:0.2.3
      args:
      - "-text=foo"
      ports:
      - containerPort: 5678
        hostPort: 80

NodePort with Port Mappings

To use port mappings with NodePort, the kind node containerPort and the service nodePort need to be equal.

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  nodes:
  - role: control-plane
    extraPortMappings:
    - containerPort: 30950
      hostPort: 80

And then set nodePort to be 30950.

  kind: Pod
  apiVersion: v1
  metadata:
    name: foo
    labels:
      app: foo
  spec:
    containers:
    - name: foo
      image: hashicorp/http-echo:0.2.3
      args:
      - "-text=foo"
      ports:
      - containerPort: 5678
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: foo
  spec:
    type: NodePort
    ports:
    - name: http
      nodePort: 30950
      port: 5678
    selector:
      app: foo
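
With the port mapping above (host port 80 -> node port 30950) and the Service's nodePort set to 30950, traffic sent to the host's port 80 reaches the pod. A sketch, assuming the Pod and Service manifests above are saved as foo.yaml (a hypothetical filename):

  # foo.yaml is assumed to contain the Pod and Service manifests above
  kubectl apply -f foo.yaml
  curl localhost:80
  # foo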

Extra Labels

Extra labels might be useful for working with nodeSelectors.

An example config specifying a tier label:

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  nodes:
  - role: control-plane
  - role: worker
    extraPortMappings:
    - containerPort: 30950
      hostPort: 80
    labels:
      tier: frontend
  - role: worker
    labels:
      tier: backend
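
After the cluster is up, the labels are visible on the nodes and usable in nodeSelectors; a quick check (sketch):

  # list only nodes carrying the tier=frontend label
  kubectl get nodes -l tier=frontend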

Kubeadm Config Patches

KIND uses kubeadm to configure cluster nodes.

Formally, KIND runs kubeadm init on the first control-plane node. We can customize the flags it uses by patching the kubeadm InitConfiguration (spec):

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  nodes:
  - role: control-plane
    kubeadmConfigPatches:
    - |
      kind: InitConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "my-label=true"

If you want to do more customization, there are four configuration types available during kubeadm init: InitConfiguration, ClusterConfiguration, KubeProxyConfiguration, KubeletConfiguration. For example, we could override the apiserver flags by using the kubeadm ClusterConfiguration (spec):

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  nodes:
  - role: control-plane
    kubeadmConfigPatches:
    - |
      kind: ClusterConfiguration
      apiServer:
        extraArgs:
          enable-admission-plugins: NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook

On every additional node configured in the KIND cluster, worker or control-plane (in HA mode), KIND runs kubeadm join, which can be configured using the JoinConfiguration (spec):

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  nodes:
  - role: control-plane
  - role: worker
  - role: worker
    kubeadmConfigPatches:
    - |
      kind: JoinConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "my-label2=true"
  - role: control-plane
    kubeadmConfigPatches:
    - |
      kind: JoinConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "my-label3=true"

If you need more control over patching, strategic merge and RFC 6902 JSON patches can be used as well. These are specified using files in a directory; for example, ./patches/kube-controller-manager.yaml could be the following.

  apiVersion: v1
  kind: Pod
  metadata:
    name: kube-controller-manager
    namespace: kube-system
  spec:
    containers:
    - name: kube-controller-manager
      env:
      - name: KUBE_CACHE_MUTATION_DETECTOR
        value: "true"

Then in your kind YAML configuration use the following.

  nodes:
  - role: control-plane
    extraMounts:
    - hostPath: ./patches
      containerPath: /patches
    kubeadmConfigPatches:
    - |
      kind: InitConfiguration
      patches:
        directory: /patches

Note the extraMounts stanza. The node is a container created by kind. kubeadm is run inside this node container, and the local directory that contains the patches has to be accessible to kubeadm. extraMounts plumbs a local directory through to this node container.
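
After creating the cluster, you can confirm the patch was applied by inspecting the kube-controller-manager static pod; a sketch assuming the default cluster name kind (static pod names are suffixed with the node name):

  # look for the injected KUBE_CACHE_MUTATION_DETECTOR env var
  kubectl -n kube-system get pod kube-controller-manager-kind-control-plane -o yaml | grep -A1 KUBE_CACHE_MUTATION_DETECTOR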

This example was for changing the kube-controller-manager in the control plane. To use a patch for a worker node, use a JoinConfiguration patch and an extraMounts stanza for the worker role.