Known Issues
Having problems with kind? This guide covers some known problems and solutions / workarounds.
It may additionally be helpful to:
- check our issue tracker
- file an issue (if there isn't one already)
- reach out and ask for help in #kind on the kubernetes slack
Contents
- Kubectl Version Skew
- Older Docker Installations
- Docker on Btrfs or ZFS
- Docker Installed with Snap
- Failing to apply overlay network
- Failure to build node image
- Failing to properly start cluster
- Pod errors due to “too many open files”
- Docker permission denied
- Windows Containers
- Non-AMD64 Architectures
- Unable to pull images
- Chrome OS
- AppArmor
Kubectl Version Skew
You may have problems interacting with your kind cluster if your client(s) are skewed too far from the kind node version. Kubernetes only supports limited skew between clients and the API server.
This issue frequently occurs when running kind alongside Docker For Mac, and is related to a bug in Docker on macOS.
If you see something like the following error message:
$ kubectl edit deploy -n kube-system kubernetes-dashboard
error: SchemaError(io.k8s.api.autoscaling.v2beta1.ExternalMetricStatus): invalid object doesn't have additional properties
You can check your client and server versions by running:
kubectl version
If there is a mismatch between the server and client versions, you should install a newer client version.
If you are using macOS, you can install kubectl via Homebrew by running:
brew install kubernetes-cli
And overwrite the symlinks created by Docker For Mac by running:
brew link --overwrite kubernetes-cli
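Kubernetes supports only a narrow skew between kubectl and the API server (roughly one minor version in either direction). As a rough illustration, a check like the following compares the two minor versions; the version strings and the `skew_ok` helper are hypothetical examples, not part of kind:

```shell
#!/bin/sh
# Return success if the client and server minor versions differ by at most 1.
# Versions are plain "vMAJOR.MINOR.PATCH" strings (hypothetical examples).
skew_ok() {
  client_minor=$(echo "$1" | cut -d. -f2)
  server_minor=$(echo "$2" | cut -d. -f2)
  diff=$((client_minor - server_minor))
  [ "$diff" -ge -1 ] && [ "$diff" -le 1 ]
}

skew_ok "v1.14.2" "v1.13.4" && echo "skew OK"
skew_ok "v1.11.0" "v1.14.1" || echo "skew too large; upgrade kubectl"
```

Feed it the versions printed by `kubectl version` to see whether a client upgrade is needed.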
Older Docker Installations
kind is known to have issues with Kubernetes 1.13 or lower when using Docker versions:
- 1.13.1 (released January 2017)
- 17.05.0-ce (released May 2017)
And possibly other old versions of Docker.
With these versions you must use Kubernetes >= 1.14, or ideally upgrade Docker instead.
kind is tested with a recent stable docker-ce release.
Docker on Btrfs or ZFS
kind cannot run properly if the containers on your machine / host are backed by a Btrfs or ZFS filesystem.
This should only be relevant on Linux, on which you can check with:
docker info | grep -i storage
As a workaround, you'll need to ensure that the storage driver is one that works. Docker's default of overlay2 is a good choice, but aufs should also work.
You can set the storage driver with the following configuration in /etc/docker/daemon.json:
{
"storage-driver": "overlay2"
}
After restarting the Docker daemon you should see that Docker is now using the overlay2 storage driver instead of btrfs.
NOTE: You'll need to make sure the backing filesystem is not btrfs / ZFS as well, which may require creating a partition on your host disk with a suitable filesystem and ensuring Docker's data root is on it (by default /var/lib/docker). Ext4 is a reasonable choice.
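As a sketch of the workflow, you can assemble the daemon.json fragment and sanity-check that it is valid JSON before installing it (the temp-file staging here is just an illustration; the real file lives at /etc/docker/daemon.json):

```shell
#!/bin/sh
# Stage a minimal daemon.json and validate it before installing it.
tmp=$(mktemp)
printf '{\n  "storage-driver": "overlay2"\n}\n' > "$tmp"
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json is valid JSON"
# then install it and restart the daemon:
# sudo cp "$tmp" /etc/docker/daemon.json && sudo systemctl restart docker
```

Validating first avoids leaving the Docker daemon unable to start because of a JSON syntax error.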
Docker Installed with Snap
If you installed Docker with snap, it is likely that docker commands do not have access to $TMPDIR. This may break some kind commands which depend on using temp directories (kind build …).
Currently a workaround for this is setting the TEMPDIR environment variable to a directory snap does have access to when working with kind. This can, for example, be some directory under $HOME.
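A minimal sketch of that workaround ("$HOME/tmp" is an arbitrary choice of directory, not something kind requires):

```shell
#!/bin/sh
# Point TEMPDIR at a directory the docker snap can access before
# running kind commands that need temp space.
export TEMPDIR="$HOME/tmp"
mkdir -p "$TEMPDIR"
echo "using temp directory: $TEMPDIR"
# kind build node-image ...
```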
Failing to apply overlay network
There are two known causes for problems while applying the overlay network while building a kind cluster:
- Host machine is behind a proxy
- Usage of Docker version 18.09
If you see something like the following error message:
✗ [kind-1-control-plane] Starting Kubernetes (this may take a minute) ☸
FATA[07:20:43] Failed to create cluster: failed to apply overlay network: exit status 1
or the following, when setting the loglevel flag to debug:
DEBU[16:26:53] Running: /usr/bin/docker [docker exec --privileged kind-1-control-plane /bin/sh -c kubectl apply --kubeconfig=/etc/kubernetes/admin.conf -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version --kubeconfig=/etc/kubernetes/admin.conf | base64 | tr -d '\n')"]
ERRO[16:28:25] failed to apply overlay network: exit status 1 ) ☸
✗ [control-plane] Starting Kubernetes (this may take a minute) ☸
ERRO[16:28:25] failed to apply overlay network: exit status 1
DEBU[16:28:25] Running: /usr/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}} --filter label=io.k8s.sigs.kind.cluster=1]
DEBU[16:28:25] Running: /usr/bin/docker [docker rm -f -v kind-1-control-plane]
⠈⠁ [control-plane] Pre-loading images 🐋 Error: failed to create cluster: failed to apply overlay network: exit status 1
The issue may be due to your host machine being behind a proxy, such as in kind#136. We are currently looking into ways of mitigating this issue by preloading CNI artifacts, see kind#200. Another possible solution is to enable kind nodes to use a proxy when downloading images, see kind#270.
The last known case for this issue comes from the host machine using Docker 18.09, as in kind#136. In this case, a known solution is to upgrade to any Docker version greater than or equal to 18.09.1.
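If you are behind a proxy, one thing worth trying is exporting the standard proxy environment variables on the host before creating the cluster; whether this helps depends on how your kind version handles proxies (see the issues above), and the proxy address below is a placeholder:

```shell
#!/bin/sh
# Export standard proxy variables so host-side tooling honors the proxy.
# "proxy.example.com:3128" is a placeholder for your real proxy address.
export HTTP_PROXY="http://proxy.example.com:3128"
export HTTPS_PROXY="$HTTP_PROXY"
export NO_PROXY="localhost,127.0.0.1"
echo "proxy configured: $HTTP_PROXY"
# kind create cluster
```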
Failure to build node image
Building kind's node image may fail due to running out of memory on Docker for Mac or Docker for Windows. See kind#229.
If you see something like this:
cmd/kube-scheduler
cmd/kube-proxy
/usr/local/go/pkg/tool/linux_amd64/link: signal: killed
!!! [0116 08:30:53] Call tree:
!!! [0116 08:30:53] 1: /go/src/k8s.io/kubernetes/hack/lib/golang.sh:614 kube::golang::build_some_binaries(...)
!!! [0116 08:30:53] 2: /go/src/k8s.io/kubernetes/hack/lib/golang.sh:758 kube::golang::build_binaries_for_platform(...)
!!! [0116 08:30:53] 3: hack/make-rules/build.sh:27 kube::golang::build_binaries(...)
!!! [0116 08:30:53] Call tree:
!!! [0116 08:30:53] 1: hack/make-rules/build.sh:27 kube::golang::build_binaries(...)
!!! [0116 08:30:53] Call tree:
!!! [0116 08:30:53] 1: hack/make-rules/build.sh:27 kube::golang::build_binaries(...)
make: *** [all] Error 1
Makefile:92: recipe for target 'all' failed
!!! [0116 08:30:54] Call tree:
!!! [0116 08:30:54] 1: build/../build/common.sh:518 kube::build::run_build_command_ex(...)
!!! [0116 08:30:54] 2: build/release-images.sh:38 kube::build::run_build_command(...)
make: *** [quick-release-images] Error 1
ERRO[08:30:54] Failed to build Kubernetes: failed to build images: exit status 2
Error: error building node image: failed to build kubernetes: failed to build images: exit status 2
Usage:
kind build node-image [flags]
Flags:
--base-image string name:tag of the base image to use for the build (default "kindest/base:v20181203-d055041")
-h, --help help for node-image
--image string name:tag of the resulting image to be built (default "kindest/node:latest")
--kube-root string Path to the Kubernetes source directory (if empty, the path is autodetected)
--type string build type, one of [bazel, docker] (default "docker")
Global Flags:
--loglevel string logrus log level [panic, fatal, error, warning, info, debug] (default "warning")
error building node image: failed to build kubernetes: failed to build images: exit status 2
Then you may try increasing the resource limits for the Docker engine on Mac or Windows.
It is recommended that you allocate at least 8GB of RAM to build Kubernetes.
Open the Preferences menu, go to the Advanced settings page, and change the settings there; see changing Docker's resource limits.
Failing to properly start cluster
This issue is similar to a failure while building the node image. If the cluster creation process was successful but you are unable to see any Kubernetes resources running, for example:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c0261f7512fd kindest/node:v1.12.2 "/usr/local/bin/entr…" About a minute ago Up About a minute 0.0.0.0:64907->64907/tcp kind-1-control-plane
$ docker exec -it c0261f7512fd /bin/sh
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
#
or kubectl being unable to connect to the cluster,
$ kind export kubeconfig
$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: EOF
Then, as in kind#156, you may solve this issue by claiming back some space on your machine by removing unused data or images left by the Docker engine by running:
docker system prune
And / or:
docker image prune
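Since this failure is typically disk-pressure eviction, it can help to check free space before and after pruning; a quick sketch (the /var/lib/docker path assumes Docker's default data root):

```shell
#!/bin/sh
# Check free space where Docker stores its data (default /var/lib/docker);
# fall back to the root filesystem if that path is not present.
df -h /var/lib/docker 2>/dev/null || df -h /
```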
You can verify the issue by exporting the logs (kind export logs) and looking at the kubelet logs, which may have something like the following:
Dec 07 00:37:53 kind-1-control-plane kubelet[688]: I1207 00:37:53.229561 688 eviction_manager.go:340] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Dec 07 00:37:53 kind-1-control-plane kubelet[688]: E1207 00:37:53.229638 688 eviction_manager.go:351] eviction manager: eviction thresholds have been met, but no pods are active to evict
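Searching the exported logs for those eviction messages can be scripted; a sketch (the "$HOME/kind-logs" destination is an arbitrary choice):

```shell
#!/bin/sh
# Export the cluster logs, then scan them for kubelet eviction events.
LOGDIR="${LOGDIR:-$HOME/kind-logs}"
# kind export logs "$LOGDIR"
grep -r "eviction manager" "$LOGDIR" 2>/dev/null || echo "no eviction events found"
```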
If on the other hand you are running kind on a btrfs partition and your logs show something like the following:
F0103 17:42:41.470269 3804 kubelet.go:1359] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 67 in cached partitions map
This problem seems to be related to a bug in Docker.
Pod errors due to “too many open files”
This may be caused by running out of inotify resources. Resource limits are defined by the fs.inotify.max_user_watches and fs.inotify.max_user_instances system variables. For example, on Ubuntu these default to 8192 and 128 respectively, which is not enough to create a cluster with many nodes.
To increase these limits temporarily run the following commands on the host:
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
To make the changes persistent, edit /etc/sysctl.conf and add these lines:
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
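You can confirm the new values took effect by reading them back; on Linux the same settings are exposed under /proc/sys:

```shell
#!/bin/sh
# Read the current inotify limits directly from /proc.
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances
```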
Docker permission denied
When using kind, we assume that the user you are executing kind as has permission to use Docker. If you initially ran Docker CLI commands using sudo, you may see the following error, which indicates that your ~/.docker/ directory was created with incorrect permissions due to the sudo commands.
WARNING: Error loading config file: /home/user/.docker/config.json
open /home/user/.docker/config.json: permission denied
To fix this problem, either follow Docker's docs on managing Docker as a non-root user, or try using sudo before your commands (if you get command not found, please check this comment about sudo with kind).
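One common repair for the root-owned config directory is to reclaim ownership of it. The sketch below runs against a scratch directory so it is safe to execute as-is; for the real fix you would point it at ~/.docker, typically via `sudo chown -R "$USER" ~/.docker`:

```shell
#!/bin/sh
# Reclaim ownership of a Docker client config directory. DIR defaults to a
# scratch copy here; substitute "$HOME/.docker" (with sudo) for the real fix.
DIR="${DIR:-$(mktemp -d)/.docker}"
mkdir -p "$DIR"
chown -R "$(id -un)" "$DIR"
ls -ld "$DIR"
```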
Windows Containers
Docker Desktop for Windows supports running both Linux (the default) and Windows Docker containers.
kind for Windows requires Linux containers. To switch between Linux and Windows containers see this page.
Windows containers are unlike Linux containers: they do not support running Docker in Docker, and therefore cannot support kind.
Non-AMD64 Architectures
KIND does not currently ship pre-built images for non-amd64 architectures. In the future we may, but currently demand has been low and the cost to build has been high.
To use kind on other architectures, you need to first build a base image and then build a node image.
Run images/base/build.sh, then, taking note of the built image name, run kind build node-image --base-image=kindest/base:tag-i-built.
There are more details about how to do this in the Quick Start guide.
Unable to pull images
When using named KIND instances you may sometimes see your images failing to pull correctly on pods. This will usually manifest itself with the following output when doing a kubectl describe pod my-pod:
Failed to pull image "docker.io/my-custom-image:tag": rpc error: code = Unknown desc = failed to resolve image "docker.io/library/my-custom-image:tag": no available registry endpoint: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
If this image has been loaded onto your kind cluster using the command kind load docker-image my-custom-image, then you have likely not provided the name parameter.
Re-run the command, this time adding the --name my-cluster-name parameter:
kind load docker-image my-custom-image --name my-cluster-name
Chrome OS
Kubernetes does not work in the Chrome OS Linux sandbox.
Please see the upstream issue https://bugs.chromium.org/p/chromium/issues/detail?id=878034
For previous discussion see: https://github.com/kubernetes-sigs/kind/issues/763
AppArmor
If your host has AppArmor enabled you may run into moby/moby/issues/7512.
You will likely need to disable AppArmor on your host, or at least any profile(s) related to applications you are trying to run in KIND.
See previous discussion: kind#1179