Private Registries

This guide discusses how to use kind with image registries that require authentication.

There are multiple ways to do this, which we try to cover here.

Use ImagePullSecrets

Kubernetes supports configuring pods to use imagePullSecrets for pulling images. If possible, this is the preferred and most portable route.

See the upstream Kubernetes docs for this; kind does not require any special handling to use it.

If you already have the config file locally but would still like to use secrets, read through the Kubernetes docs for creating a secret from a file.
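
As a minimal sketch (the secret name my-pull-secret, the registry registry.example.com, the image, and the credentials below are placeholders), the secret can be created either directly from credentials or from an existing docker config file, and then referenced from pods:

# create a pull secret directly from credentials
kubectl create secret docker-registry my-pull-secret \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password>

# or create the same secret from an existing docker config file
kubectl create secret generic my-pull-secret \
  --type=kubernetes.io/dockerconfigjson \
  --from-file=.dockerconfigjson=/path/to/my/secret.json

# pods reference the secret via spec.imagePullSecrets; alternatively,
# attach it to a service account so its pods pick it up automatically
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "my-pull-secret"}]}'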

Pull to the Host and Side-Load

kind can load an image from the host with the kind load ... commands. If you configure your host with credentials to pull the desired image(s) and then load them to the nodes, you can avoid needing to authenticate on the nodes.
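
For example (the image name and cluster name below are placeholders):

# pull the image using the credentials configured on the host
docker pull registry.example.com/my-app:1.0.0
# side-load the image onto the nodes of the cluster named "kind"
kind load docker-image registry.example.com/my-app:1.0.0 --name kind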

Add Credentials to the Nodes

Generally the upstream docs for using a private registry apply; with kind there are two options for this.

Mount a Config File to Each Node

If you pre-create a docker config.json containing credential(s) on the host, you can mount it into each kind node.

Assuming your file is at /path/to/my/secret.json, the kind config would be:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - containerPath: /var/lib/kubelet/config.json
    hostPath: /path/to/my/secret.json
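
Note that extraMounts is configured per node, so for a multi-node cluster the mount should be added to each node entry. Assuming the config above is saved as kind-config.yaml (a placeholder filename), the cluster can then be created with:

kind create cluster --config kind-config.yaml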

Use an Access Token

A credential can be programmatically added to the nodes at runtime.

If you do this, kubelet must be restarted on each node to pick up the new credentials.

An example shell snippet for generating a gcr.io credential file on your host machine using access tokens:

examples/kind-gcr.sh

#!/bin/sh
set -o errexit

# desired cluster name; default is "kind"
KIND_CLUSTER_NAME="${KIND_CLUSTER_NAME:-kind}"

# create a temp file for the docker config
echo "Creating temporary docker client config directory ..."
DOCKER_CONFIG=$(mktemp -d)
export DOCKER_CONFIG
trap 'echo "Removing ${DOCKER_CONFIG}/*" && rm -rf ${DOCKER_CONFIG:?}' EXIT

echo "Creating a temporary config.json"
# This is to force the omission of credsStore, which is automatically
# created on supported systems. With credsStore missing, "docker login"
# will store the password in the config.json file.
# https://docs.docker.com/engine/reference/commandline/login/#credentials-store
cat <<EOF >"${DOCKER_CONFIG}/config.json"
{
  "auths": { "gcr.io": {} }
}
EOF

# login to gcr in DOCKER_CONFIG using an access token
# https://cloud.google.com/container-registry/docs/advanced-authentication#access_token
echo "Logging in to GCR in temporary docker client config directory ..."
gcloud auth print-access-token | \
  docker login -u oauth2accesstoken --password-stdin https://gcr.io

# setup credentials on each node
echo "Moving credentials to kind cluster name='${KIND_CLUSTER_NAME}' nodes ..."
for node in $(kind get nodes --name "${KIND_CLUSTER_NAME}"); do
  # the -oname format is kind/name (so node/name) we just want name
  node_name=${node#node/}
  # copy the config to where kubelet will look
  docker cp "${DOCKER_CONFIG}/config.json" "${node_name}:/var/lib/kubelet/config.json"
  # restart kubelet to pick up the config
  docker exec "${node_name}" systemctl restart kubelet.service
done

echo "Done!"

Use a Service Account

Access tokens are short-lived, so you may prefer to use a Service Account and key file instead. First, either download the key from the console or generate one with gcloud:

gcloud iam service-accounts keys create <output.json> --iam-account <account email>

Then, replace the gcloud auth print-access-token | ... line from the access token snippet with:

cat <output.json> | docker login -u _json_key --password-stdin https://gcr.io

See Google’s upstream docs on key file authentication for more details.

Use a Certificate

If you have a registry that authenticates with certificates, and the certificates and keys reside in a folder on your host, you can mount that folder into the nodes and patch containerd's default configuration to use them, as in the following example:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # This option mounts the host docker registry folder into
  # the control-plane node, allowing containerd to access them.
  extraMounts:
  - containerPath: /etc/docker/certs.d/registry.dev.example.com
    hostPath: /etc/docker/certs.d/registry.dev.example.com
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.dev.example.com".tls]
    cert_file = "/etc/docker/certs.d/registry.dev.example.com/ba_client.cert"
    key_file = "/etc/docker/certs.d/registry.dev.example.com/ba_client.key"