Private Registries

This guide discusses how to use kind with image registries that require authentication.

There are multiple ways to do this, which we try to cover here.

Use ImagePullSecrets

Kubernetes supports configuring pods to use imagePullSecrets for pulling images. If possible, this is the preferred and most portable route.

See the upstream Kubernetes docs for this; kind does not require any special handling to use it.
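For example, a pull secret can be created with kubectl and then referenced from a pod spec. A minimal sketch (the registry URL, credentials, secret name, and image below are placeholder values, not part of any real setup):

```shell
# create a kubernetes.io/dockerconfigjson secret from explicit credentials
# (server, username, password, and email are example values)
kubectl create secret docker-registry my-registry-secret \
  --docker-server=registry.example.com \
  --docker-username=my-user \
  --docker-password=my-password \
  --docker-email=me@example.com

# reference the secret from a pod via imagePullSecrets
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:latest
  imagePullSecrets:
  - name: my-registry-secret
EOF
```

Because the secret travels with the pod spec, this works identically on kind and on any other cluster.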

If you already have the config file locally but would still like to use secrets, read through kubernetes’ docs for creating a secret from a file.
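In that case the existing file can be turned into a secret directly. A sketch, assuming your docker client config lives at the default path and using `regcred` as an example secret name:

```shell
# create a kubernetes.io/dockerconfigjson secret from an existing
# docker client config file; the key must be named .dockerconfigjson
kubectl create secret generic regcred \
  --from-file=.dockerconfigjson="$HOME/.docker/config.json" \
  --type=kubernetes.io/dockerconfigjson
```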

Pull to the Host and Side-Load

kind can load an image from the host with the kind load ... commands. If you configure your host with credentials to pull the desired image(s) and then load them into the nodes, you can avoid needing to authenticate on the nodes.
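A minimal sketch of this flow (the registry, image name, and cluster name are example values):

```shell
# authenticate and pull on the host, where credentials are configured
docker login registry.example.com
docker pull registry.example.com/my-app:1.0.0

# side-load the image into every node of the named kind cluster;
# the nodes never need registry credentials for this image
kind load docker-image registry.example.com/my-app:1.0.0 --name kind
```

Note that once loaded this way, the image is only as fresh as the last `kind load`; pods using `imagePullPolicy: Always` will still try to reach the registry.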

Add Credentials to the Nodes

Generally the upstream docs for using a private registry apply; with kind there are two options for this.

Mount a Config File to Each Node

If you pre-create a docker config.json containing the credential(s) on the host, you can mount it into each kind node.

Assuming your file is at /path/to/my/secret.json, the kind config would be:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - containerPath: /var/lib/kubelet/config.json
    hostPath: /path/to/my/secret.json
```

Use an Access Token

A credential can be programmatically added to the nodes at runtime.

If you do this, kubelet must be restarted on each node to pick up the new credentials.

An example shell snippet for generating a gcr.io cred file on your host machine using Access Tokens:

examples/kind-gcr.sh

```sh
#!/bin/sh
set -o errexit

# desired cluster name; default is "kind"
KIND_CLUSTER_NAME="${KIND_CLUSTER_NAME:-kind}"

# create a temp file for the docker config
echo "Creating temporary docker client config directory ..."
DOCKER_CONFIG=$(mktemp -d)
export DOCKER_CONFIG
trap 'echo "Removing ${DOCKER_CONFIG}/*" && rm -rf ${DOCKER_CONFIG:?}' EXIT

echo "Creating a temporary config.json"
# This is to force the omission of credsStore, which is automatically
# created on supported system. With credsStore missing, "docker login"
# will store the password in the config.json file.
# https://docs.docker.com/engine/reference/commandline/login/#credentials-store
cat <<EOF >"${DOCKER_CONFIG}/config.json"
{
  "auths": { "gcr.io": {} }
}
EOF

# login to gcr in DOCKER_CONFIG using an access token
# https://cloud.google.com/container-registry/docs/advanced-authentication#access_token
echo "Logging in to GCR in temporary docker client config directory ..."
gcloud auth print-access-token | \
  docker login -u oauth2accesstoken --password-stdin https://gcr.io

# setup credentials on each node
echo "Moving credentials to kind cluster name='${KIND_CLUSTER_NAME}' nodes ..."
for node in $(kind get nodes --name "${KIND_CLUSTER_NAME}"); do
  # the -oname format is kind/name (so node/name) we just want name
  node_name=${node#node/}
  # copy the config to where kubelet will look
  docker cp "${DOCKER_CONFIG}/config.json" "${node_name}:/var/lib/kubelet/config.json"
  # restart kubelet to pick up the config
  docker exec "${node_name}" systemctl restart kubelet.service
done

echo "Done!"
```

Use a Service Account

Access tokens are short lived, so you may prefer to use a Service Account and keyfile instead. First, either download the key from the console or generate one with gcloud:

```sh
gcloud iam service-accounts keys create <output.json> --iam-account <account email>
```

Then, replace the gcloud auth print-access-token | ... line from the access token snippet with:

```sh
cat <output.json> | docker login -u _json_key --password-stdin https://gcr.io
```

See Google’s upstream docs on key file authentication for more details.

Use a Certificate

If your registry authenticates clients with certificates, and the certificates and keys reside in a folder on your host, you can mount them into the nodes and patch containerd's default configuration to use them, as in this example:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # This option mounts the host docker registry folder into
  # the control-plane node, allowing containerd to access them.
  extraMounts:
  - containerPath: /etc/docker/certs.d/registry.dev.example.com
    hostPath: /etc/docker/certs.d/registry.dev.example.com
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.dev.example.com".tls]
    cert_file = "/etc/docker/certs.d/registry.dev.example.com/ba_client.cert"
    key_file  = "/etc/docker/certs.d/registry.dev.example.com/ba_client.key"
```