# Private Registries

Some users may want to test applications on kind that require pulling images from authenticated private registries. There are multiple ways to do this.

## Use ImagePullSecrets

Kubernetes supports configuring pods to use `imagePullSecrets` for pulling images. If possible, this is the preferred and most portable route.

See the upstream Kubernetes docs for this; kind does not require any special handling to use it.

If you already have the config file locally but would still like to use secrets, read through kubernetes’ docs for creating a secret from a file.
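As a rough sketch, assuming you have already created a secret from your credentials (for example with `kubectl create secret docker-registry`), a pod can reference it via `imagePullSecrets`. The secret name `my-registry-cred` and the image name below are placeholders, not values from this guide:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    # placeholder image hosted in an authenticated private registry
    image: registry.example.com/my-project/my-app:1.0
  imagePullSecrets:
  # placeholder name of a kubernetes.io/dockerconfigjson secret
  # in the same namespace as the pod
  - name: my-registry-cred
```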

## Pull to the Host and Side-Load

kind can load an image from the host with the `kind load ...` commands. If you configure your host with credentials to pull the desired image(s) and then load them to the nodes, you can avoid needing to authenticate on the nodes.
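For example, assuming your host is already logged in to the registry and using a placeholder image name, the flow looks like:

```sh
# pull the image on the host, using the host's credentials
docker pull gcr.io/my-project/my-image:tag

# side-load the pulled image into the nodes of the cluster
# (uses the default cluster name "kind"; pass --name otherwise)
kind load docker-image gcr.io/my-project/my-image:tag
```

Pods can then use the image without any registry authentication on the nodes, provided their `imagePullPolicy` does not force a re-pull (e.g. `IfNotPresent`).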

## Add Credentials to the Nodes

Generally the upstream docs for using a private registry apply; with kind there are two options for this.

### Mount a Config File to Each Node

If you pre-create a docker `config.json` containing credential(s) on the host, you can mount it to each kind node.

Assuming your file is at /path/to/my/secret.json, the kind config would be:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - containerPath: /var/lib/kubelet/config.json
    hostPath: /path/to/my/secret.json
```

### Use an Access Token

A credential can be programmatically added to the nodes at runtime. If you do this, then kubelet must be restarted on each node to pick up the new credentials.

An example shell snippet for generating a gcr.io cred file on your host machine using access tokens:

```sh
#!/bin/sh
set -o errexit

# desired cluster name; default is "kind"
KIND_CLUSTER_NAME="${KIND_CLUSTER_NAME:-kind}"

# create a temp file for the docker config
echo "Creating temporary docker client config directory ..."
DOCKER_CONFIG=$(mktemp -d)
export DOCKER_CONFIG
trap 'echo "Removing ${DOCKER_CONFIG}/*" && rm -rf ${DOCKER_CONFIG:?}' EXIT

echo "Creating a temporary config.json"
# This is to force the omission of credsStore, which is automatically
# created on supported systems. With credsStore missing, "docker login"
# will store the password in the config.json file.
# https://docs.docker.com/engine/reference/commandline/login/#credentials-store
cat <<EOF >"${DOCKER_CONFIG}/config.json"
{
  "auths": { "gcr.io": {} }
}
EOF

# login to gcr in DOCKER_CONFIG using an access token
# https://cloud.google.com/container-registry/docs/advanced-authentication#access_token
echo "Logging in to GCR in temporary docker client config directory ..."
gcloud auth print-access-token | \
  docker login -u oauth2accesstoken --password-stdin https://gcr.io

# setup credentials on each node
echo "Moving credentials to kind cluster name='${KIND_CLUSTER_NAME}' nodes ..."
for node in $(kind get nodes --name "${KIND_CLUSTER_NAME}"); do
  # the -oname format is kind/name (so node/name) we just want name
  node_name=${node#node/}
  # copy the config to where kubelet will look
  docker cp "${DOCKER_CONFIG}/config.json" "${node_name}:/var/lib/kubelet/config.json"
  # restart kubelet to pick up the config
  docker exec "${node_name}" systemctl restart kubelet.service
done
echo "Done!"
```
### Use a Service Account

Access tokens are short lived, so you may prefer to use a Service Account and keyfile instead. First, either download the key from the console or generate one with gcloud:

```sh
gcloud iam service-accounts keys create <output.json> --iam-account <account email>
```

Then, replace the `gcloud auth print-access-token | ...` line from the access token snippet with:

```sh
cat <output.json> | docker login -u _json_key --password-stdin https://gcr.io
```

See Google's upstream docs on key file authentication for more details.