Access and operations

Terraform module: service/kubernetes

As soon as the cluster is running, we want to be able to access the Kubernetes API remotely. This can be done by copying /etc/kubernetes/admin.conf from kube1 to your own machine. After installing kubectl locally, execute the following commands:

  # create local config folder
  mkdir -p ~/.kube
  # backup old config if required
  [ -f ~/.kube/config ] && cp ~/.kube/config ~/.kube/config.backup
  # copy config from master node
  scp root@<PUBLIC_IP_KUBE1>:/etc/kubernetes/admin.conf ~/.kube/config
  # change config to use correct IP address
  kubectl config set-cluster kubernetes --server=https://<PUBLIC_IP_KUBE1>:6443
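To see what the last step actually does: `kubectl config set-cluster` rewrites the `server` field of the named cluster entry in the kubeconfig. The sketch below demonstrates the same change on a throwaway config file with made-up IP addresses (`10.0.1.1`, `203.0.113.10`), using `sed` so it runs without kubectl or a live cluster:

```shell
# build a throwaway kubeconfig resembling admin.conf (hypothetical private IP)
mkdir -p /tmp/kube-demo
cat > /tmp/kube-demo/config <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://10.0.1.1:6443
  name: kubernetes
EOF

# kubectl config set-cluster kubernetes --server=... rewrites this field;
# sed shows the equivalent edit (203.0.113.10 stands in for the public IP)
sed -i 's|server: https://10.0.1.1:6443|server: https://203.0.113.10:6443|' /tmp/kube-demo/config

grep 'server:' /tmp/kube-demo/config
```

After the rewrite, kubectl talks to the master's public address instead of the private one baked into admin.conf.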

You’re now able to remotely access the Kubernetes API. Running kubectl get nodes should show a list of nodes similar to this:

  NAME      STATUS    AGE       VERSION
  kube1     Ready     1h        v1.9.1
  kube2     Ready     1h        v1.9.1
  kube3     Ready     1h        v1.9.1

Role-Based Access Control

As of version 1.6, kubeadm configures Kubernetes with RBAC enabled. Because our hobby cluster is typically operated by trusted people, we can grant permissive RBAC permissions so that any kind of service can be deployed using any kind of resource. If you're in doubt whether this is secure enough for your use case, please refer to the official RBAC documentation.

  kubectl create clusterrolebinding permissive-binding \
    --clusterrole=cluster-admin \
    --user=admin \
    --user=kubelet \
    --group=system:serviceaccounts
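For reference, the same binding can also be expressed declaratively. The manifest below is a sketch of what the command above creates: a ClusterRoleBinding that grants the cluster-admin role to the listed users and to all service accounts.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: permissive-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: admin
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
```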

Deploying services

Services can now be deployed remotely by calling kubectl apply -f <FILE>. It’s also possible to apply multiple files by pointing to a folder, for example:

  $ ls dashboard/
  deployment.yml  service.yml
  $ kubectl apply -f dashboard/
  deployment "kubernetes-dashboard" created
  service "kubernetes-dashboard" created
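To give an idea of what such a folder contains, here is a hypothetical minimal service.yml (the selector label and ports are made up for illustration; the real dashboard manifests are published by the Kubernetes project):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  selector:
    app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
```

Each file in the folder is applied independently, so related resources such as a Deployment and its Service can be kept side by side and deployed with a single command.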

This guide does not cover deploying services in further detail. Please refer to the official documentation on kubernetes.io.