Installing Kubernetes

Terraform service/kubernetes

There are plenty of ways to set up a Kubernetes cluster from scratch. At this point, however, we settle on kubeadm, which dramatically simplifies the setup process by automating the creation of certificates, services and configuration files.

Before getting started with Kubernetes itself, we need to take care of setting up two essential services that are not part of the actual stack, namely Docker and etcd.

Docker setup

Docker is directly available from the package registries of most Linux distributions. Hints regarding supported versions can be found in the official kubeadm guide. Simply use your preferred way of installing it. Running apt-get install docker.io on Ubuntu will install a stable version, although not the most recent one, which is perfectly fine in our case.
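
For reference, a minimal installation on Ubuntu could look like the sketch below, assuming the docker.io package mentioned above:

  apt-get update
  apt-get install -y docker.io
  systemctl enable docker.service   # make sure Docker starts on boot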

Kubernetes recommends running Docker with iptables and IP masquerading disabled. The easiest way to achieve this is by creating a systemd drop-in file to set the required configuration flags:

  # /etc/systemd/system/docker.service.d/10-docker-opts.conf
  [Service]
  Environment="DOCKER_OPTS=--iptables=false --ip-masq=false"

If this file was placed after Docker was installed, make sure to reload systemd and restart the service so that the new options take effect.
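
A minimal sequence for this, assuming nothing but a standard systemd setup:

  systemctl daemon-reload    # pick up the new drop-in file
  systemctl restart docker   # restart Docker with the adjusted options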

Etcd setup

Terraform service/etcd

etcd is a highly available key-value store which Kubernetes uses for persistent storage of all of its REST API objects. It is therefore a crucial part of the cluster. kubeadm would normally install etcd on a single node. Depending on the number of hosts available, it would be unwise not to run etcd in cluster mode. As mentioned earlier, it makes sense to run a cluster of at least three nodes, since etcd only becomes fault tolerant at that size: with three members, the cluster keeps its quorum even if a single node fails.

Even though etcd is generally available from most package managers, it’s recommended to manually install a more recent version:

  export ETCD_VERSION="v3.2.13"
  mkdir -p /opt/etcd
  curl -L https://storage.googleapis.com/etcd/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz \
    -o /opt/etcd-${ETCD_VERSION}-linux-amd64.tar.gz
  tar xzvf /opt/etcd-${ETCD_VERSION}-linux-amd64.tar.gz -C /opt/etcd --strip-components=1
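
A quick sanity check that the binary was unpacked correctly:

  /opt/etcd/etcd --version   # should report version 3.2.13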

In an insecure environment, configuring etcd typically involves creating and distributing certificates across the nodes, whereas running it within a secure network makes this process a whole lot easier: there’s simply no need for additional security layers as long as the service is bound to an end-to-end secured VPN interface.

This section is not going to explain etcd configuration in depth; refer to the official documentation instead. All that needs to be done is to create a systemd unit file on each host. Assuming a three-node cluster, the configuration for kube1 would look like this:

  # /etc/systemd/system/etcd.service
  [Unit]
  Description=etcd
  After=network.target wg-quick@wg0.service

  [Service]
  Type=notify
  ExecStart=/opt/etcd/etcd --name kube1 \
    --data-dir /var/lib/etcd \
    --listen-client-urls "http://10.0.1.1:2379,http://localhost:2379" \
    --advertise-client-urls "http://10.0.1.1:2379" \
    --listen-peer-urls "http://10.0.1.1:2380" \
    --initial-cluster "kube1=http://10.0.1.1:2380,kube2=http://10.0.1.2:2380,kube3=http://10.0.1.3:2380" \
    --initial-advertise-peer-urls "http://10.0.1.1:2380" \
    --heartbeat-interval 200 \
    --election-timeout 5000
  Restart=always
  RestartSec=5
  TimeoutStartSec=0
  StartLimitInterval=0

  [Install]
  WantedBy=multi-user.target
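
The unit files for kube2 and kube3 are identical except for the node-specific values. As a sketch, the ExecStart line on kube2 would change like this (kube3 follows the same pattern with 10.0.1.3):

  # etcd.service on kube2: only --name and the node's own addresses change
  ExecStart=/opt/etcd/etcd --name kube2 \
    --data-dir /var/lib/etcd \
    --listen-client-urls "http://10.0.1.2:2379,http://localhost:2379" \
    --advertise-client-urls "http://10.0.1.2:2379" \
    --listen-peer-urls "http://10.0.1.2:2380" \
    --initial-cluster "kube1=http://10.0.1.1:2380,kube2=http://10.0.1.2:2380,kube3=http://10.0.1.3:2380" \
    --initial-advertise-peer-urls "http://10.0.1.2:2380" \
    --heartbeat-interval 200 \
    --election-timeout 5000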

It’s important to understand that each flag starting with --initial only applies during the first bootstrap of the cluster. This means, for example, that it’s possible to add and remove cluster members at any time without ever changing the value of --initial-cluster.
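
As a rough sketch of what such a runtime change looks like, using the v2 etcdctl API that the commands in this guide rely on (kube4 and 10.0.1.4 are purely hypothetical here):

  # add a new member (run on any existing member)
  /opt/etcd/etcdctl member add kube4 http://10.0.1.4:2380
  # remove a member by the ID shown in "member list"
  /opt/etcd/etcdctl member remove <MEMBER_ID>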

After the files have been placed on each host, it’s time to start the etcd cluster:

  systemctl enable etcd.service   # launch etcd during system boot
  systemctl start etcd.service

Executing /opt/etcd/etcdctl member list should show a list of cluster members. If something went wrong, check the logs using journalctl -u etcd.service.
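
Optionally, the overall state of the cluster can be verified as well; the following uses the v2 etcdctl bundled with the release installed above:

  /opt/etcd/etcdctl cluster-health   # should report that the cluster is healthy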

Kubernetes setup

Now that Docker is configured and etcd is running, it’s time to deploy Kubernetes. The first step is to install the required packages on each host:

  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
  cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
  deb http://apt.kubernetes.io/ kubernetes-xenial-unstable main
  EOF
  apt-get update
  apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Initializing the master node

Before initializing the master node, we need to create a manifest on kube1 which will then be used as configuration in the next step:

  # /tmp/master-configuration.yml
  apiVersion: kubeadm.k8s.io/v1alpha1
  kind: MasterConfiguration
  api:
    advertiseAddress: 10.0.1.1
  etcd:
    endpoints:
    - http://10.0.1.1:2379
    - http://10.0.1.2:2379
    - http://10.0.1.3:2379
  apiServerCertSANs:
    - <PUBLIC_IP_KUBE1>

Then we run the following command on kube1:

  kubeadm init --config /tmp/master-configuration.yml

After the setup is complete, kubeadm prints a token such as 818d5a.8b50eb5477ba4f40. It’s important to write it down; we’ll need it in a minute to join the other cluster nodes.
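
Should the token ever get lost, it can be looked up or recreated on the master later on; kubeadm provides subcommands for this (shown here as a sketch):

  kubeadm token list     # show existing bootstrap tokens
  kubeadm token create   # generate a new token if needed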

Kubernetes is built around openness, so it’s up to us to choose and install a suitable pod network. This is required as it enables pods running on different nodes to communicate with each other. One of the many options is Weave Net. It requires zero configuration and is considered stable and well-maintained:

  # create symlink for the current user in order to gain access to the API server with kubectl
  [ -d $HOME/.kube ] || mkdir -p $HOME/.kube
  ln -s /etc/kubernetes/admin.conf $HOME/.kube/config
  # install Weave Net
  kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
  # allow traffic on the newly created weave network interface
  ufw allow in on weave
  ufw reload
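
It may take a moment for the network to come up. A simple way to verify this, assuming kubectl is configured as above, is to watch the pods in the kube-system namespace until the weave-net pods report a Running status:

  kubectl get pods -n kube-system   # the weave-net pods should eventually be Running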

Unfortunately, Weave Net will not readily work with our current cluster configuration because traffic will be routed via the wrong network interface. This can be fixed by running the following command on each host:

  ip route add 10.96.0.0/16 dev $VPN_INTERFACE src $VPN_IP
  # on kube1:
  ip route add 10.96.0.0/16 dev wg0 src 10.0.1.1
  # on kube2:
  ip route add 10.96.0.0/16 dev wg0 src 10.0.1.2
  # on kube3:
  ip route add 10.96.0.0/16 dev wg0 src 10.0.1.3

The added route is not persistent and will therefore not survive a reboot. To ensure it gets added again after a reboot, we have to add a systemd service unit on each node which waits for the WireGuard interface to come up and then adds the route. For kube1 it would look like this:

  # /etc/systemd/system/overlay-route.service
  [Unit]
  Description=Overlay network route for WireGuard
  After=wg-quick@wg0.service

  [Service]
  Type=oneshot
  User=root
  ExecStart=/sbin/ip route add 10.96.0.0/16 dev wg0 src 10.0.1.1

  [Install]
  WantedBy=multi-user.target

After that, we have to enable it by running the following command:

  systemctl enable overlay-route.service
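
To double-check that the route is in place, whether added manually or by the unit after a reboot, something along these lines will do:

  ip route | grep 10.96.0.0   # the route should show up on dev wg0 with the node's VPN IP as src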

Joining the cluster nodes

All that’s left is to have the other nodes join the cluster. Run the following command on each remaining host:

  kubeadm join --token=<TOKEN> 10.0.1.1:6443 --discovery-token-unsafe-skip-ca-verification

That’s it. A Kubernetes cluster is at our disposal.
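
As a final sanity check, run the following on kube1, where kubectl is already configured. All three nodes should show up and eventually report a Ready status once the pod network is running on each of them:

  kubectl get nodes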