Installing Kubernetes
There are plenty of ways to set up a Kubernetes cluster from scratch. At this point, however, we settle on kubeadm, which dramatically simplifies the setup process by automating the creation of certificates, services and configuration files.
Before getting started with Kubernetes itself, we need to take care of setting up two essential services that are not part of the actual stack, namely Docker and etcd.
Docker setup
Docker is directly available from the package repositories of most Linux distributions. Hints regarding supported versions are available in the official kubeadm guide. Simply use your preferred way of installation. Running apt-get install docker.io on Ubuntu will install a stable, although not the most recent, version, which is perfectly fine in our case.
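To confirm that the daemon came up properly after installation, a quick check:
# prints client and server version of the freshly installed Docker
docker version
systemctl is-active docker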
Kubernetes recommends running Docker with Iptables and IP Masq disabled. The easiest way to achieve this is by creating a systemd unit file to set the required configuration flags:
# /etc/systemd/system/docker.service.d/10-docker-opts.conf
Environment="DOCKER_OPTS=--iptables=false --ip-masq=false"
If this file was put in place after Docker was already installed, make sure to reload systemd and restart the service using systemctl daemon-reload && systemctl restart docker.
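To confirm that the flags were actually picked up, the running daemon's command line can be inspected (a quick sanity check, nothing more):
# the dockerd command line should now contain --iptables=false --ip-masq=false
ps -o args= -C dockerd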
Etcd setup
etcd is a highly available key-value store, which Kubernetes uses for persistent storage of all of its REST API objects. It is therefore a crucial part of the cluster. kubeadm would normally install etcd on a single node. Depending on the number of hosts available, it would be rather stupid not to run etcd in cluster mode. As mentioned earlier, it makes sense to run a cluster of at least three nodes, since that is the smallest size at which etcd becomes fault tolerant.
Even though etcd is generally available with most package managers, it’s recommended to manually install a more recent version:
export ETCD_VERSION="v3.2.13"
mkdir -p /opt/etcd
curl -L https://storage.googleapis.com/etcd/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz \
-o /opt/etcd-${ETCD_VERSION}-linux-amd64.tar.gz
tar xzvf /opt/etcd-${ETCD_VERSION}-linux-amd64.tar.gz -C /opt/etcd --strip-components=1
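A quick check confirms that the binaries are in place and report the expected version:
/opt/etcd/etcd --version
/opt/etcd/etcdctl --version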
In an insecure environment, configuring etcd typically involves creating and distributing certificates across nodes, whereas running it within a secure network makes this process a whole lot easier. There’s simply no need for additional security layers as long as the service is bound to an end-to-end secured VPN interface.
This section is not going to explain etcd configuration in depth; refer to the official documentation instead. All that needs to be done is to create a systemd unit file on each host. Assuming a three-node cluster, the configuration for kube1 would look like this:
# /etc/systemd/system/etcd.service
[Unit]
Description=etcd
After=network.target wg-quick@wg0.service
[Service]
Type=notify
ExecStart=/opt/etcd/etcd --name kube1 \
  --data-dir /var/lib/etcd \
  --listen-client-urls "http://10.0.1.1:2379,http://localhost:2379" \
  --advertise-client-urls "http://10.0.1.1:2379" \
  --listen-peer-urls "http://10.0.1.1:2380" \
  --initial-cluster "kube1=http://10.0.1.1:2380,kube2=http://10.0.1.2:2380,kube3=http://10.0.1.3:2380" \
  --initial-advertise-peer-urls "http://10.0.1.1:2380" \
  --heartbeat-interval 200 \
  --election-timeout 5000
Restart=always
RestartSec=5
TimeoutStartSec=0
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
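The unit files for kube2 and kube3 only differ in the node name and the listen/advertise addresses; the --initial-cluster value stays the same on every host. As a sketch, the ExecStart line on kube2 would become:
# kube2 variant (kube3 analogous, using 10.0.1.3)
ExecStart=/opt/etcd/etcd --name kube2 \
  --data-dir /var/lib/etcd \
  --listen-client-urls "http://10.0.1.2:2379,http://localhost:2379" \
  --advertise-client-urls "http://10.0.1.2:2379" \
  --listen-peer-urls "http://10.0.1.2:2380" \
  --initial-cluster "kube1=http://10.0.1.1:2380,kube2=http://10.0.1.2:2380,kube3=http://10.0.1.3:2380" \
  --initial-advertise-peer-urls "http://10.0.1.2:2380" \
  --heartbeat-interval 200 \
  --election-timeout 5000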
It’s important to understand that each flag starting with --initial only applies during the first launch of a cluster. This means, for example, that it’s possible to add and remove cluster members at any time without ever changing the value of --initial-cluster.
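To illustrate, adding a hypothetical fourth member (say kube4 at 10.0.1.4) would be done at runtime through etcdctl rather than by touching --initial-cluster. A minimal sketch, assuming the v2 API that this etcdctl release uses by default:
# announce the new peer to the running cluster (hypothetical node kube4)
/opt/etcd/etcdctl member add kube4 http://10.0.1.4:2380
# the new node is then started with --initial-cluster-state existing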
After the files have been placed on each host, it’s time to start the etcd cluster:
systemctl enable etcd.service # launch etcd during system boot
systemctl start etcd.service
Executing /opt/etcd/etcdctl member list should show a list of cluster members. If something went wrong, check the logs using journalctl -u etcd.service.
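Besides listing the members, the bundled etcdctl can also report whether the cluster has a healthy quorum:
# reports the health of each member and of the cluster as a whole
/opt/etcd/etcdctl cluster-health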
Kubernetes setup
Now that Docker is configured and etcd is running, it’s time to deploy Kubernetes. The first step is to install the required packages on each host:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial-unstable main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
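Optionally, the packages can be held at their installed version so that cluster components are only upgraded deliberately; this is an extra precaution, not a requirement of the setup:
# prevent unattended upgrades of the Kubernetes packages
apt-mark hold kubelet kubeadm kubectl kubernetes-cni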
Initializing the master node
Before initializing the master node, we need to create a manifest on kube1 which will then be used as configuration in the next step:
# /tmp/master-configuration.yml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.0.1.1
etcd:
  endpoints:
  - http://10.0.1.1:2379
  - http://10.0.1.2:2379
  - http://10.0.1.3:2379
apiServerCertSANs:
  - <PUBLIC_IP_KUBE1>
Then we run the following command on kube1:
kubeadm init --config /tmp/master-configuration.yml
After the setup is complete, kubeadm prints a token such as 818d5a.8b50eb5477ba4f40. It’s important to write it down; we’ll need it in a minute to join the other cluster nodes.
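If the token gets lost, it can be listed again or a fresh one created on the master at any time:
# list existing bootstrap tokens
kubeadm token list
# or generate a new one
kubeadm token create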
Kubernetes doesn’t ship with a pod network of its own, so it’s up to us to choose and install a suitable one. This is required as it enables pods running on different nodes to communicate with each other. One of the many options is Weave Net. It requires zero configuration and is considered stable and well-maintained:
# create symlink for the current user in order to gain access to the API server with kubectl
[ -d $HOME/.kube ] || mkdir -p $HOME/.kube
ln -s /etc/kubernetes/admin.conf $HOME/.kube/config
# install Weave Net
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# allow traffic on the newly created weave network interface
ufw allow in on weave
ufw reload
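Once the DaemonSet has rolled out, there should be one Weave Net pod per node in the kube-system namespace; a quick check:
# each weave-net pod should eventually reach the Running state
kubectl get pods -n kube-system -l name=weave-net -o wide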
Unfortunately, Weave Net will not readily work with our current cluster configuration because traffic will be routed via the wrong network interface. This can be fixed by running the following command on each host:
ip route add 10.96.0.0/16 dev $VPN_INTERFACE src $VPN_IP
# on kube1:
ip route add 10.96.0.0/16 dev wg0 src 10.0.1.1
# on kube2:
ip route add 10.96.0.0/16 dev wg0 src 10.0.1.2
# on kube3:
ip route add 10.96.0.0/16 dev wg0 src 10.0.1.3
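Whether the route is in effect can be verified by asking the kernel which interface it would pick for an address inside the service network:
# should report "dev wg0" together with the node's VPN address as source
ip route get 10.96.0.1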
The added route is not persistent and will not survive a reboot. To ensure that it gets added again after a reboot, we have to create a systemd service unit on each node that waits for the WireGuard interface to come up and then adds the route. For kube1 it would look like this:
# /etc/systemd/system/overlay-route.service
[Unit]
Description=Overlay network route for Wireguard
After=wg-quick@wg0.service
[Service]
Type=oneshot
User=root
ExecStart=/sbin/ip route add 10.96.0.0/16 dev wg0 src 10.0.1.1
[Install]
WantedBy=multi-user.target
After that, we have to enable it by running the following command:
systemctl enable overlay-route.service
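The route has already been added manually above, so the unit only needs to run on the next boot; to catch typos in the file ahead of time, it can be checked with systemd's verifier:
# prints warnings if the unit file is malformed; no output means it parsed fine
systemd-analyze verify /etc/systemd/system/overlay-route.service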
Joining the cluster nodes
All that’s left is to have the other nodes join the cluster. Run the following command on each of the remaining hosts:
kubeadm join --token=<TOKEN> 10.0.1.1:6443 --discovery-token-unsafe-skip-ca-verification
That’s it: a Kubernetes cluster is at our disposal.
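Back on kube1, the result can be verified; after a short while all three nodes should report a Ready status:
# lists kube1, kube2 and kube3 along with their status, roles and version
kubectl get nodes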