K3s multi-node install

Big picture

This tutorial gets you a multi-node K3s cluster with Calico in approximately 10 minutes.

Value

K3s is a lightweight implementation of Kubernetes packaged as a single binary.

The geeky details of what you get:

Policy, IPAM, CNI, Overlay, Routing, Datastore

Before you begin

  • Make sure you have a Linux host that meets the following requirements:
    • x86-64 processor
    • 1 CPU
    • 1 GB RAM
    • 10 GB free disk space
    • Ubuntu 18.04 (amd64) or Ubuntu 20.04 (amd64)
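
To verify these quickly on an Ubuntu host, you can use the following commands (a hedged sketch; lsb_release assumes the lsb-release package is installed, which it is on stock Ubuntu):

  nproc             # number of CPUs
  free -h           # total RAM
  df -h /           # free disk space on the root filesystem
  lsb_release -ds   # Ubuntu release
  uname -m          # should print x86_64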

Note: K3s supports ARM processors too, but this tutorial was tested in an x86-64 environment. For more detail, see the K3s documentation.

How to

Initialize the control plane instance

The K3s installation script can be customized with environment variables. Here you pass extra arguments to disable Flannel, disable the K3s default network policy, and set the pod IP CIDR.

Note: The full list of installation arguments is available in the K3s documentation.

  curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-backend=none --disable-network-policy --cluster-cidr=192.168.0.0/16" sh -

Caution: If 192.168.0.0/16 is already in use within your network, you must select a different pod network CIDR by replacing 192.168.0.0/16 in the above command.
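
For example, here is the same install with a hypothetical replacement CIDR of 10.244.0.0/16 (substitute any range that is unused in your network). If you change the CIDR here, remember to make the Calico IP pool match it later, as noted in the Install Calico section.

  curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-backend=none --disable-network-policy --cluster-cidr=10.244.0.0/16" sh -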

Enable remote access to your K3s instance

To set up remote access to your cluster, first ensure you have kubectl installed on your local system.

Note: If you are not sure how to install kubectl on your operating system, see the Kubernetes documentation.
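
As one option, on a Linux amd64 machine you can install the latest stable kubectl the way the upstream Kubernetes docs describe:

  # Download the latest stable kubectl binary and install it system-wide.
  curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
  sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
  kubectl version --client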

K3s stores a kubeconfig file on the server at /etc/rancher/k3s/k3s.yaml. Copy the contents of k3s.yaml from the server into ~/.kube/config on the system from which you want to access the cluster, and update the server address in the copied file so it points at your K3s server instead of 127.0.0.1.
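
A minimal sketch of that copy, assuming a hypothetical server address serverip and an SSH user that can read the file (k3s.yaml is owned by root, so you may need to copy it to a readable location with sudo first):

  # Copy the kubeconfig from the K3s server to the local machine.
  mkdir -p ~/.kube
  scp user@serverip:/etc/rancher/k3s/k3s.yaml ~/.kube/config

  # Point the kubeconfig at the server instead of the loopback address.
  sed -i 's/127.0.0.1/serverip/' ~/.kube/config

  # Verify remote access.
  kubectl get nodes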

Add extra nodes to K3s cluster

To add additional nodes to your cluster, you need two pieces of information:

  • K3S_URL, which is your main node's IP address.
  • K3S_TOKEN, which is stored in the /var/lib/rancher/k3s/server/node-token file on the main node (from step 1).

Execute the following command on the new node instance to join it to the cluster.

Note: Remember to change serverip and mytoken.

  curl -sfL https://get.k3s.io | K3S_URL=https://serverip:6443 K3S_TOKEN=mytoken sh -
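
To fetch the token, a minimal sketch (run on the main node; the file is readable only by root):

  sudo cat /var/lib/rancher/k3s/server/node-token

After the join command finishes on the new node, the node should appear in kubectl get nodes within a minute or so.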

Install Calico

You can install Calico either with the Tigera operator or with a plain manifest. Follow one of the two paths below.

Operator

Install the Calico operator and custom resource definitions.

  kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml

Note: Due to the large size of the CRD bundle, kubectl apply might exceed request limits. Instead, use kubectl create or kubectl replace.

Install Calico by creating the necessary custom resource. For more information on configuration options available in this manifest, see the installation reference.

  kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/custom-resources.yaml

Note: Before creating this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to change the default IP pool CIDR to match your pod network CIDR.
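
A minimal sketch of that adjustment, reusing the hypothetical 10.244.0.0/16 pod CIDR from earlier:

  # Download the manifest so it can be edited before creating it.
  curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/custom-resources.yaml

  # Change the default IP pool CIDR to match the --cluster-cidr passed to K3s.
  sed -i 's|192.168.0.0/16|10.244.0.0/16|' custom-resources.yaml

  kubectl create -f custom-resources.yaml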

Manifest

Install Calico by using the following command.

  kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/calico.yaml

You should see the following output.

  configmap/calico-config created
  customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
  clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
  clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
  clusterrole.rbac.authorization.k8s.io/calico-node created
  clusterrolebinding.rbac.authorization.k8s.io/calico-node created
  daemonset.apps/calico-node created
  serviceaccount/calico-node created
  deployment.apps/calico-kube-controllers created
  serviceaccount/calico-kube-controllers created

Check the installation

  1. Confirm that all of the pods are running using the following command.

     kubectl get pods --all-namespaces

     For an operator install, you should see something like the following.

     NAMESPACE         NAME                                      READY   STATUS    RESTARTS   AGE
     tigera-operator   tigera-operator-c9cf5b94d-gj9qp           1/1     Running   0          107s
     calico-system     calico-typha-7dcd87597-npqsf              1/1     Running   0          88s
     calico-system     calico-node-rdwwz                         1/1     Running   0          88s
     kube-system       local-path-provisioner-6d59f47c7-4q8l2    1/1     Running   0          2m14s
     kube-system       metrics-server-7566d596c8-xf66d           1/1     Running   0          2m14s
     kube-system       coredns-8655855d6-wfdbm                   1/1     Running   0          2m14s
     calico-system     calico-kube-controllers-89df8c6f8-7hxc5   1/1     Running   0          87s

     For a manifest install, you should see something like the following.

     NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
     kube-system   calico-node-9hn9z                          1/1     Running   0          23m
     kube-system   local-path-provisioner-6d59f47c7-drznc     1/1     Running   0          38m
     kube-system   calico-kube-controllers-789f6df884-928lt   1/1     Running   0          23m
     kube-system   metrics-server-7566d596c8-qxlfz            1/1     Running   0          38m
     kube-system   coredns-8655855d6-blzl5                    1/1     Running   0          38m

  2. Confirm that you now have two nodes in your cluster with the following command.

     kubectl get nodes -o wide

     It should return something like the following.

     NAME         STATUS   ROLES    AGE   VERSION        INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
     k3s-master   Ready    master   40m   v1.18.2+k3s1   172.16.2.128   <none>        Ubuntu 18.04.3 LTS   4.15.0-101-generic   containerd://1.3.3-k3s2
     k3s-node1    Ready    <none>   30m   v1.18.2+k3s1   172.16.2.129   <none>        Ubuntu 18.04.3 LTS   4.15.0-101-generic   containerd://1.3.3-k3s2
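
As an optional sanity check (a hypothetical pingtest pod, not part of the original tutorial steps), you can confirm that new pods receive addresses from the Calico IP pool:

  # Launch a throwaway pod and inspect the IP it receives.
  kubectl run pingtest --image=busybox --restart=Never -- sleep 3600
  kubectl get pod pingtest -o wide   # the IP should fall inside your pod CIDR
  kubectl delete pod pingtest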

Congratulations! You now have a multi-node K3s cluster equipped with Calico and Traefik.

Next steps