Cluster Load Balancer

This section describes how to install an external load balancer in front of a High Availability (HA) K3s cluster's server nodes. Two examples are provided: HAProxy and Nginx.

Tip

External load-balancers should not be confused with the embedded ServiceLB, which is an embedded controller that allows for use of Kubernetes LoadBalancer Services without deploying a third-party load-balancer controller. For more details, see Service Load Balancer.

External load-balancers can be used to provide a fixed registration address for registering nodes, or for external access to the Kubernetes API Server. For exposing LoadBalancer Services, external load-balancers can be used alongside or instead of ServiceLB, but in most cases, replacement load-balancer controllers such as MetalLB or Kube-VIP are a better choice.

Prerequisites

All nodes in this example are running Ubuntu 20.04.

For both examples, assume that an HA K3s cluster with embedded etcd has been installed on 3 server nodes.

Each k3s server is configured with:

```yaml
# /etc/rancher/k3s/config.yaml
token: lb-cluster-gd
tls-san: 10.10.10.100
```
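
`tls-san` also accepts a list. If the servers will ever be reached through a load balancer's own address rather than the VIP (as in the Nginx example below), those addresses should be covered by the serving certificate as well; a sketch:

```yaml
# /etc/rancher/k3s/config.yaml
token: lb-cluster-gd
tls-san:
  - 10.10.10.100  # VIP
  - 10.10.10.98   # lb-1
  - 10.10.10.99   # lb-2
```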

The nodes have hostnames and IPs of:

  • server-1: 10.10.10.50
  • server-2: 10.10.10.51
  • server-3: 10.10.10.52

Two additional nodes for load balancing are configured with hostnames and IPs of:

  • lb-1: 10.10.10.98
  • lb-2: 10.10.10.99

Three additional nodes exist with hostnames and IPs of:

  • agent-1: 10.10.10.101
  • agent-2: 10.10.10.102
  • agent-3: 10.10.10.103

Setup Load Balancer

HAProxy Load Balancer

HAProxy is an open source option that provides a TCP load balancer. It also supports HA for the load balancer itself, ensuring redundancy at all levels. See HAProxy Documentation for more info.

Additionally, we will use Keepalived to generate a virtual IP (VIP) that will be used to access the cluster. See the Keepalived documentation for more info.

  1. Install HAProxy and Keepalived:

     ```bash
     sudo apt-get install haproxy keepalived
     ```

  2. Add the following to /etc/haproxy/haproxy.cfg on lb-1 and lb-2:

     ```
     frontend k3s-frontend
         bind *:6443
         mode tcp
         option tcplog
         default_backend k3s-backend

     backend k3s-backend
         mode tcp
         option tcp-check
         balance roundrobin
         default-server inter 10s downinter 5s
         server server-1 10.10.10.50:6443 check
         server server-2 10.10.10.51:6443 check
         server server-3 10.10.10.52:6443 check
     ```

  3. Add the following to /etc/keepalived/keepalived.conf on lb-1 and lb-2:

     ```
     vrrp_script chk_haproxy {
         script 'killall -0 haproxy' # faster than pidof
         interval 2
     }

     vrrp_instance haproxy-vip {
         interface eth1
         state <STATE>       # MASTER on lb-1, BACKUP on lb-2
         priority <PRIORITY> # 200 on lb-1, 100 on lb-2

         virtual_router_id 51

         virtual_ipaddress {
             10.10.10.100/24
         }

         track_script {
             chk_haproxy
         }
     }
     ```

  4. Restart HAProxy and Keepalived on lb-1 and lb-2:

     ```bash
     systemctl restart haproxy
     systemctl restart keepalived
     ```

  5. On agent-1, agent-2, and agent-3, run the following command to install K3s and join the cluster:

     ```bash
     curl -sfL https://get.k3s.io | K3S_TOKEN=lb-cluster-gd sh -s - agent --server https://10.10.10.100:6443
     ```
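
The installer one-liner above is equivalent to writing the agent's configuration before installing. A sketch, using the same token and VIP as earlier:

```yaml
# /etc/rancher/k3s/config.yaml (on each agent)
server: https://10.10.10.100:6443
token: lb-cluster-gd
```

With this file in place, the install script can be run as `curl -sfL https://get.k3s.io | sh -s - agent` without environment variables, since k3s reads /etc/rancher/k3s/config.yaml at startup.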

You can now use kubectl from any server node to interact with the cluster.

```bash
root@server-1 $ k3s kubectl get nodes -A
NAME       STATUS   ROLES                       AGE     VERSION
agent-1    Ready    <none>                      32s     v1.27.3+k3s1
agent-2    Ready    <none>                      20s     v1.27.3+k3s1
agent-3    Ready    <none>                      9s      v1.27.3+k3s1
server-1   Ready    control-plane,etcd,master   4m22s   v1.27.3+k3s1
server-2   Ready    control-plane,etcd,master   3m58s   v1.27.3+k3s1
server-3   Ready    control-plane,etcd,master   3m12s   v1.27.3+k3s1
```
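
To see which load balancer currently holds the VIP, inspect the addresses on the VRRP interface. A minimal sketch, assuming the eth1 interface and 10.10.10.100 VIP from the Keepalived configuration above; the demo runs the filter against captured output rather than a live node:

```shell
# has_vip: succeeds if stdin (e.g. the output of `ip -4 addr show eth1`)
# contains the Keepalived-managed VIP.
has_vip() {
    grep -qF 'inet 10.10.10.100'
}

# On a real node you would run:
#   ip -4 addr show eth1 | has_vip && echo "VIP is here"
# Demo against captured output from the active load balancer:
sample='inet 10.10.10.98/24 brd 10.10.10.255 scope global eth1
inet 10.10.10.100/24 scope global secondary eth1'
printf '%s\n' "$sample" | has_vip && echo "VIP is here"
```

On the standby node the secondary address is absent, so the same check prints nothing until failover moves the VIP.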

Nginx Load Balancer

Danger

Nginx does not natively support a High Availability (HA) configuration. If setting up an HA cluster, having a single load balancer in front of K3s will reintroduce a single point of failure.

Nginx Open Source provides a TCP load balancer. See Using nginx as HTTP load balancer for more info.

  1. Create a nginx.conf file on lb-1 with the following contents:

     ```
     events {}

     stream {
         upstream k3s_servers {
             server 10.10.10.50:6443;
             server 10.10.10.51:6443;
             server 10.10.10.52:6443;
         }

         server {
             listen 6443;
             proxy_pass k3s_servers;
         }
     }
     ```

  2. Run the Nginx load balancer on lb-1.

     Using docker:

     ```bash
     docker run -d --restart unless-stopped \
         -v ${PWD}/nginx.conf:/etc/nginx/nginx.conf \
         -p 6443:6443 \
         nginx:stable
     ```

     Or install nginx and then run:

     ```bash
     cp nginx.conf /etc/nginx/nginx.conf
     systemctl start nginx
     ```

  3. On agent-1, agent-2, and agent-3, run the following command to install K3s and join the cluster. Because there is no VIP in this example, agents connect to lb-1's own address, which should also be listed in the servers' tls-san:

     ```bash
     curl -sfL https://get.k3s.io | K3S_TOKEN=lb-cluster-gd sh -s - agent --server https://10.10.10.98:6443
     ```
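
The open source stream module has no active health checks, but the upstream block above can be given passive checks, so that a server that repeatedly fails to accept connections is temporarily skipped. A sketch using standard server parameters (the values are illustrative):

```
upstream k3s_servers {
    server 10.10.10.50:6443 max_fails=3 fail_timeout=10s;
    server 10.10.10.51:6443 max_fails=3 fail_timeout=10s;
    server 10.10.10.52:6443 max_fails=3 fail_timeout=10s;
}
```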

You can now use kubectl from any server node to interact with the cluster.

```bash
root@server-1 $ k3s kubectl get nodes -A
NAME       STATUS   ROLES                       AGE     VERSION
agent-1    Ready    <none>                      30s     v1.27.3+k3s1
agent-2    Ready    <none>                      22s     v1.27.3+k3s1
agent-3    Ready    <none>                      13s     v1.27.3+k3s1
server-1   Ready    control-plane,etcd,master   4m49s   v1.27.3+k3s1
server-2   Ready    control-plane,etcd,master   3m58s   v1.27.3+k3s1
server-3   Ready    control-plane,etcd,master   3m16s   v1.27.3+k3s1
```