Distributed hybrid or multicloud cluster

A K3s cluster can still be deployed on nodes that do not share a common private network and are not directly connected (e.g. nodes in different public clouds). There are two options to achieve this: the embedded K3s multicloud solution and the integration with the Tailscale VPN provider.

Warning

The latency between nodes will increase as external connectivity requires more hops. This will reduce network performance and could also impact the health of the cluster if the latency is too high.
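If you want to gauge this beforehand, a simple round-trip latency check between the prospective nodes (assuming ICMP is allowed between them) is:

  # Measure round-trip latency to another prospective node's external IP
  ping -c 5 <OTHER_NODE_EXTERNAL_IP>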

Warning

Embedded etcd is not supported in this type of deployment. If using embedded etcd, all server nodes must be able to reach each other via their private IPs. Agents may be distributed over multiple networks, but all servers should be in the same location.

Embedded K3s multicloud solution

K3s uses WireGuard to establish a VPN mesh for cluster traffic. Each node must have a unique IP through which it can be reached (usually a public IP). K3s supervisor traffic will use a WebSocket tunnel, and cluster (CNI) traffic will use a WireGuard tunnel.
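Once such a cluster is deployed (using the parameters below), you can confirm that the WireGuard mesh is actually in use; this is a generic check and assumes the wireguard tools are installed on the node:

  # List WireGuard interfaces and their peers; flannel's interface name may vary
  sudo wg show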

To enable this type of deployment, you must add the following parameters on servers:

  --node-external-ip=<SERVER_EXTERNAL_IP> --flannel-backend=wireguard-native --flannel-external-ip

and on agents:

  --node-external-ip=<AGENT_EXTERNAL_IP>

where SERVER_EXTERNAL_IP is the IP through which the server node can be reached and AGENT_EXTERNAL_IP is the IP through which the agent node can be reached. Note that the K3S_URL config parameter on the agent should use SERVER_EXTERNAL_IP so that the agent can connect to the server. Remember to check the Networking Requirements and allow access to the listed ports on both internal and external addresses.
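As an illustration only, the snippet below uses ufw to open ports commonly required for this setup; verify the exact list against the Networking Requirements page for your configuration:

  # Hedged sketch: port list assumed from the default Networking Requirements
  sudo ufw allow 6443/tcp    # K3s supervisor and Kubernetes API server
  sudo ufw allow 51820/udp   # flannel wireguard-native (IPv4)
  sudo ufw allow 10250/tcp   # kubelet metrics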

SERVER_EXTERNAL_IP and AGENT_EXTERNAL_IP must be reachable from each other and are normally public IPs.
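Putting this together, a minimal sketch with hypothetical public IPs 203.0.113.10 (server) and 198.51.100.20 (agent), and a placeholder token, could look like this:

  # On the server node (hypothetical public IP 203.0.113.10)
  k3s server --node-external-ip=203.0.113.10 --flannel-backend=wireguard-native --flannel-external-ip

  # On the agent node (hypothetical public IP 198.51.100.20); K3S_URL points at the server's external IP
  K3S_URL=https://203.0.113.10:6443 K3S_TOKEN=<token> k3s agent --node-external-ip=198.51.100.20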

Dynamic IPs

If nodes are assigned dynamic IPs and the IP changes (e.g. in AWS), you must modify the --node-external-ip parameter to reflect the new IP. If running K3s as a service, you must modify /etc/systemd/system/k3s.service and then run:

  systemctl daemon-reload
  systemctl restart k3s
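For example, a hedged sketch that rewrites the flag in the unit file before reloading; it assumes the flag appears literally in the ExecStart line and that the new IP is already known:

  # Hypothetical example: substitute the new external IP into the K3s unit file
  NEW_IP=198.51.100.30   # replace with the node's new external IP
  sudo sed -i "s/--node-external-ip=[^ ]*/--node-external-ip=${NEW_IP}/" /etc/systemd/system/k3s.service
  sudo systemctl daemon-reload
  sudo systemctl restart k3s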

Integration with the Tailscale VPN provider (experimental)

Available in v1.27.3, v1.26.6, v1.25.11 and newer.

K3s can integrate with Tailscale so that nodes use the Tailscale VPN service to build a mesh between nodes.

There are four steps to be done with Tailscale before deploying K3s:

  1. Log in to your Tailscale account

  2. In Settings > Keys, generate an auth key ($AUTH-KEY), which may be reusable for all nodes in your cluster

  3. Decide on the podCIDR the cluster will use (by default 10.42.0.0/16). Append the CIDR (or CIDRs for dual-stack) in Access controls with the stanza:

  1. "autoApprovers": {
  2. "routes": {
  3. "10.42.0.0/16": ["your_account@xyz.com"],
  4. "2001:cafe:42::/56": ["your_account@xyz.com"],
  5. },
  6. },
  1. Install Tailscale in your nodes:
  1. curl -fsSL https://tailscale.com/install.sh | sh

To deploy K3s with Tailscale integration enabled, you must add the following parameter on each of your nodes:

  1. --vpn-auth="name=tailscale,joinKey=$AUTH-KEY

or provide that information in a file and use the parameter:

  --vpn-auth-file=$PATH_TO_FILE

Optionally, if you have your own Tailscale server (e.g. headscale), you can connect to it by appending ,controlServerURL=$URL to the vpn-auth parameters.
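As an illustrative sketch (the file path and key below are placeholders), the file simply contains the same comma-separated string that would otherwise be passed to --vpn-auth:

  # Write the credentials (one line) to a file; the path is arbitrary
  echo 'name=tailscale,joinKey=tskey-auth-xxxxxxxxxxxx' | sudo tee /etc/rancher/k3s/vpn-auth
  # Start K3s pointing at that file
  k3s server --vpn-auth-file=/etc/rancher/k3s/vpn-auth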

Next, you can proceed to create the server using the following command:

  k3s server --token <token> --vpn-auth="name=tailscale,joinKey=<joinKey>" --node-external-ip=<TailscaleIPOfServerNode>

After executing this command, access the Tailscale admin console to approve the Tailscale node and subnet (if not already approved through autoApprovers).

Once the server is set up, connect the agents using:

  k3s agent --token <token> --vpn-auth="name=tailscale,joinKey=<joinKey>" --server https://<TailscaleIPOfServerNode>:6443 --node-external-ip=<TailscaleIPOfAgentNode>

Again, approve the Tailscale node and subnet as you did for the server.
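At this point you can optionally verify that the nodes registered and are reachable over the tailnet; this is a generic check rather than part of the official procedure:

  # Confirm that every node joined and lists its Tailscale address as the external IP
  kubectl get nodes -o wide
  # Confirm the local node is connected to the tailnet
  tailscale status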

If you have ACLs activated in Tailscale, you need to add an “accept” rule to allow pods to communicate with each other. Assuming the auth key you created automatically tags the Tailscale nodes with the tag testing-k3s, the rule should look like this:

  1. "acls": [
  2. {
  3. "action": "accept",
  4. "src": ["tag:testing-k3s", "10.42.0.0/16"],
  5. "dst": ["tag:testing-k3s:*", "10.42.0.0/16:*"],
  6. },
  7. ],

Warning

If you plan on running several K3s clusters using the same Tailscale network, please create appropriate ACLs to avoid IP conflicts, or use different podCIDR subnets for each cluster.
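For illustration, a hedged sketch that gives each cluster its own pod CIDR (with a matching autoApprovers route) so the subnets never overlap on the shared tailnet; the CIDRs and join keys are placeholders:

  # Cluster A server: default pod CIDR
  k3s server --vpn-auth="name=tailscale,joinKey=<joinKeyA>" --cluster-cidr=10.42.0.0/16

  # Cluster B server: non-overlapping pod CIDR
  k3s server --vpn-auth="name=tailscale,joinKey=<joinKeyB>" --cluster-cidr=10.52.0.0/16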