Changing the pod CIDR in a MicroK8s cluster
By default, MicroK8s v1.19+ uses the 10.1.0.0/16 network for its pods.
To change the pod CIDR you need to reconfigure kube-proxy (edit /var/snap/microk8s/current/args/kube-proxy) and tell the Calico CNI what the new CIDR is (edit and apply /var/snap/microk8s/current/args/cni-network/cni.yaml).
Configuration steps
- Remove the current CNI configuration with:
microk8s kubectl delete -f /var/snap/microk8s/current/args/cni-network/cni.yaml
- Edit /var/snap/microk8s/current/args/kube-proxy and update the --cluster-cidr=10.1.0.0/16 argument with the new CIDR (a sed alternative for this edit is sketched after these steps).
- Restart MicroK8s with:
microk8s stop
microk8s start
- Edit /var/snap/microk8s/current/args/cni-network/cni.yaml and replace the old IP range with the new one. For example, to switch to 10.2.0.0/16, update the CALICO_IPV4POOL_CIDR entry with:
- name: CALICO_IPV4POOL_CIDR
  value: "10.2.0.0/16"
- Apply the new CNI manifest:
microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml
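If you prefer to make the changes non-interactively, the two file edits above can also be applied with sed. This is only a sketch: it assumes both files still carry the default 10.1.0.0/16 value and that you are switching to 10.2.0.0/16, so adjust the ranges to match your setup.
# Sketch: update the kube-proxy --cluster-cidr argument (assumes the default value is still present)
sudo sed -i 's|--cluster-cidr=10.1.0.0/16|--cluster-cidr=10.2.0.0/16|' /var/snap/microk8s/current/args/kube-proxy
# Sketch: replace the old range in the Calico manifest (rewrites every occurrence, including CALICO_IPV4POOL_CIDR)
sudo sed -i 's|10.1.0.0/16|10.2.0.0/16|g' /var/snap/microk8s/current/args/cni-network/cni.yaml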
Verify the new configuration
At this point, new pods are placed on the updated CIDR. To check that the update worked, try deploying some pods:
microk8s enable dns dashboard
…then check the allocated IP addresses:
microk8s kubectl get po -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system pod/calico-node-rdkz6 1/1 Running 0 4m34s 192.168.1.23 aurora <none> <none>
kube-system pod/calico-kube-controllers-847c8c99d-rjfd4 1/1 Running 0 4m34s 10.2.180.193 aurora <none> <none>
kube-system pod/metrics-server-8bbfb4bdb-wqjxs 1/1 Running 0 3m2s 10.2.180.195 aurora <none> <none>
kube-system pod/coredns-86f78bb79c-cppgt 1/1 Running 0 3m12s 10.2.180.194 aurora <none> <none>
kube-system pod/kubernetes-dashboard-7ffd448895-2l7xn 1/1 Running 0 2m52s 10.2.180.196 aurora <none> <none>
kube-system pod/dashboard-metrics-scraper-6c4568dc68-5nn7p 1/1 Running 0 2m52s 10.2.180.197 aurora <none> <none>
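If the Calico custom resources are installed (the bundled manifest installs them), you can also inspect the IP pool itself. The pool name default-ipv4-ippool below is an assumption based on the name Calico gives its default pool; adjust it if your deployment uses a different one:
microk8s kubectl get ippools.crd.projectcalico.org default-ipv4-ippool -o jsonpath='{.spec.cidr}'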
You can also check the iptables rules:
sudo iptables -t nat -nL | grep "10\.2\."
KUBE-MARK-MASQ all -- 10.2.180.194 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:10.2.180.194:53
KUBE-MARK-MASQ all -- 10.2.180.194 0.0.0.0/0 /* kube-system/kube-dns:metrics */
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics */ tcp to:10.2.180.194:9153
KUBE-MARK-MASQ all -- 10.2.180.194 0.0.0.0/0 /* kube-system/kube-dns:dns */
DNAT udp -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:10.2.180.194:53
KUBE-MARK-MASQ all -- 10.2.180.195 0.0.0.0/0 /* kube-system/metrics-server */
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/metrics-server */ tcp to:10.2.180.195:4443
KUBE-MARK-MASQ tcp -- !10.2.0.0/16 10.152.183.178 /* kube-system/metrics-server cluster IP */ tcp dpt:443
KUBE-MARK-MASQ tcp -- !10.2.0.0/16 10.152.183.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
KUBE-MARK-MASQ udp -- !10.2.0.0/16 10.152.183.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
KUBE-MARK-MASQ tcp -- !10.2.0.0/16 10.152.183.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
KUBE-MARK-MASQ tcp -- !10.2.0.0/16 10.152.183.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
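You can also double-check that the kube-proxy args file carries the range you configured in the steps above, which should print the new CIDR:
grep cluster-cidr /var/snap/microk8s/current/args/kube-proxy
--cluster-cidr=10.2.0.0/16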
Behind a proxy
Remember: if you are also setting up a proxy, you will need to update /var/snap/microk8s/current/args/containerd-env with the new IP ranges as well.
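For example, a containerd-env that routes traffic through a proxy should list the new pod CIDR (and the 10.152.183.0/24 services range) in NO_PROXY so in-cluster traffic bypasses the proxy. The proxy address below is a placeholder for illustration:
HTTPS_PROXY=http://proxy.example.com:3128
NO_PROXY=10.2.0.0/16,10.152.183.0/24,127.0.0.1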