Networking

The commands and steps listed on this page can be used to troubleshoot networking-related issues in your cluster.

Make sure you have configured the correct kubeconfig (for example, export KUBECONFIG=$PWD/kube_config_cluster.yml for Rancher HA) or that you are using the embedded kubectl via the UI.
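
A quick way to confirm that kubectl points at the intended cluster is to list its nodes; the kubeconfig path below is the Rancher HA example from above, so adjust it to your setup:

    # Point kubectl at the cluster and confirm that all expected nodes show up
    export KUBECONFIG=$PWD/kube_config_cluster.yml
    kubectl get nodes -o wide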

Double-check that all the required ports are open in your (host) firewall

Verify on every node that all required ports are open in the host firewall. Note that the overlay network uses UDP, in contrast to all the other required ports, which use TCP.
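
As a rough sketch (assuming netcat is installed on both nodes and that your cluster uses the default Canal/Flannel VXLAN port 8472/UDP; the node IP is a placeholder), you can verify UDP reachability between two nodes like this:

    # On NODE_2: start a temporary UDP listener on the overlay port.
    # If flannel already has 8472 bound on this node, test another UDP port covered by the same firewall rule.
    nc -u -l 8472          # traditional netcat variants need: nc -u -l -p 8472

    # On NODE_1: send a test message to NODE_2's internal IP.
    echo "overlay-port-test" | nc -u -w1 <NODE_2_IP> 8472
    # The message should appear in the listener on NODE_2; stop both commands with Ctrl+C.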

Check if the overlay network is functioning correctly

A pod can be scheduled to any of the hosts in your cluster. That means the NGINX ingress controller needs to be able to route a request that arrives on NODE_1 to a pod running on NODE_2, and this routing happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures because the NGINX ingress controller cannot route to the pod.
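
One way to observe such intermittent failures is to send repeated requests to the ingress controller on a single node. This is a minimal sketch; the hostname myapp.example.com and the node IP 203.0.113.10 are placeholders for your own ingress host and node:

    # Send 20 requests to the NGINX ingress controller on one node and print the HTTP status codes.
    # Intermittent timeouts or 5xx responses while the backend pods are healthy point at the overlay network.
    for i in $(seq 1 20); do
      curl -s -o /dev/null -m 5 -w "%{http_code}\n" -H "Host: myapp.example.com" http://203.0.113.10/
    done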

To test the overlay network, you can launch the following DaemonSet definition. It runs a swiss-army-knife container on every host (the image was developed by Rancher engineers and can be found at https://github.com/rancherlabs/swiss-army-knife), which we will use to run a ping test between the containers on all hosts.

Caution:

The swiss-army-knife container does not support Windows nodes. It also does not support ARM nodes, such as a Raspberry Pi. When the test encounters incompatible nodes, this is recorded in the pod logs as an error message, such as exec user process caused: exec format error for ARM nodes, or ImagePullBackOff (Back-off pulling image "rancherlabs/swiss-army-knife") for Windows nodes.
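
Once the DaemonSet from the steps below is running, pods that hit these errors can be spotted by their status, events, and logs (the pod name is a placeholder):

    # Show all overlaytest pods, their status and the node they run on
    kubectl get pods -l name=overlaytest -o wide

    # Inspect a pod that is not Running: the ARM error shows up in the logs,
    # the Windows image pull failure in the events shown by describe
    kubectl describe pod overlaytest-xxxxx
    kubectl logs overlaytest-xxxxx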

  1. Save the following file as overlaytest.yml

     apiVersion: apps/v1
     kind: DaemonSet
     metadata:
       name: overlaytest
     spec:
       selector:
         matchLabels:
           name: overlaytest
       template:
         metadata:
           labels:
             name: overlaytest
         spec:
           tolerations:
           - operator: Exists
           containers:
           - image: rancherlabs/swiss-army-knife
             imagePullPolicy: Always
             name: overlaytest
             command: ["sh", "-c", "tail -f /dev/null"]
             terminationMessagePath: /dev/termination-log
  2. Launch it using kubectl create -f overlaytest.yml

  3. Wait until kubectl rollout status ds/overlaytest -w returns: daemon set "overlaytest" successfully rolled out.

  4. Run the following script from the same location. It makes each overlaytest container ping every other overlaytest container on all hosts:

     #!/bin/bash
     echo "=> Start network overlay test"
     # For every overlaytest pod (source pod and the node it runs on), ping the
     # pod IP of every overlaytest pod (target pod IP and its node) and report the result.
     kubectl get pods -l name=overlaytest -o jsonpath='{range .items[*]}{@.metadata.name}{" "}{@.spec.nodeName}{"\n"}{end}' |
     while read spod shost
       do kubectl get pods -l name=overlaytest -o jsonpath='{range .items[*]}{@.status.podIP}{" "}{@.spec.nodeName}{"\n"}{end}' |
       while read tip thost
         do kubectl --request-timeout='10s' exec $spod -c overlaytest -- /bin/sh -c "ping -c2 $tip > /dev/null 2>&1"
           RC=$?
           if [ $RC -ne 0 ]
             then echo FAIL: $spod on $shost cannot reach pod IP $tip on $thost
             else echo $shost can reach $thost
           fi
       done
     done
     echo "=> End network overlay test"
  5. When the script has finished running, it will output the state of each route:

     => Start network overlay test
     Error from server (NotFound): pods "wk2" not found
     FAIL: overlaytest-5bglp on wk2 cannot reach pod IP 10.42.7.3 on wk2
     Error from server (NotFound): pods "wk2" not found
     FAIL: overlaytest-5bglp on wk2 cannot reach pod IP 10.42.0.5 on cp1
     Error from server (NotFound): pods "wk2" not found
     FAIL: overlaytest-5bglp on wk2 cannot reach pod IP 10.42.2.12 on wk1
     command terminated with exit code 1
     FAIL: overlaytest-v4qkl on cp1 cannot reach pod IP 10.42.7.3 on wk2
     cp1 can reach cp1
     cp1 can reach wk1
     command terminated with exit code 1
     FAIL: overlaytest-xpxwp on wk1 cannot reach pod IP 10.42.7.3 on wk2
     wk1 can reach cp1
     wk1 can reach wk1
     => End network overlay test

    If you see errors in the output, there is an issue with the route between the pods on those two hosts. In the output above, the node wk2 has no connectivity over the overlay network. This could be because the required ports for overlay networking are not open for wk2. See the manual follow-up sketch after this list for one way to investigate further.

  6. You can now clean up the DaemonSet by running kubectl delete ds/overlaytest.
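
If the script reports failures, a manual follow-up (run before the cleanup in the last step) can help narrow the problem down. This is a minimal sketch; the pod name overlaytest-v4qkl and the pod IP 10.42.7.3 are taken from the hypothetical output above, so substitute the values from your own output:

    # Exec into the source pod and ping the unreachable pod IP directly
    kubectl exec -it overlaytest-v4qkl -c overlaytest -- ping -c 4 10.42.7.3

    # Repeat with a larger payload; if large pings fail while small ones succeed,
    # that often points at an MTU problem (see the next section)
    kubectl exec -it overlaytest-v4qkl -c overlaytest -- ping -c 4 -s 1400 10.42.7.3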

Check if MTU is correctly configured on hosts and on peering/tunnel appliances/devices

When the MTU is incorrectly configured (either on the hosts running Rancher, on the nodes in created/imported clusters, or on appliances/devices in between), error messages will be logged in Rancher and in the agents, similar to:

  • websocket: bad handshake
  • Failed to connect to proxy
  • read tcp: i/o timeout

See Google Cloud VPN: MTU Considerations for an example of how to configure the MTU correctly when using Google Cloud VPN between Rancher and cluster nodes.
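
As a quick local check (a minimal sketch that assumes a Linux host with iputils ping; the node IP is a placeholder), you can inspect the interface MTU and probe the path MTU towards another node:

    # Show the MTU of the host's network interfaces
    ip link show

    # Send a non-fragmentable ICMP packet sized for a 1500-byte MTU (1500 - 28 bytes of IP/ICMP headers = 1472).
    # If this fails with "Message too long" while smaller sizes succeed, a device on the path has a lower MTU.
    ping -c 2 -M do -s 1472 <OTHER_NODE_IP>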

Resolved issues

Overlay network broken when using Canal/Flannel due to missing node annotations

GitHub issue #13644
Resolved in v2.1.2

To check if your cluster is affected, the following command will list nodes that are broken (this command requires jq to be installed):

    kubectl get nodes -o json | jq '.items[].metadata | select(.annotations["flannel.alpha.coreos.com/public-ip"] == null or .annotations["flannel.alpha.coreos.com/kube-subnet-manager"] == null or .annotations["flannel.alpha.coreos.com/backend-type"] == null or .annotations["flannel.alpha.coreos.com/backend-data"] == null) | .name'

If there is no output, the cluster is not affected.
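
To inspect the flannel annotations of a specific node, something like the following works with the same jq dependency (a minimal sketch that reuses the hypothetical node name wk2 from the example output above):

    # Print only the flannel.alpha.coreos.com/* annotations of a single node
    kubectl get node wk2 -o json | jq '.metadata.annotations | with_entries(select(.key | startswith("flannel.alpha.coreos.com/")))'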