Bandwidth Manager (beta)

This guide explains how to configure Cilium’s bandwidth manager to optimize TCP and UDP workloads and, if needed, efficiently rate limit individual Pods, with the help of EDT (Earliest Departure Time) and eBPF.

The bandwidth manager does not rely on CNI chaining and is instead natively integrated into Cilium. Hence, it does not make use of the bandwidth CNI plugin. Due to scalability concerns, in particular for multi-queue network interfaces, using the bandwidth CNI plugin, which is based on TBF (Token Bucket Filter) rather than EDT, is not recommended.

Cilium’s bandwidth manager supports the kubernetes.io/egress-bandwidth Pod annotation, which is enforced on egress at the native host networking devices. Bandwidth enforcement is supported for both direct routing and tunneling mode in Cilium.

The kubernetes.io/ingress-bandwidth annotation is not supported, and its use is not recommended. Limiting bandwidth happens natively at the egress point of networking devices in order to reduce or pace bandwidth usage on the wire. Enforcing limits at ingress would add yet another layer of buffer queueing right in the critical fast-path of a node via an ifb device: ingress traffic would first need to be redirected to the ifb’s egress point in order to perform shaping before the traffic can go up the stack. At that point the traffic has already consumed bandwidth on the wire, and the node has already spent resources on processing the packet. The kubernetes.io/ingress-bandwidth annotation is therefore ignored by Cilium’s bandwidth manager.

Note

Bandwidth Manager requires a v5.1.x or more recent Linux kernel.
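
For example, the kernel version running on each node can be checked via the KERNEL-VERSION column shown by kubectl:

  $ kubectl get nodes -o wide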

Note

Make sure you have Helm 3 installed. Helm 2 is no longer supported.
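
For example, the installed Helm version can be checked with:

  $ helm version --short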

Set up the Helm repository:

  helm repo add cilium https://helm.cilium.io/

Cilium’s bandwidth manager is disabled by default on new installations. To install Cilium with the bandwidth manager enabled, run:

  helm install cilium cilium/cilium --version 1.11.7 \
    --namespace kube-system \
    --set bandwidthManager=true

To enable the bandwidth manager on an existing installation, run:

  helm upgrade cilium cilium/cilium --version 1.11.7 \
    --namespace kube-system \
    --reuse-values \
    --set bandwidthManager=true
  kubectl -n kube-system rollout restart ds/cilium

The native host networking devices are automatically detected as the devices which have the default route on the host or which have a Kubernetes InternalIP or ExternalIP assigned. InternalIP is preferred over ExternalIP if both exist. To specify the devices manually, set their names in the devices Helm option (e.g. devices='{eth0,eth1,eth2}'). Each listed device must have the same name on all Cilium-managed nodes.
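
As an illustrative sketch, assuming the interfaces are named eth0 and eth1 (placeholder names), the devices option could be set on an existing installation analogously to the upgrade command above:

  helm upgrade cilium cilium/cilium --version 1.11.7 \
    --namespace kube-system \
    --reuse-values \
    --set devices='{eth0,eth1}'
  kubectl -n kube-system rollout restart ds/cilium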

Verify that the Cilium Pods have come up correctly:

  $ kubectl -n kube-system get pods -l k8s-app=cilium
  NAME           READY   STATUS    RESTARTS   AGE
  cilium-crf7f   1/1     Running   0          10m
  cilium-db21a   1/1     Running   0          10m

To verify whether the bandwidth manager feature has been enabled in Cilium, check the BandwidthManager info line in the output of the cilium status CLI command. It also lists the devices on which the egress bandwidth limitation is enforced:

  $ kubectl -n kube-system exec ds/cilium -- cilium status | grep BandwidthManager
  BandwidthManager:       EDT with BPF   [eth0]

To verify that egress bandwidth limits are indeed being enforced, deploy two netperf Pods on different nodes, one acting as the server and one acting as the client:

  ---
  apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      # Limits egress bandwidth to 10Mbit/s.
      kubernetes.io/egress-bandwidth: "10M"
    labels:
      # This Pod will act as the server.
      app.kubernetes.io/name: netperf-server
    name: netperf-server
  spec:
    containers:
    - name: netperf
      image: cilium/netperf
      ports:
      - containerPort: 12865
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    # This Pod will act as the client.
    name: netperf-client
  spec:
    affinity:
      # Prevents the client from being scheduled to the
      # same node as the server.
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app.kubernetes.io/name
              operator: In
              values:
              - netperf-server
          topologyKey: kubernetes.io/hostname
    containers:
    - name: netperf
      args:
      - sleep
      - infinity
      image: cilium/netperf

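Assuming the manifest above has been saved to a file such as netperf.yaml (the filename is arbitrary), the two Pods can be created with:

  $ kubectl apply -f netperf.yaml
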
Once up and running, the netperf-client Pod can be used to test egress bandwidth enforcement on the netperf-server Pod. Since the test streams data from the netperf-server Pod towards the client, the TCP_MAERTS test (STREAM spelled backwards) is used:

  $ NETPERF_SERVER_IP=$(kubectl get pod netperf-server -o jsonpath='{.status.podIP}')
  $ kubectl exec netperf-client -- \
      netperf -t TCP_MAERTS -H "${NETPERF_SERVER_IP}"
  MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.217.0.254 () port 0 AF_INET
  Recv   Send    Send
  Socket Socket  Message  Elapsed
  Size   Size    Size     Time     Throughput
  bytes  bytes   bytes    secs.    10^6bits/sec

   87380  16384  16384    10.00       9.56

As can be seen, egress traffic of the netperf-server Pod has been limited to 10Mbit per second.
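
For comparison, the opposite direction can be tested with TCP_STREAM, which streams data from the client towards the server; since the netperf-client Pod carries no kubernetes.io/egress-bandwidth annotation, its egress towards the server is expected to remain unthrottled:

  $ kubectl exec netperf-client -- \
      netperf -t TCP_STREAM -H "${NETPERF_SERVER_IP}"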

To introspect the current endpoint bandwidth settings from the BPF side, run the following command (replace cilium-xxxxx with the name of the Cilium Pod that is co-located with the netperf-server Pod):

  $ kubectl exec -it -n kube-system cilium-xxxxx -- cilium bpf bandwidth list
  IDENTITY   EGRESS BANDWIDTH (BitsPerSec)
  491        10M

Each Pod is represented in Cilium as an Endpoint, which has an identity. The identity shown above can be correlated with the output of the cilium endpoint list command.
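
For instance, a rough way to correlate the identity from the previous output (491 in this example) is to filter the cilium endpoint list output in the same Cilium Pod:

  $ kubectl exec -it -n kube-system cilium-xxxxx -- cilium endpoint list | grep 491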

Limitations

  • Bandwidth enforcement currently does not work in combination with L7 Cilium Network Policies. If an L7 policy selects a Pod at egress, bandwidth enforcement is disabled for that Pod.