System Requirements
Before installing Cilium, please ensure that your system meets the minimum requirements below. Most modern Linux distributions already do.
Summary
When running Cilium using the container image cilium/cilium, the host system must meet these requirements:
- Linux kernel >= 4.9.17
When running Cilium as a native process on your host (i.e. not running the cilium/cilium container image), these additional requirements must be met:
- clang+LLVM >= 10.0
- iproute2 with eBPF templating patches [1]
When running Cilium without Kubernetes these additional requirements must be met:
- Key-Value store etcd >= 3.1.0 or consul >= 0.6.4
Requirement | Minimum Version | In cilium container |
---|---|---|
Linux kernel | >= 4.9.17 | no |
Key-Value store (etcd) | >= 3.1.0 | no |
Key-Value store (consul) | >= 0.6.4 | no |
clang+LLVM | >= 10.0 | yes |
iproute2 | >= 5.0.0 [1] | yes |
[1] Requires support for eBPF templating as documented below.
Linux Distribution Compatibility Matrix
The following table lists Linux distributions that are known to work well with Cilium.
Distribution | Minimum Version |
---|---|
Amazon Linux 2 | all |
Container-Optimized OS | all |
CentOS | >= 7.0 [2] |
Debian | >= 9 Stretch |
Fedora Atomic/Core | >= 25 |
Flatcar | all |
LinuxKit | all |
Red Hat Enterprise Linux | >= 8.0 |
Ubuntu | >= 16.04.1 (Azure), >= 16.04.2 (Canonical), >= 16.10 |
openSUSE | Tumbleweed, >= Leap 15.0 |
RancherOS | >= 1.5.5 |
[2] CentOS 7 requires a third-party kernel provided by ElRepo, whereas CentOS 8 ships with a supported kernel.
Note
The above list is based on feedback by users. If you find an unlisted Linux distribution that works well, please let us know by opening a GitHub issue or by creating a pull request that updates this guide.
Note
Systemd 245 and above (check with systemctl --version) overrides the rp_filter setting of Cilium network interfaces. This introduces connectivity issues (see GitHub issue 10645 for details). To avoid this, configure rp_filter in systemd using the following commands:
echo 'net.ipv4.conf.lxc*.rp_filter = 0' > /etc/sysctl.d/99-override_cilium_rp_filter.conf
systemctl restart systemd-sysctl
Linux Kernel
Cilium leverages and builds on the kernel eBPF functionality as well as various subsystems which integrate with eBPF. Therefore, host systems are required to run Linux kernel version 4.9.17 or later to run a Cilium agent. More recent kernels may provide additional eBPF functionality that Cilium will automatically detect and use on agent start.
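To confirm the running kernel meets this requirement, check the version directly (a standard check, nothing Cilium-specific):
uname -r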
In order for the eBPF feature to be enabled properly, the following kernel configuration options must be enabled. This is typically the case with distribution kernels. When an option can be built as a module or statically linked, either choice is valid.
CONFIG_BPF=y
CONFIG_BPF_SYSCALL=y
CONFIG_NET_CLS_BPF=y
CONFIG_BPF_JIT=y
CONFIG_NET_CLS_ACT=y
CONFIG_NET_SCH_INGRESS=y
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_USER_API_HASH=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_BPF=y
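One way to verify these options is to grep the running kernel's build config. A minimal sketch, assuming your distribution exposes the config at /boot/config-$(uname -r) (some ship it at /proc/config.gz instead):
grep -E 'CONFIG_(BPF|BPF_SYSCALL|NET_CLS_BPF|BPF_JIT|NET_CLS_ACT|NET_SCH_INGRESS|CRYPTO_SHA1|CRYPTO_USER_API_HASH|CGROUPS|CGROUP_BPF)=' /boot/config-$(uname -r)
Options reported as =m (built as a module) instead of =y are also valid.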
Note
Users running Linux 4.10 or earlier with Cilium CIDR policies may face Restrictions on unique prefix lengths for CIDR policy rules.
L7 proxy redirection currently uses TPROXY iptables actions as well as socket matches. For L7 redirection to work as intended, the kernel configuration must include the following modules:
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m
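To check whether these modules are available on the running kernel, you can query their metadata; the module names xt_TPROXY, xt_mark, and xt_socket correspond to the options above, and modinfo reports an error for any module the kernel does not provide:
modinfo xt_TPROXY xt_mark xt_socket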
When the xt_socket kernel module is missing, the forwarding of redirected L7 traffic does not work in non-tunneled datapath modes. Since some notable kernels (e.g., COS) ship without the xt_socket module, Cilium implements a fallback compatibility mode to allow L7 policies and visibility to be used with those kernels. Currently this fallback disables the ip_early_demux kernel feature in non-tunneled datapath modes, which may decrease system networking performance. This guarantees that HTTP and Kafka redirection works as intended. However, if HTTP or Kafka enforcement policies or visibility annotations are never used, this behavior can be turned off by adding the following to the helm configuration command line:
helm install cilium cilium/cilium --version 1.10.2 \
...
--set enableXTSocketFallback=false
Required Kernel Versions for Advanced Features
Cilium requires Linux kernel 4.9.17 or higher; however, development on additional kernel features continues to progress in the Linux community. Some of Cilium’s features are dependent on newer kernel versions and are thus enabled by upgrading to more recent kernel versions as detailed below.
Cilium Feature | Minimum Kernel Version |
---|---|
IPv4 fragment handling | >= 4.10 |
Restrictions on unique prefix lengths for CIDR policy rules | >= 4.11 |
IPsec Transparent Encryption in tunneling mode | >= 4.19 |
WireGuard Transparent Encryption | >= 5.6 |
Host-Reachable Services | >= 4.19.57, >= 5.1.16, >= 5.2 |
Kubernetes Without kube-proxy | >= 4.19.57, >= 5.1.16, >= 5.2 |
Bandwidth Manager (beta) | >= 5.1 |
Local Redirect Policy (beta) | >= 4.19.57, >= 5.1.16, >= 5.2 |
Full support for Session Affinity | >= 5.7 |
BPF-based proxy redirection | >= 5.7 |
BPF-based host routing | >= 5.10 |
Key-Value store
Cilium optionally uses a distributed Key-Value store to manage, synchronize and distribute security identities across all cluster nodes. The following Key-Value stores are currently supported:
- etcd >= 3.1.0
- consul >= 0.6.4
Cilium can be used without a Key-Value store when CRD-based state management is used with Kubernetes. This is the default for new Cilium installations. Larger clusters will perform better with Key-Value store backed identity management instead; see Quick Installation for more details.
See Key-Value Store for details on how to configure the cilium-agent to use a Key-Value store.
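As a rough sketch of what such a configuration looks like, the agent is pointed at the store via its kvstore flags; the etcd endpoint below is a placeholder, and the Key-Value Store guide remains the authoritative reference:
cilium-agent --kvstore etcd --kvstore-opt etcd.address=http://127.0.0.1:2379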
clang+LLVM
Note
This requirement is only needed if you run cilium-agent natively. If you are using the Cilium container image cilium/cilium, clang+LLVM is included in the container image.
LLVM is the compiler suite that Cilium uses to generate eBPF bytecode programs to be loaded into the Linux kernel. The minimum supported version of LLVM available to cilium-agent is >= 10.0. The version of clang installed must be compiled with the eBPF backend enabled.
See https://releases.llvm.org/ for information on how to download and install LLVM.
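When running the agent natively, you can verify both the installed version and that the eBPF backend is present; llc ships with LLVM, and a bpf entry in its registered target list indicates eBPF support:
clang --version
llc --version | grep -i bpf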
iproute2
Note
iproute2 is only needed if you run cilium-agent directly on the host machine. iproute2 is included in the cilium/cilium container image.
iproute2 is a low-level tool used to configure various networking-related subsystems of the Linux kernel. Cilium uses iproute2 to configure networking and tc, which is part of iproute2, to load eBPF programs into the kernel.
The version of iproute2 must include the eBPF templating patches. See the links in the table below for documentation on how to install the correct version of iproute2 for your distribution.
Distribution | Link |
---|---|
Binary (OpenSUSE) | Open Build Service |
Source | Cilium iproute2 source |
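As a quick sanity check, both tools report the installed iproute2 version, though a matching version number alone does not confirm that the eBPF templating patches are applied; use the sources above for that:
ip -V
tc -V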
Firewall Rules
If you are running Cilium in an environment that requires firewall rules to enable connectivity, you will have to add the following rules to ensure Cilium works properly.
It is recommended, though optional, that all nodes running Cilium in a given cluster be able to ping each other so cilium-health can report and monitor connectivity among nodes. This requires ICMP Type 0/8, Code 0 to be open among all nodes. TCP port 4240 should also be open among all nodes for cilium-health monitoring. Note that it is also an option to use only one of these two methods to enable health monitoring. If the firewall does not permit either of these methods, Cilium will still operate fine but will not be able to provide health information.
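As a quick manual check of these two paths between a pair of nodes (the <node-ip> below is a placeholder; assumes ping and a netcat variant are installed):
ping -c 1 <node-ip>
nc -z -v <node-ip> 4240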
If you are using VXLAN overlay network mode, Cilium uses Linux’s default VXLAN port 8472 over UDP, unless Linux has been configured otherwise. In this case, UDP 8472 must be open among all nodes to enable VXLAN overlay mode. The same applies to Geneve overlay network mode, except the port is UDP 6081.
If you are running in direct routing mode, your network must allow routing of pod IPs.
As an example, if you are running on AWS with VXLAN overlay networking, here is a minimum set of AWS Security Group (SG) rules. It assumes a separation between the SG on the master nodes, master-sg, and the worker nodes, worker-sg. It also assumes etcd is running on the master nodes.
Master Nodes (master-sg) Rules:
Port Range / Protocol | Ingress/Egress | Source/Destination | Description |
---|---|---|---|
2379-2380/tcp | ingress | worker-sg | etcd access |
8472/udp | ingress | master-sg (self) | VXLAN overlay |
8472/udp | ingress | worker-sg | VXLAN overlay |
4240/tcp | ingress | master-sg (self) | health checks |
4240/tcp | ingress | worker-sg | health checks |
ICMP 8/0 | ingress | master-sg (self) | health checks |
ICMP 8/0 | ingress | worker-sg | health checks |
8472/udp | egress | master-sg (self) | VXLAN overlay |
8472/udp | egress | worker-sg | VXLAN overlay |
4240/tcp | egress | master-sg (self) | health checks |
4240/tcp | egress | worker-sg | health checks |
ICMP 8/0 | egress | master-sg (self) | health checks |
ICMP 8/0 | egress | worker-sg | health checks |
Worker Nodes (worker-sg) Rules:
Port Range / Protocol | Ingress/Egress | Source/Destination | Description |
---|---|---|---|
8472/udp | ingress | master-sg | VXLAN overlay |
8472/udp | ingress | worker-sg (self) | VXLAN overlay |
4240/tcp | ingress | master-sg | health checks |
4240/tcp | ingress | worker-sg (self) | health checks |
ICMP 8/0 | ingress | master-sg | health checks |
ICMP 8/0 | ingress | worker-sg (self) | health checks |
8472/udp | egress | master-sg | VXLAN overlay |
8472/udp | egress | worker-sg (self) | VXLAN overlay |
4240/tcp | egress | master-sg | health checks |
4240/tcp | egress | worker-sg (self) | health checks |
ICMP 8/0 | egress | master-sg | health checks |
ICMP 8/0 | egress | worker-sg (self) | health checks |
2379-2380/tcp | egress | master-sg | etcd access |
Note
If you use a shared SG for the masters and workers, you can condense these rules into ingress/egress to self. If you are using Direct Routing mode, you can condense all rules into ingress/egress ANY port/protocol to/from self.
The following ports should also be available on each node:
Port Range / Protocol | Description |
---|---|
4240/tcp | cluster health checks (cilium-health) |
4244/tcp | Hubble server |
4245/tcp | Hubble Relay |
6060/tcp | cilium-agent pprof server (listening on 127.0.0.1) |
6061/tcp | cilium-operator pprof server (listening on 127.0.0.1) |
6062/tcp | Hubble Relay pprof server (listening on 127.0.0.1) |
6942/tcp | operator Prometheus metrics |
9090/tcp | cilium-agent Prometheus metrics |
9876/tcp | cilium-agent health status API |
9890/tcp | cilium-agent gops server (listening on 127.0.0.1) |
9891/tcp | operator gops server (listening on 127.0.0.1) |
9892/tcp | clustermesh-apiserver gops server (listening on 127.0.0.1) |
9893/tcp | Hubble Relay gops server (listening on 127.0.0.1) |
51871/udp | WireGuard encryption tunnel endpoint |
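To see which of these ports are actually in use on a node, listing the current listeners is a quick first step (standard ss usage; filter the output as needed):
ss -tulpn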
Mounted eBPF filesystem
Note
Some distributions mount the bpf filesystem automatically. Check whether the bpf filesystem is mounted by running the following command:
mount | grep /sys/fs/bpf
# if present, this should output something like "none on /sys/fs/bpf type bpf"...
This step is required for production environments but optional for testing and development. It allows the cilium-agent to pin eBPF resources to a persistent filesystem and make them persistent across restarts of the agent. If the eBPF filesystem is not mounted in the host filesystem, Cilium will automatically mount the filesystem but it will be unmounted and re-mounted when the Cilium pod is restarted. This in turn will cause eBPF resources to be re-created which will cause network connectivity to be disrupted while Cilium is not running. Mounting the eBPF filesystem in the host mount namespace will ensure that the agent can be restarted without affecting connectivity of any pods.
In order to mount the eBPF filesystem, the following command must be run in the host mount namespace. The command must only be run once during the boot process of the machine.
mount bpffs /sys/fs/bpf -t bpf
A portable way to achieve this with persistence is to add the following line to /etc/fstab and then run mount /sys/fs/bpf. This will cause the filesystem to be automatically mounted when the node boots.
bpffs /sys/fs/bpf bpf defaults 0 0
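If the node is managed with systemd, an equivalent mount unit can replace the fstab entry. A minimal sketch, saved as /etc/systemd/system/sys-fs-bpf.mount (systemd requires the unit name to match the mount path); see the systemd section referenced below for the recommended setup:
[Unit]
Description=Mount eBPF filesystem (bpffs)
DefaultDependencies=no
Before=local-fs.target

[Mount]
What=bpffs
Where=/sys/fs/bpf
Type=bpf

[Install]
WantedBy=multi-user.target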
If you are using systemd to manage the kubelet, see the section Mounting BPFFS with systemd.
Privileges
The following privileges are required to run Cilium. When running the standard Kubernetes DaemonSet, the privileges are automatically granted to Cilium.
- Cilium interacts with the Linux kernel to install eBPF programs which will then perform networking tasks and implement security rules. In order to install eBPF programs system-wide, CAP_SYS_ADMIN privileges are required. These privileges must be granted to cilium-agent. The quickest way to meet the requirement is to run cilium-agent as root and/or as a privileged container.
- Cilium requires access to the host networking namespace. For this purpose, the Cilium pod is scheduled to run in the host networking namespace directly.
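For reference, these are the pod-level settings that grant both requirements in Kubernetes; a minimal sketch using standard pod spec fields, not the full Cilium manifest (the official DaemonSet is the authoritative source):
spec:
  hostNetwork: true
  containers:
  - name: cilium-agent
    image: cilium/cilium
    securityContext:
      privileged: true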