Installation with external etcd
This guide walks you through the steps required to set up Cilium on Kubernetes using an external etcd. Use of an external etcd provides better performance and is suitable for larger environments. If you are looking for a simple installation method to get started, refer to the section Installation with managed etcd.
Should you encounter any issues during the installation, please refer to the Troubleshooting section and / or seek help on Slack.
When do I need to use a kvstore?
Unlike the section Quick Installation, this guide explains how to configure Cilium to use an external kvstore such as etcd. If you are unsure whether you need a kvstore at all, the following lists the cases in which one should be used:
- If you want to use the Multi-Cluster (Cluster Mesh) functionality.
- If you are running in an environment with more than 250 nodes, 5k pods, or if you observe a high overhead in state propagation caused by Kubernetes events.
- If you do not want Cilium to store state in Kubernetes custom resources (CRDs).
Requirements
Make sure your Kubernetes environment meets the requirements:
- Kubernetes >= 1.12
- Linux kernel >= 4.9
- Kubernetes in CNI mode
- eBPF filesystem mounted on all worker nodes
- Recommended: Enable PodCIDR allocation (--allocate-node-cidrs) in the kube-controller-manager
Refer to the section Requirements for detailed instructions on how to prepare your Kubernetes environment.
You will also need an external etcd version 3.1.0 or higher.
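If you are unsure whether your etcd cluster satisfies this, you can query its version and health with etcdctl (the endpoint below is a placeholder for one of your actual etcd endpoints):
# Prints the endpoint, server version, and DB size (requires the v3 API)
ETCDCTL_API=3 etcdctl --endpoints=http://etcd-endpoint1:2379 endpoint status
# Reports whether the endpoint is healthy
ETCDCTL_API=3 etcdctl --endpoints=http://etcd-endpoint1:2379 endpoint health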
Configure Cilium
When using an external kvstore, the address of the external kvstore needs to be configured in the ConfigMap. Configure the etcd endpoints via Helm values when deploying Cilium:
Note
First, make sure you have Helm 3 installed. Helm 2 is no longer supported.
Set up the Helm repository:
helm repo add cilium https://helm.cilium.io/
Deploy Cilium release via Helm:
helm install cilium cilium/cilium --version 1.9.8 \
--namespace kube-system \
--set etcd.enabled=true \
--set "etcd.endpoints[0]=http://etcd-endpoint1:2379" \
--set "etcd.endpoints[1]=http://etcd-endpoint2:2379" \
--set "etcd.endpoints[2]=http://etcd-endpoint3:2379"
If you do not want Cilium to store state in Kubernetes custom resources (CRDs), consider setting identityAllocationMode:
--set identityAllocationMode=kvstore
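To confirm the options were applied, you can inspect the generated cilium-config ConfigMap; as a rough sketch (key names may vary between chart versions):
# Expect to see kvstore: etcd, the kvstore-opt etcd configuration,
# and identity-allocation-mode: kvstore if set above
kubectl -n kube-system get configmap cilium-config -o yaml | grep -E 'kvstore|identity-allocation-mode'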
Optional: Configure the SSL certificates
Create a Kubernetes secret with the root certificate authority, and client-side key and certificate of etcd:
kubectl create secret generic -n kube-system cilium-etcd-secrets \
--from-file=etcd-client-ca.crt=ca.crt \
--from-file=etcd-client.key=client.key \
--from-file=etcd-client.crt=client.crt
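As an optional sanity check before creating the secret, you can verify that the client certificate and key form a matching pair (this check assumes an RSA key; for EC keys, compare the public keys instead):
# The two digests must be identical for a matching certificate/key pair
openssl x509 -noout -modulus -in client.crt | openssl md5
openssl rsa -noout -modulus -in client.key | openssl md5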
Adjust the Helm values to enable SSL for etcd and use https instead of http in the etcd endpoint URLs:
helm install cilium cilium/cilium --version 1.9.8 \
--namespace kube-system \
--set etcd.enabled=true \
--set etcd.ssl=true \
--set "etcd.endpoints[0]=https://etcd-endpoint1:2379" \
--set "etcd.endpoints[1]=https://etcd-endpoint2:2379" \
--set "etcd.endpoints[2]=https://etcd-endpoint3:2379"
Validate the Installation
Verify that Cilium pods were started on each of your worker nodes:
kubectl --namespace kube-system get ds cilium
NAME     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
cilium   4         4         4       4            4           <none>          3m2s
kubectl -n kube-system get deployments cilium-operator
NAME READY UP-TO-DATE AVAILABLE AGE
cilium-operator 2/2 2 2 2m6s
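You can additionally confirm from within an agent pod that Cilium is connected to the external etcd; the KVStore line of the status output should report Ok:
# Runs `cilium status` in one of the agent pods; look for a line such as
# KVStore: Ok   etcd: 1/1 connected, ...
kubectl -n kube-system exec ds/cilium -- cilium status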
Specify Environment Variables
Specify the namespace in which Cilium is installed as the CILIUM_NAMESPACE environment variable. Subsequent commands reference this environment variable.
export CILIUM_NAMESPACE=kube-system
Enable Hubble for Cluster-Wide Visibility
Hubble is the component for observability in Cilium. To obtain cluster-wide visibility into your network traffic, deploy Hubble Relay and the UI as follows on your existing installation:
Installation via Helm
If you installed Cilium via helm install, you may enable Hubble Relay and UI with the following command:
helm upgrade cilium cilium/cilium --version 1.9.8 \
--namespace $CILIUM_NAMESPACE \
--reuse-values \
--set hubble.listenAddress=":4244" \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true
On Cilium 1.9.1 and older, the Cilium agent pods will be restarted in the process.
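Once the upgrade has completed, the hubble-relay and hubble-ui deployments (as created by the Helm chart) should become available:
kubectl -n $CILIUM_NAMESPACE get deployments hubble-relay hubble-ui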
Installation via quick-hubble-install.yaml
If you installed Cilium 1.9.2 or newer via the provided quick-install.yaml, you may deploy Hubble Relay and UI on top of your existing installation with the following command:
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-hubble-install.yaml
Note
Installation via quick-hubble-install.yaml only works if the installed Cilium version is 1.9.2 or newer. Users of Cilium 1.9.0 or 1.9.1 are encouraged to upgrade to a newer version by applying the most recent Cilium quick-install.yaml first.
Alternatively, it is possible to manually generate a YAML manifest for the Cilium DaemonSet and Hubble Relay/UI as follows. The generated YAML can be applied on top of an existing installation:
# Set this to your installed Cilium version
export CILIUM_VERSION=1.9.1
# Please set any custom Helm values you may need for Cilium,
# such as for example `--set operator.replicas=1` on single-node clusters.
helm template cilium cilium/cilium --version $CILIUM_VERSION \
--namespace $CILIUM_NAMESPACE \
--set hubble.tls.auto.method="cronJob" \
--set hubble.listenAddress=":4244" \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true > cilium-with-hubble.yaml
# This will modify your existing Cilium DaemonSet and ConfigMap
kubectl apply -f cilium-with-hubble.yaml
The Cilium agent pods will be restarted in the process.
Once the Hubble UI pod is started, use port forwarding for the hubble-ui service. This allows opening the UI locally on a browser:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui --address 0.0.0.0 --address :: 12000:80
And then open http://localhost:12000/ to access the UI.
Hubble UI is not the only way to get access to Hubble data. A command line tool, the Hubble CLI, is also available. It can be installed by following the instructions below:
Linux
Download the latest hubble release:
export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz"
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz.sha256sum"
sha256sum --check hubble-linux-amd64.tar.gz.sha256sum
tar zxf hubble-linux-amd64.tar.gz
and move the hubble CLI to a directory listed in the $PATH environment variable. For example:
sudo mv hubble /usr/local/bin
MacOS
Download the latest hubble release:
export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz"
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz.sha256sum"
shasum -a 256 -c hubble-darwin-amd64.tar.gz.sha256sum
tar zxf hubble-darwin-amd64.tar.gz
and move the hubble CLI to a directory listed in the $PATH environment variable. For example:
sudo mv hubble /usr/local/bin
Windows
Download the latest hubble release:
curl -LO "https://raw.githubusercontent.com/cilium/hubble/master/stable.txt"
set /p HUBBLE_VERSION=<stable.txt
curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz"
curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz.sha256sum"
certutil -hashfile hubble-windows-amd64.tar.gz SHA256
type hubble-windows-amd64.tar.gz.sha256sum
:: verify that the checksum from the two commands above match
tar zxf hubble-windows-amd64.tar.gz
and move the hubble.exe CLI to a directory listed in the %PATH% environment variable after extracting it from the tarball.
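On any platform, you can confirm that the CLI was installed correctly and can be found in your path:
hubble version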
Similarly to the UI, use port forwarding for the hubble-relay service to make it available locally:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay --address 0.0.0.0 --address :: 4245:80
In a separate terminal window, run the hubble status command specifying the Hubble Relay address:
$ hubble --server localhost:4245 status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 5455/16384 (33.29%)
Flows/s: 11.30
Connected Nodes: 4/4
If Hubble Relay reports that all nodes are connected, as in the example output above, you can now use the CLI to observe flows of the entire cluster:
hubble --server localhost:4245 observe
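The observe command accepts filters to narrow down the output; for example, to follow flows from a particular namespace or show only dropped traffic:
# Continuously print flows involving the kube-system namespace
hubble --server localhost:4245 observe --namespace kube-system --follow
# Show only flows that were dropped
hubble --server localhost:4245 observe --verdict DROPPED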
If you encounter any problem at this point, you may seek help on Slack.
Tip
Hubble CLI configuration can be persisted using a configuration file or environment variables. This avoids having to specify options specific to a particular environment every time a command is run. Run hubble help config for more information.
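For example, the Hubble Relay address used above can be persisted in the environment so it no longer needs to be passed on every invocation (this sketch assumes the HUBBLE_-prefixed environment variables supported by the CLI):
# With HUBBLE_SERVER set, the --server flag can be omitted
export HUBBLE_SERVER=localhost:4245
hubble status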
For more information about Hubble and its components, see the Observability section.