Installation with external etcd
This guide walks you through the steps required to set up Cilium on Kubernetes using an external etcd. Use of an external etcd provides better performance and is suitable for larger environments.
Should you encounter any issues during the installation, please refer to the Troubleshooting section and/or seek help on Slack.
When do I need to use a kvstore?
Unlike the section Quick Installation, this guide explains how to configure Cilium to use an external kvstore such as etcd. If you are unsure whether you need a kvstore at all, the following list describes when to use one:
- If you are running in an environment with more than 250 nodes, 5k pods, or if you observe a high overhead in state propagation caused by Kubernetes events.
- If you do not want Cilium to store state in Kubernetes custom resources (CRDs).
Requirements
Make sure your Kubernetes environment meets the following requirements:
- Kubernetes >= 1.16
- Linux kernel >= 4.9
- Kubernetes in CNI mode
- eBPF filesystem mounted on all worker nodes (a quick check is shown after this list)
- Recommended: Enable PodCIDR allocation (--allocate-node-cidrs) in the kube-controller-manager
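As a quick check of the last two requirements, you can verify the kernel version and the eBPF filesystem mount on each worker node. This is only one way to check; the mount command below assumes the standard /sys/fs/bpf mount point:
uname -r                      # should report 4.9 or newer
mount | grep /sys/fs/bpf      # should list a bpf filesystem entry
sudo mount bpffs /sys/fs/bpf -t bpf   # only needed if the entry above is missing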
Refer to the section Requirements for detailed instructions on how to prepare your Kubernetes environment.
You will also need an external etcd version 3.1.0 or higher.
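If you are unsure which version your etcd cluster is running, one way to check is with etcdctl (this assumes the v3 API and uses the placeholder endpoint from this guide):
ETCDCTL_API=3 etcdctl --endpoints=http://etcd-endpoint1:2379 endpoint status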
Configure Cilium
When using an external kvstore, the address of the external kvstore needs to be configured in the ConfigMap. Download the base YAML and configure it with Helm:
Note
Make sure you have Helm 3 installed. Helm 2 is no longer supported.
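To verify which Helm version is installed before proceeding (it should report a 3.x release):
helm version --short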
Setup Helm repository:
helm repo add cilium https://helm.cilium.io/
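If the repository was added previously, you may want to refresh the local chart cache so the requested chart version is available:
helm repo update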
Deploy Cilium release via Helm:
helm install cilium cilium/cilium --version 1.10.2 \
--namespace kube-system \
--set etcd.enabled=true \
--set "etcd.endpoints[0]=http://etcd-endpoint1:2379" \
--set "etcd.endpoints[1]=http://etcd-endpoint2:2379" \
--set "etcd.endpoints[2]=http://etcd-endpoint3:2379"
If you do not want Cilium to store state in Kubernetes custom resources (CRDs), consider setting identityAllocationMode:
--set identityAllocationMode=kvstore
Optional: Configure the SSL certificates
Create a Kubernetes secret with the root certificate authority, and client-side key and certificate of etcd:
kubectl create secret generic -n kube-system cilium-etcd-secrets \
--from-file=etcd-client-ca.crt=ca.crt \
--from-file=etcd-client.key=client.key \
--from-file=etcd-client.crt=client.crt
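You can confirm the secret was created with the three expected keys (this only lists key names and sizes, not the certificate contents):
kubectl -n kube-system describe secret cilium-etcd-secrets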
Adjust the Helm values to enable SSL for etcd and use https instead of http in the etcd endpoint URLs:
helm install cilium cilium/cilium --version 1.10.2 \
--namespace kube-system \
--set etcd.enabled=true \
--set etcd.ssl=true \
--set "etcd.endpoints[0]=https://etcd-endpoint1:2379" \
--set "etcd.endpoints[1]=https://etcd-endpoint2:2379" \
--set "etcd.endpoints[2]=https://etcd-endpoint3:2379"
Validate the Installation
Cilium CLI
Install the latest version of the Cilium CLI. The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. clustermesh, Hubble).
Linux:
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz{,.sha256sum}
macOS:
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-darwin-amd64.tar.gz{,.sha256sum}
shasum -a 256 -c cilium-darwin-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-darwin-amd64.tar.gz /usr/local/bin
rm cilium-darwin-amd64.tar.gz{,.sha256sum}
Other: see the full page of releases.
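To confirm the CLI is installed and on your PATH:
cilium version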
To validate that Cilium has been properly installed, you can run:
$ cilium status --wait
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Hubble: disabled
\__/¯¯\__/ ClusterMesh: disabled
\__/
DaemonSet cilium Desired: 2, Ready: 2/2, Available: 2/2
Deployment cilium-operator Desired: 2, Ready: 2/2, Available: 2/2
Containers: cilium-operator Running: 2
cilium Running: 2
Image versions cilium quay.io/cilium/cilium:v1.9.5: 2
cilium-operator quay.io/cilium/operator-generic:v1.9.5: 2
Run the following command to validate that your cluster has proper network connectivity:
$ cilium connectivity test
ℹ️ Monitor aggregation detected, will skip some flow validation steps
✨ [k8s-cluster] Creating namespace for connectivity check...
(...)
---------------------------------------------------------------------------------------------------------------------
📋 Test Report
---------------------------------------------------------------------------------------------------------------------
✅ 69/69 tests successful (0 warnings)
Congratulations! You have a fully functional Kubernetes cluster with Cilium. 🎉
Manually
You can monitor the progress as Cilium and all required components are being installed:
$ kubectl -n kube-system get pods --watch
NAME READY STATUS RESTARTS AGE
cilium-operator-cb4578bc5-q52qk 0/1 Pending 0 8s
cilium-s8w5m 0/1 PodInitializing 0 7s
coredns-86c58d9df4-4g7dd 0/1 ContainerCreating 0 8m57s
coredns-86c58d9df4-4l6b2 0/1 ContainerCreating 0 8m57s
It may take a couple of minutes for all components to come up:
cilium-operator-cb4578bc5-q52qk 1/1 Running 0 4m13s
cilium-s8w5m 1/1 Running 0 4m12s
coredns-86c58d9df4-4g7dd 1/1 Running 0 13m
coredns-86c58d9df4-4l6b2 1/1 Running 0 13m
You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this.
kubectl create ns cilium-test
Deploy the check with:
kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.10/examples/kubernetes/connectivity-check/connectivity-check.yaml
This deploys a series of deployments that use various connectivity paths to connect to each other. The connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test:
$ kubectl get pods -n cilium-test
NAME READY STATUS RESTARTS AGE
echo-a-76c5d9bd76-q8d99 1/1 Running 0 66s
echo-b-795c4b4f76-9wrrx 1/1 Running 0 66s
echo-b-host-6b7fc94b7c-xtsff 1/1 Running 0 66s
host-to-b-multi-node-clusterip-85476cd779-bpg4b 1/1 Running 0 66s
host-to-b-multi-node-headless-dc6c44cb5-8jdz8 1/1 Running 0 65s
pod-to-a-79546bc469-rl2qq 1/1 Running 0 66s
pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p 1/1 Running 0 66s
pod-to-a-denied-cnp-6967cb6f7f-7h9fn 1/1 Running 0 66s
pod-to-b-intra-node-nodeport-9b487cf89-6ptrt 1/1 Running 0 65s
pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw 1/1 Running 0 66s
pod-to-b-multi-node-headless-7d44b85d69-mtscc 1/1 Running 0 66s
pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82 1/1 Running 0 65s
pod-to-external-1111-d56f47579-d79dz 1/1 Running 0 66s
pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7 1/1 Running 0 66s
Note
If you deploy the connectivity check to a single-node cluster, pods that check multi-node functionality will remain in the Pending state. This is expected since these pods need at least 2 nodes to be scheduled successfully.
Once done with the test, remove the cilium-test namespace:
kubectl delete ns cilium-test