Installation on AWS EKS

Create an EKS Cluster

The first step is to create an EKS cluster. This guide uses eksctl, but you can also follow the Getting Started with Amazon EKS guide.

Prerequisites

Ensure your AWS credentials are located in ~/.aws/credentials or are stored as environment variables.
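
For example, credentials can be supplied as environment variables (the values below are placeholders):

  export AWS_ACCESS_KEY_ID=<your-access-key-id>          # placeholder
  export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>  # placeholder
  export AWS_DEFAULT_REGION=us-west-2                    # example region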

Next, install eksctl:

Linux

  curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
  sudo mv /tmp/eksctl /usr/local/bin

MacOS

  brew install weaveworks/tap/eksctl

Ensure that aws-iam-authenticator is installed and in the executable path:

  which aws-iam-authenticator

If not, install it based on the AWS IAM authenticator documentation.

Create the cluster

Create an EKS cluster with eksctl. See the eksctl documentation for details on how to set credentials, change the region, configure the VPC, adjust the cluster size, etc.; a declarative config-file alternative is sketched after the example output below.

  eksctl create cluster --name test-cluster --without-nodegroup

You should see something like this:

  [ℹ] using region us-west-2
  [ℹ] setting availability zones to [us-west-2b us-west-2a us-west-2c]
  [...]
  [✔] EKS cluster "test-cluster" in "us-west-2" region is ready
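
eksctl can also create the cluster from a declarative config file instead of command-line flags. A minimal sketch, assuming the same cluster name and region as above (extend it for custom VPC or sizing settings):

  # cluster.yaml -- minimal ClusterConfig sketch
  apiVersion: eksctl.io/v1alpha5
  kind: ClusterConfig
  metadata:
    name: test-cluster   # example name, matches the command above
    region: us-west-2    # example region

It can then be applied with eksctl create cluster -f cluster.yaml --without-nodegroup.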

Delete VPC CNI (aws-node DaemonSet)

Cilium will manage ENIs instead of the VPC CNI, so the aws-node DaemonSet has to be deleted to prevent conflicting behavior.

Note

Once the aws-node DaemonSet is deleted, EKS will not try to restore it.

  kubectl -n kube-system delete daemonset aws-node
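
To confirm the DaemonSet was removed, you can query for it again; a NotFound error is the expected result:

  kubectl -n kube-system get daemonset aws-node
  # expected: Error from server (NotFound): daemonsets.apps "aws-node" not found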

Deploy Cilium

Note

First, make sure you have Helm 3 installed. Helm 2 is no longer supported.

Setup Helm repository:

  helm repo add cilium https://helm.cilium.io/

Deploy Cilium release via Helm:

  helm install cilium cilium/cilium --version 1.9.8 \
    --namespace kube-system \
    --set eni=true \
    --set ipam.mode=eni \
    --set egressMasqueradeInterfaces=eth0 \
    --set tunnel=disabled \
    --set nodeinit.enabled=true

Note

This helm command sets eni=true and tunnel=disabled, meaning that Cilium will allocate a fully-routable AWS ENI IP address for each pod, similar to the behavior of the Amazon VPC CNI plugin.

Cilium can alternatively run in EKS using an overlay mode that gives pods non-VPC-routable IPs. This allows running more pods per Kubernetes worker node than the ENI limit, but means that pod connectivity to resources outside the cluster (e.g., VMs in the VPC or AWS managed services) is masqueraded (i.e., SNATed) by Cilium to the VPC IP address of the Kubernetes worker node. Excluding the eni=true and tunnel=disabled lines from the helm command will configure Cilium to use overlay routing mode (the helm default).
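
For example, a minimal sketch of the same install in overlay mode, i.e. the command above without the ENI-specific settings (also dropping ipam.mode=eni, which only applies to ENI mode):

  helm install cilium cilium/cilium --version 1.9.8 \
    --namespace kube-system \
    --set egressMasqueradeInterfaces=eth0 \
    --set nodeinit.enabled=true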

Create a node group

  eksctl create nodegroup --cluster test-cluster --nodes 2
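
eksctl accepts additional options here, such as an explicit instance type; for example (the nodegroup name and instance type below are illustrative):

  # example: 2 nodes of an explicitly chosen instance type
  eksctl create nodegroup --cluster test-cluster --name ng-1 --node-type m5.large --nodes 2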

Validate the Installation

You can monitor Cilium and all required components as they are being installed:

  kubectl -n kube-system get pods --watch
  NAME                              READY   STATUS              RESTARTS   AGE
  cilium-operator-cb4578bc5-q52qk   0/1     Pending             0          8s
  cilium-s8w5m                      0/1     PodInitializing     0          7s
  coredns-86c58d9df4-4g7dd          0/1     ContainerCreating   0          8m57s
  coredns-86c58d9df4-4l6b2          0/1     ContainerCreating   0          8m57s

It may take a couple of minutes for all components to come up:

  cilium-operator-cb4578bc5-q52qk   1/1     Running   0          4m13s
  cilium-s8w5m                      1/1     Running   0          4m12s
  coredns-86c58d9df4-4g7dd          1/1     Running   0          13m
  coredns-86c58d9df4-4l6b2          1/1     Running   0          13m

Deploy the connectivity test

You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this.

  kubectl create ns cilium-test

Deploy the check with:

  kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml

This will deploy a series of deployments that use various connectivity paths to connect to each other. Connectivity paths include those with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test:

  $ kubectl get pods -n cilium-test
  NAME                                                     READY   STATUS    RESTARTS   AGE
  echo-a-76c5d9bd76-q8d99                                  1/1     Running   0          66s
  echo-b-795c4b4f76-9wrrx                                  1/1     Running   0          66s
  echo-b-host-6b7fc94b7c-xtsff                             1/1     Running   0          66s
  host-to-b-multi-node-clusterip-85476cd779-bpg4b          1/1     Running   0          66s
  host-to-b-multi-node-headless-dc6c44cb5-8jdz8            1/1     Running   0          65s
  pod-to-a-79546bc469-rl2qq                                1/1     Running   0          66s
  pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p                    1/1     Running   0          66s
  pod-to-a-denied-cnp-6967cb6f7f-7h9fn                     1/1     Running   0          66s
  pod-to-b-intra-node-nodeport-9b487cf89-6ptrt             1/1     Running   0          65s
  pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw           1/1     Running   0          66s
  pod-to-b-multi-node-headless-7d44b85d69-mtscc            1/1     Running   0          66s
  pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82            1/1     Running   0          65s
  pod-to-external-1111-d56f47579-d79dz                     1/1     Running   0          66s
  pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7   1/1     Running   0          66s

Note

If you deploy the connectivity check to a single node cluster, pods that check multi-node functionalities will remain in the Pending state. This is expected since these pods need at least 2 nodes to be scheduled successfully.
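
Once the test has completed successfully, the connectivity check can be removed by deleting its namespace:

  kubectl delete ns cilium-test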

Specify Environment Variables

Specify the namespace in which Cilium is installed in the CILIUM_NAMESPACE environment variable. Subsequent commands reference this environment variable.

  export CILIUM_NAMESPACE=kube-system

Enable Hubble for Cluster-Wide Visibility

Hubble is the component for observability in Cilium. To obtain cluster-wide visibility into your network traffic, deploy Hubble Relay and the UI as follows on your existing installation:

Installation via Helm

If you installed Cilium via helm install, you may enable Hubble Relay and UI with the following command:

  helm upgrade cilium cilium/cilium --version 1.9.8 \
    --namespace $CILIUM_NAMESPACE \
    --reuse-values \
    --set hubble.listenAddress=":4244" \
    --set hubble.relay.enabled=true \
    --set hubble.ui.enabled=true

On Cilium 1.9.1 and older, the Cilium agent pods will be restarted in the process.
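
Once the upgrade completes, you can check that the new components came up; a minimal sketch, assuming the default deployment names from the Cilium chart:

  kubectl -n $CILIUM_NAMESPACE get deployment hubble-relay hubble-ui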

Installation via quick-hubble-install.yaml

If you installed Cilium 1.9.2 or newer via the provided quick-install.yaml, you may deploy Hubble Relay and UI on top of your existing installation with the following command:

  kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-hubble-install.yaml

Installation via quick-hubble-install.yaml only works if the installed Cilium version is 1.9.2 or newer. Users of Cilium 1.9.0 or 1.9.1 are encouraged to upgrade to a newer version by applying the most recent Cilium quick-install.yaml first.

Alternatively, it is possible to manually generate a YAML manifest for the Cilium DaemonSet and Hubble Relay/UI as follows. The generated YAML can be applied on top of an existing installation:

  # Set this to your installed Cilium version
  export CILIUM_VERSION=1.9.1
  # Set any custom Helm values you may need for Cilium,
  # for example `--set operator.replicas=1` on single-node clusters.
  helm template cilium cilium/cilium --version $CILIUM_VERSION \
    --namespace $CILIUM_NAMESPACE \
    --set hubble.tls.auto.method="cronJob" \
    --set hubble.listenAddress=":4244" \
    --set hubble.relay.enabled=true \
    --set hubble.ui.enabled=true > cilium-with-hubble.yaml
  # This will modify your existing Cilium DaemonSet and ConfigMap
  kubectl apply -f cilium-with-hubble.yaml

The Cilium agent pods will be restarted in the process.

Once the Hubble UI pod is started, use port forwarding for the hubble-ui service. This allows opening the UI locally on a browser:

  kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui --address 0.0.0.0 --address :: 12000:80

And then open http://localhost:12000/ to access the UI.

Hubble UI is not the only way to get access to Hubble data. A command line tool, the Hubble CLI, is also available. It can be installed by following the instructions below:

Linux

Download the latest hubble release:

  export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
  curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz"
  curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz.sha256sum"
  sha256sum --check hubble-linux-amd64.tar.gz.sha256sum
  tar zxf hubble-linux-amd64.tar.gz

and move the hubble CLI to a directory listed in the $PATH environment variable. For example:

  sudo mv hubble /usr/local/bin

MacOS

Download the latest hubble release:

  export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
  curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz"
  curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz.sha256sum"
  shasum -a 256 -c hubble-darwin-amd64.tar.gz.sha256sum
  tar zxf hubble-darwin-amd64.tar.gz

and move the hubble CLI to a directory listed in the $PATH environment variable. For example:

  sudo mv hubble /usr/local/bin

Windows

Download the latest hubble release:

  curl -LO "https://raw.githubusercontent.com/cilium/hubble/master/stable.txt"
  set /p HUBBLE_VERSION=<stable.txt
  curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz"
  curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz.sha256sum"
  certutil -hashfile hubble-windows-amd64.tar.gz SHA256
  type hubble-windows-amd64.tar.gz.sha256sum
  :: verify that the checksums from the two commands above match
  tar zxf hubble-windows-amd64.tar.gz

and move the hubble.exe CLI to a directory listed in the %PATH% environment variable after extracting it from the tarball.
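
On any platform, you can verify the CLI afterwards:

  hubble version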

Similarly to the UI, use port forwarding for the hubble-relay service to make it available locally:

  kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay --address 0.0.0.0 --address :: 4245:80

In a separate terminal window, run the hubble status command specifying the Hubble Relay address:

  $ hubble --server localhost:4245 status
  Healthcheck (via localhost:4245): Ok
  Current/Max Flows: 5455/16384 (33.29%)
  Flows/s: 11.30
  Connected Nodes: 4/4

If Hubble Relay reports that all nodes are connected, as in the example output above, you can now use the CLI to observe flows of the entire cluster:

  hubble --server localhost:4245 observe
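
hubble observe also accepts filters; for example, to follow only flows from a single namespace (assuming the connectivity check from earlier is still running):

  hubble --server localhost:4245 observe --namespace cilium-test --follow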

If you encounter any problem at this point, you may seek help on Slack.

Tip

Hubble CLI configuration can be persisted using a configuration file or environment variables. This avoids having to specify options specific to a particular environment every time a command is run. Run hubble help config for more information.
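
For example, the Hubble Relay address can be persisted so that --server does not need to be repeated on every invocation (the server config key is an assumption; verify with hubble help config):

  hubble config set server localhost:4245   # key name assumed; see `hubble help config`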

For more information about Hubble and its components, see the Observability section.