Networking and security observability with Hubble
This guide provides a walkthrough of setting up a local Kubernetes cluster with Hubble and Cilium installed, in order to demonstrate some of Hubble’s capabilities.
If you haven’t read the Introduction to Cilium & Hubble yet, we’d encourage you to do that first.
The best way to get help if you get stuck is to ask a question on the Cilium Slack channel. With Cilium contributors across the globe, there is almost always someone available to help.
Set up a Kubernetes cluster
To run a Kubernetes cluster on your local machine, you have the choice to either set up a single-node cluster with minikube, or a local multi-node cluster on Docker using kind:
minikube runs a single-node Kubernetes cluster inside a virtual machine (VM) and is the easiest way to run a Kubernetes cluster locally.
kind runs a multi-node Kubernetes cluster using Docker containers to emulate cluster nodes. It allows you to experiment with the cluster-wide observability features of Hubble Relay.
If you are unsure which option to pick, follow the instructions for minikube, as it is less likely to cause friction.
Single-node cluster with minikube
Multi-node cluster with kind
Install kubectl & minikube
- Install kubectl version >= v1.10.0 as described in the Kubernetes Docs
- Install minikube >= v1.3.1 as per the minikube documentation: Install Minikube
Note
It is important to validate that you have minikube >= v1.3.1 installed. Older versions of minikube ship a kernel configuration that is not compatible with the TPROXY requirements of Cilium >= 1.6.0.
minikube version
minikube version: v1.3.1
commit: ca60a424ce69a4d79f502650199ca2b52f29e631
- Create a minikube cluster:
minikube start --network-plugin=cni --memory=4096
# Only available for minikube >= v1.12.1
minikube start --cni=cilium --memory=4096
Note
From minikube v1.12.1+, the Cilium networking plugin can be enabled directly with the --cni=cilium parameter of the minikube start command. With this flag, minikube will not only mount the eBPF file system but also deploy quick-install.yaml automatically.
- Mount the eBPF filesystem:
minikube ssh -- sudo mount bpffs -t bpf /sys/fs/bpf
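To verify that the BPF filesystem is mounted, you can check the mount table inside the VM (a quick sanity check; output may vary slightly between minikube versions):
minikube ssh -- mount | grep /sys/fs/bpf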
Note
If you need to install Cilium on a specific Kubernetes version, the --kubernetes-version vx.y.z parameter can be appended to the minikube start command when bootstrapping the local cluster. By default, minikube will install the most recent version of Kubernetes.
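For example, the following starts the cluster on a pinned Kubernetes release (the version shown here is purely illustrative):
minikube start --network-plugin=cni --memory=4096 --kubernetes-version=v1.18.3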
Install dependencies
- Install docker stable as described in Install Docker Engine
- Install kubectl version >= v1.14.0 as described in the Kubernetes Docs
- Install helm >= v3.0.3 per the Helm documentation: Installing Helm
- Install kind >= v0.7.0 per the kind documentation: Installation and Usage
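Before proceeding, you can confirm that each dependency meets the minimum version using the tools' own version commands:
docker version --format '{{.Server.Version}}'
kubectl version --client --short
helm version --short
kind version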
Configure kind
kind cluster creation is configured using a YAML configuration file. This step is necessary to disable the default CNI and replace it with Cilium.
Create a kind-config.yaml file based on the following template. It will create a cluster with three worker nodes and one control-plane node.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
By default, kind uses the latest version of Kubernetes available when that kind release was created. To run a different Kubernetes version, an image has to be defined for each node. See the Node Configuration documentation for more information.
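For example, a control-plane node pinned to a specific node image could look like this (the tag is illustrative; kind recommends also pinning the image digest listed in the release notes of your kind version):
nodes:
- role: control-plane
  image: kindest/node:v1.18.2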
Tip
By default, kind uses the following pod and service subnets:
Networking.PodSubnet = "10.244.0.0/16"
Networking.ServiceSubnet = "10.96.0.0/12"
If any of these subnets conflict with your local network address range, update the networking section of the kind configuration file to specify different subnets that do not conflict; otherwise, you risk connectivity issues when deploying Cilium. For example:
networking:
  disableDefaultCNI: true
  podSubnet: "10.10.0.0/16"
  serviceSubnet: "10.11.0.0/16"
Create a cluster
To create a cluster with the configuration defined above, pass the kind-config.yaml you created with the --config flag of kind.
kind create cluster --config=kind-config.yaml
After a couple of seconds or minutes, a four-node cluster should be created.
A new kubectl context (kind-kind) should be added to KUBECONFIG or, if unset, to ${HOME}/.kube/config:
kubectl cluster-info --context kind-kind
Note
The cluster nodes will remain in state NotReady until Cilium is deployed. This behavior is expected.
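You can check the node status at any time:
kubectl get nodes --context kind-kind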
Preload images
Preload the cilium image into each worker node in the kind cluster:
docker pull cilium/cilium:v1.8.10
kind load docker-image cilium/cilium:v1.8.10
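To confirm the image was loaded, you can list the images on one of the nodes (kind-worker is the default container name of the first worker node):
docker exec kind-worker crictl images | grep cilium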
Deploy Cilium and Hubble
This section shows how to install Cilium, enable Hubble and deploy Hubble Relay and Hubble’s graphical UI.
Single-node cluster with minikube
Multi-node cluster with kind
Deploy Hubble and Cilium with the provided pre-rendered YAML manifest:
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.8/install/kubernetes/experimental-install.yaml
Note
First, make sure you have Helm 3 installed.
If you have (or are planning to have) Helm 2 charts (and Tiller) in the same cluster, there should be no issue, as both versions are mutually compatible in order to support gradual migration. The Cilium chart targets Helm 3 (v3.0.3 and above).
Set up the Helm repository:
helm repo add cilium https://helm.cilium.io/
Deploy Hubble and Cilium with the following Helm command:
helm install cilium cilium/cilium --version 1.8.10 \
--namespace kube-system \
--set global.nodeinit.enabled=true \
--set global.kubeProxyReplacement=partial \
--set global.hostServices.enabled=false \
--set global.externalIPs.enabled=true \
--set global.nodePort.enabled=true \
--set global.hostPort.enabled=true \
--set global.pullPolicy=IfNotPresent \
--set config.ipam=kubernetes \
--set global.hubble.enabled=true \
--set global.hubble.listenAddress=":4244" \
--set global.hubble.relay.enabled=true \
--set global.hubble.ui.enabled=true
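Once the command completes, you can inspect the deployed release with standard Helm commands:
helm status cilium --namespace kube-system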
Note
Please note that Hubble Relay and Hubble UI are currently in beta status and are not yet recommended for production use.
Validate the Installation
You can monitor as Cilium and all required components are being installed:
kubectl -n kube-system get pods --watch
NAME READY STATUS RESTARTS AGE
cilium-2rlwx 0/1 Init:0/2 0 2s
cilium-ncqtb 0/1 Init:0/2 0 2s
cilium-node-init-9h9dd 0/1 ContainerCreating 0 2s
cilium-node-init-cmks4 0/1 ContainerCreating 0 2s
cilium-node-init-vnx5n 0/1 ContainerCreating 0 2s
cilium-node-init-zhs66 0/1 ContainerCreating 0 2s
cilium-nrzsp 0/1 Init:0/2 0 2s
cilium-operator-599dbcf854-7w4rr 0/1 Pending 0 2s
cilium-pghbg 0/1 Init:0/2 0 2s
coredns-66bff467f8-gnzk7 0/1 Pending 0 6m6s
coredns-66bff467f8-wzh49 0/1 Pending 0 6m6s
etcd-kind-control-plane 1/1 Running 0 6m15s
hubble-relay-5684848cc8-6ldhj 0/1 ContainerCreating 0 2s
hubble-ui-54c6bc4cdc-h5drq 0/1 Pending 0 2s
kube-apiserver-kind-control-plane 1/1 Running 0 6m15s
kube-controller-manager-kind-control-plane 1/1 Running 0 6m15s
kube-proxy-dchqv 1/1 Running 0 5m51s
kube-proxy-jkvhr 1/1 Running 0 5m53s
kube-proxy-nb9b2 1/1 Running 0 6m5s
kube-proxy-ttf7z 1/1 Running 0 5m50s
kube-scheduler-kind-control-plane 1/1 Running 0 6m15s
cilium-node-init-zhs66 1/1 Running 0 4s
It may take a couple of minutes for all components to come up:
kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
cilium-2rlwx 1/1 Running 0 16m
cilium-ncqtb 1/1 Running 0 16m
cilium-node-init-9h9dd 1/1 Running 1 16m
cilium-node-init-cmks4 1/1 Running 1 16m
cilium-node-init-vnx5n 1/1 Running 1 16m
cilium-node-init-zhs66 1/1 Running 1 16m
cilium-nrzsp 1/1 Running 0 16m
cilium-operator-599dbcf854-7w4rr 1/1 Running 0 16m
cilium-pghbg 1/1 Running 0 16m
coredns-66bff467f8-gnzk7 1/1 Running 0 22m
coredns-66bff467f8-wzh49 1/1 Running 0 22m
etcd-kind-control-plane 1/1 Running 0 22m
hubble-relay-5684848cc8-2z6qk 1/1 Running 0 21s
hubble-ui-54c6bc4cdc-g5mgd 1/1 Running 0 17s
kube-apiserver-kind-control-plane 1/1 Running 0 22m
kube-controller-manager-kind-control-plane 1/1 Running 0 22m
kube-proxy-dchqv 1/1 Running 0 21m
kube-proxy-jkvhr 1/1 Running 0 21m
kube-proxy-nb9b2 1/1 Running 0 22m
kube-proxy-ttf7z 1/1 Running 0 21m
kube-scheduler-kind-control-plane 1/1 Running 0 22m
Accessing the Graphical User Interface
Hubble provides a graphical user interface which displays a service map of your service dependencies. To access Hubble UI, you can use the following command to forward the port of the web frontend to your local machine:
kubectl port-forward -n kube-system svc/hubble-ui 12000:80
Open http://localhost:12000 in your browser. You should see a screen inviting you to select a namespace; use the namespace selector dropdown in the top left corner to pick one:
In this example, we are deploying the Star Wars demo from the Identity-Aware and HTTP-Aware Policy Enforcement guide. However, you can apply the same techniques to observe application connectivity dependencies in your own namespaces and clusters, for applications of any type.
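If you want to follow along, the demo application and the L7 policy used below can be deployed with the manifests from the Cilium v1.8 tree (the same ones used in that guide):
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/minikube/http-sw-app.yaml
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/minikube/sw_l3_l4_l7_policy.yaml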
Once the deployment is ready, issue a request from both spaceships to emulate some traffic.
$ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
$ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
These requests will then be displayed in the UI as service dependencies between the different pods:
At the bottom of the interface, you may also inspect each recent Hubble flow event in your current namespace individually.
Note
If you enable Layer 7 Protocol Visibility on your pods, the Hubble UI service map will display the HTTP endpoints which are being accessed by the requests.
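With Cilium 1.8, visibility can be enabled via a pod annotation; for example, for the deathstar pods (the annotation value shown is an illustrative ingress HTTP rule):
kubectl annotate pod -l class=deathstar io.cilium.proxy-visibility="<Ingress/80/TCP/HTTP>"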
Inspecting the cluster’s network traffic with Hubble Relay
Now let’s install the Hubble CLI on your PC/laptop. This will allow you to inspect the traffic using Hubble Relay.
Linux
MacOS
Windows
Download the latest hubble release:
export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz"
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz.sha256sum"
sha256sum --check hubble-linux-amd64.tar.gz.sha256sum
tar zxf hubble-linux-amd64.tar.gz
and move the hubble CLI to a directory listed in the $PATH environment variable. For example:
sudo mv hubble /usr/local/bin
Download the latest hubble release:
export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz"
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz.sha256sum"
shasum -a 256 -c hubble-darwin-amd64.tar.gz.sha256sum
tar zxf hubble-darwin-amd64.tar.gz
and move the hubble CLI to a directory listed in the $PATH environment variable. For example:
sudo mv hubble /usr/local/bin
Download the latest hubble release:
curl -LO "https://raw.githubusercontent.com/cilium/hubble/master/stable.txt"
set /p HUBBLE_VERSION=<stable.txt
curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz"
curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz.sha256sum"
certutil -hashfile hubble-windows-amd64.tar.gz SHA256
type hubble-windows-amd64.tar.gz.sha256sum
:: verify that the checksum from the two commands above match
tar zxf hubble-windows-amd64.tar.gz
and move the hubble.exe CLI to a directory listed in the %PATH% environment variable after extracting it from the tarball.
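On any platform, you can then verify that the CLI is correctly installed:
hubble version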
In order to access Hubble Relay with the hubble CLI, make sure to port-forward the Hubble Relay service locally:
$ kubectl port-forward -n kube-system svc/hubble-relay 4245:80
Note
This terminal window needs to remain open to keep port-forwarding in place. Open a separate terminal window to use the hubble CLI.
Confirm that the Hubble Relay service is healthy via hubble status:
$ hubble status --server localhost:4245
Healthcheck (via localhost:4245): Ok
Max Flows: 16384
In order to avoid passing --server localhost:4245 to every command, you may export the following environment variable:
$ export HUBBLE_DEFAULT_SOCKET_PATH=localhost:4245
Let’s now issue some requests to emulate some traffic again. This first request is allowed by the policy.
$ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
This next request is accessing an HTTP endpoint which is denied by policy.
$ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
Access denied
Finally, this last request will hang because the xwing pod does not have the org=empire label required by policy. Press Control-C to kill the curl request, or wait for it to time out.
$ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
command terminated with exit code 28
Let’s now inspect this traffic using the CLI. The command below filters all traffic on the application layer (L7, HTTP) to the deathstar pod:
$ hubble observe --pod deathstar --protocol http
TIMESTAMP SOURCE DESTINATION TYPE VERDICT SUMMARY
Jun 18 13:52:23.843 default/tiefighter:52568 default/deathstar-5b7489bc84-8wvng:80 http-request FORWARDED HTTP/1.1 POST http://deathstar.default.svc.cluster.local/v1/request-landing
Jun 18 13:52:23.844 default/deathstar-5b7489bc84-8wvng:80 default/tiefighter:52568 http-response FORWARDED HTTP/1.1 200 0ms (POST http://deathstar.default.svc.cluster.local/v1/request-landing)
Jun 18 13:52:31.019 default/tiefighter:52628 default/deathstar-5b7489bc84-8wvng:80 http-request DROPPED HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port
The following command shows all traffic to the deathstar pod that has been dropped:
$ hubble observe --pod deathstar --verdict DROPPED
TIMESTAMP SOURCE DESTINATION TYPE VERDICT SUMMARY
Jun 18 13:52:31.019 default/tiefighter:52628 default/deathstar-5b7489bc84-8wvng:80 http-request DROPPED HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port
Jun 18 13:52:38.321 default/xwing:34138 default/deathstar-5b7489bc84-v4s7d:80 Policy denied DROPPED TCP Flags: SYN
Jun 18 13:52:38.321 default/xwing:34138 default/deathstar-5b7489bc84-v4s7d:80 Policy denied DROPPED TCP Flags: SYN
Jun 18 13:52:39.327 default/xwing:34138 default/deathstar-5b7489bc84-v4s7d:80 Policy denied DROPPED TCP Flags: SYN
Feel free to further inspect the traffic. To get help for the observe command, use hubble help observe.
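For instance, you can follow flows live or switch to JSON output for further processing (both flags are documented in the CLI help):
hubble observe --pod deathstar --protocol http --follow
hubble observe --pod deathstar --verdict DROPPED --output json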
Cleanup
Once you are done experimenting with Hubble, you can remove all traces of the cluster by running the following command:
Single-node cluster with minikube
Multi-node cluster with kind
minikube delete
kind delete cluster