Monitoring & Metrics
Cilium and Hubble can both be configured to serve Prometheus metrics. Prometheus is a pluggable metrics collection and storage system and can act as a data source for Grafana, a metrics visualization frontend. Unlike push-based collectors such as statsd, Prometheus scrapes (pulls) metrics from each source itself.
Cilium and Hubble metrics can be enabled independently of each other.
Cilium Metrics
Cilium metrics provide insights into the state of Cilium itself, namely of the cilium-agent, cilium-envoy, and cilium-operator processes. To run Cilium with Prometheus metrics enabled, deploy it with the prometheus.enabled=true Helm value set.
Cilium metrics are exported under the cilium_ Prometheus namespace. Envoy metrics are exported under the envoy_ Prometheus namespace, of which the Cilium-defined metrics are exported under the envoy_cilium_ namespace. When running and collecting in Kubernetes they will be tagged with a pod name and namespace.
Installation
You can enable metrics for cilium-agent (including Envoy) with the Helm value prometheus.enabled=true. To enable metrics for cilium-operator, use operator.prometheus.enabled=true.
helm install cilium cilium/cilium --version 1.12.0 \
--namespace kube-system \
--set prometheus.enabled=true \
--set operator.prometheus.enabled=true
The ports can be configured via prometheus.port, proxy.prometheus.port, or operator.prometheus.port, respectively.
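If you prefer to keep these settings in a Helm values file rather than repeated --set flags, a minimal sketch could look like the following (the port numbers shown are the defaults mentioned on this page and are only repeated here for illustration):
# values.yaml -- example Helm values enabling Cilium and operator metrics
prometheus:
  enabled: true
  port: 9962            # cilium-agent metrics
proxy:
  prometheus:
    port: 9964          # Envoy metrics served alongside cilium-agent
operator:
  prometheus:
    enabled: true
    port: 9963          # cilium-operator metrics
Pass this file to helm install with -f values.yaml to get the same result as the command above.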
When metrics are enabled, all Cilium components will have the following annotations. They can be used to signal Prometheus whether to scrape metrics:
prometheus.io/scrape: true
prometheus.io/port: 9962
To collect Envoy metrics, the Cilium chart will create a Kubernetes headless service named cilium-agent with the prometheus.io/scrape: 'true' annotation set:
prometheus.io/scrape: true
prometheus.io/port: 9964
This additional headless service is needed because each component can only carry one Prometheus scrape and port annotation.
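For reference, the created object is roughly equivalent to the following sketch (simplified; only the annotations are documented above, the selector and port names are assumptions about the chart-rendered Service):
# Simplified sketch of the headless Service used to scrape Envoy metrics.
apiVersion: v1
kind: Service
metadata:
  name: cilium-agent
  namespace: kube-system
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9964"
spec:
  clusterIP: None              # headless
  selector:
    k8s-app: cilium            # assumed label of the cilium-agent pods
  ports:
    - name: envoy-metrics
      port: 9964
      protocol: TCP
      targetPort: 9964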
Prometheus will pick up the Cilium and Envoy metrics automatically if the following option is set in the scrape_configs section:
scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: (.+):(?:\d+);(\d+)
        replacement: ${1}:${2}
        target_label: __address__
Hubble Metrics
While Cilium metrics allow you to monitor the state of Cilium itself, Hubble metrics allow you to monitor the network behavior of your Cilium-managed Kubernetes pods with respect to connectivity and security.
Installation
To deploy Cilium with Hubble metrics enabled, you need to enable Hubble with hubble.enabled=true and provide a set of Hubble metrics you want to enable via hubble.metrics.enabled.
Some of the metrics can also be configured with additional options. See the Hubble exported metrics section for the full list of available metrics and their options.
helm install cilium cilium/cilium --version 1.12.0 \
--namespace kube-system \
--set hubble.metrics.enabled="{dns,drop,tcp,flow,icmp,http}"
The port of the Hubble metrics can be configured with the hubble.metrics.port Helm value.
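As with the Cilium metrics above, these settings can also be kept in a values file; a minimal sketch (the port shown is the default used later on this page):
# values.yaml -- example Hubble metrics configuration
hubble:
  enabled: true
  metrics:
    enabled:
      - dns
      - drop
      - tcp
      - flow
      - icmp
      - http
    port: 9965          # Hubble metrics port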
Note
L7 metrics, such as HTTP, are only emitted for pods that enable Layer 7 Protocol Visibility.
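As a rough illustration, per-pod visibility can be requested with the policy.cilium.io/proxy-visibility annotation; the pod name, image, and annotation value below are placeholders:
# Hypothetical pod requesting L7 (HTTP) visibility on ingress port 80 so
# that http metrics are emitted for its traffic.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  annotations:
    policy.cilium.io/proxy-visibility: "<Ingress/80/TCP/HTTP>"
spec:
  containers:
    - name: app
      image: nginx:1.23        # placeholder image
      ports:
        - containerPort: 80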
When deployed with a non-empty hubble.metrics.enabled Helm value, the Cilium chart will create a Kubernetes headless service named hubble-metrics with the prometheus.io/scrape: 'true' annotation set:
prometheus.io/scrape: true
prometheus.io/port: 9965
Set the following options in the scrape_configs section of Prometheus to have it scrape all Hubble metrics from the endpoints automatically:
scrape_configs:
  - job_name: 'kubernetes-endpoints'
    scrape_interval: 30s
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: (.+)(?::\d+);(\d+)
        replacement: $1:$2
Example Prometheus & Grafana Deployment
If you don’t have an existing Prometheus and Grafana stack running, you can deploy a stack with:
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.12/examples/kubernetes/addons/prometheus/monitoring-example.yaml
It will run Prometheus and Grafana in the cilium-monitoring namespace. If you have enabled either Cilium or Hubble metrics, they will automatically be scraped by Prometheus. You can then expose Grafana to access it via your browser.
kubectl -n cilium-monitoring port-forward service/grafana --address 0.0.0.0 --address :: 3000:3000
Open your browser and access http://localhost:3000/
Metrics Reference
cilium-agent
Configuration
To expose any metrics, invoke cilium-agent with the --prometheus-serve-addr option. This option takes an IP:Port pair; passing an empty IP (e.g. :9962) will bind the server to all available interfaces (there is usually only one in a container).
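For clusters that do not use the annotation-based discovery shown earlier, a minimal static scrape job could look like the following sketch; the target address is a placeholder for a node running cilium-agent:
# Minimal static Prometheus scrape job for a single cilium-agent endpoint.
# 192.0.2.10 is a placeholder address; 9962 is the default metrics port.
scrape_configs:
  - job_name: 'cilium-agent-static'
    static_configs:
      - targets: ['192.0.2.10:9962']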
Exported Metrics
Endpoint
Name | Labels | Description |
---|---|---|
 | | Number of endpoints managed by this agent |
 | | Count of all endpoint regenerations that have completed |
 | | Endpoint regeneration time stats |
 | | Count of all endpoints |
Services
Name | Labels | Description |
---|---|---|
 | | Number of services events labeled by action type |
Cluster health
Name | Labels | Description |
---|---|---|
 | | Number of nodes that cannot be reached |
 | | Number of health endpoints that cannot be reached |
 | | Number of failing controllers |
Node Connectivity
Name | Labels | Description |
---|---|---|
 | | The last observed status of both ICMP and HTTP connectivity between the current Cilium agent and other Cilium nodes |
 | | The last observed latency between the current Cilium agent and other Cilium nodes in seconds |
Clustermesh
Name | Labels | Description |
---|---|---|
 | | The total number of global services in the cluster mesh |
 | | The total number of remote clusters meshed with the local cluster |
 | | The total number of failures related to the remote cluster |
 | | The total number of nodes in the remote cluster |
 | | The timestamp of the last failure of the remote cluster |
 | | The readiness status of the remote cluster |
Datapath
Name | Labels | Description |
---|---|---|
 | | Number of conntrack dump resets. Happens when a BPF entry gets removed while dumping the map is in progress. |
 | | Number of times that the conntrack garbage collector process was run |
 | | The number of alive and deleted conntrack entries at the end of a garbage collector run labeled by datapath family |
 | | The number of alive and deleted conntrack entries at the end of a garbage collector run |
 | | Duration in seconds of the garbage collector process |
IPSec
Name | Labels | Description |
---|---|---|
 | | Total number of xfrm errors |
eBPF
Name | Labels | Description |
---|---|---|
 | | Duration of eBPF system call performed |
 | | Number of eBPF map operations performed |
 | | Map pressure defined as fill-up ratio of the map. Policy maps are exceptionally reported only when the ratio is over 0.1. |
 | | Max memory used by eBPF maps installed in the system |
 | | Max memory used by eBPF programs installed in the system |
Both bpf_maps_virtual_memory_max_bytes and bpf_progs_virtual_memory_max_bytes currently report the system-wide memory usage of eBPF, both directly and not directly managed by Cilium. This might change in the future to report only the eBPF memory usage directly managed by Cilium.
Drops/Forwards (L3/L4)
Name | Labels | Description |
---|---|---|
 | | Total dropped packets |
 | | Total dropped bytes |
 | | Total forwarded packets |
 | | Total forwarded bytes |
Policy
Name | Labels | Description |
---|---|---|
 | | Number of policies currently loaded |
 | | Number of policies currently loaded (deprecated) |
 | | Total number of policies regenerated successfully |
 | | Policy regeneration time stats labeled by the scope |
 | | Highest policy revision number in the agent |
 | | Number of times a policy import has failed |
 | | Number of endpoints labeled by policy enforcement status |
Policy L7 (HTTP/Kafka)
Name | Labels | Description |
---|---|---|
 | | Number of redirects installed for endpoints |
 | | Seconds waited for upstream server to reply to a request |
 | | Number of total datapath update timeouts due to FQDN IP updates |
 | | Number of total L7 requests/responses |
Identity
Name | Labels | Description |
---|---|---|
 | | Number of identities currently allocated |
Events external to Cilium
Name | Labels | Description |
---|---|---|
 | | Last timestamp when we received an event |
Controllers
Name | Labels | Description |
---|---|---|
 | | Number of times that a controller process was run |
 | | Duration in seconds of the controller process |
SubProcess
Name | Labels | Description |
---|---|---|
 | | Number of times that Cilium has started a subprocess |
Kubernetes
Name | Labels | Description |
---|---|---|
 | | Number of Kubernetes events received |
 | | Number of Kubernetes events processed |
 | | Duration in seconds to complete a CNP status update |
 | | Number of terminating endpoint events received from Kubernetes |
IPAM
Name | Labels | Description |
---|---|---|
 | | Number of IPAM events received labeled by action and datapath family type |
 | | Number of allocated IP addresses |
KVstore
Name | Labels | Description |
---|---|---|
 | | Duration of kvstore operation |
 | | Duration in seconds that a received event was blocked before it could be queued |
 | | Number of quorum errors |
Agent
Name | Labels | Description |
---|---|---|
 | | Duration of various bootstrap phases |
 | | Processing time of all the API calls made to the cilium-agent, labeled by API method, API path and returned HTTP code |
FQDN
Name | Labels | Description |
---|---|---|
 | | Number of FQDNs that have been cleaned by the FQDN garbage collector job |
API Rate Limiting
Name | Labels | Description |
---|---|---|
 | | Most recent adjustment factor for automatic adjustment |
 | | Total number of API requests processed |
 | | Mean and estimated processing duration in seconds |
 | | Current rate limiting configuration (limit and burst) |
 | | Current and maximum allowed number of requests in flight |
 | | Mean, min, and max wait duration |
 | | Histogram of wait duration per API call processed |
cilium-operator
Configuration
cilium-operator can be configured to serve metrics by running with the option --enable-metrics. By default, the operator will expose metrics on port 9963; the port can be changed with the option --operator-prometheus-serve-addr.
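If your Prometheus does not use the annotation-based pod discovery shown earlier, a minimal static scrape job for the operator could look like this sketch; the target address is a placeholder for the node or pod IP running cilium-operator:
# Minimal static Prometheus scrape job for cilium-operator metrics.
# 192.0.2.20 is a placeholder address; 9963 is the default metrics port.
scrape_configs:
  - job_name: 'cilium-operator-static'
    static_configs:
      - targets: ['192.0.2.20:9963']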
Exported Metrics
All metrics are exported under the cilium_operator_ Prometheus namespace.
IPAM
Name | Labels | Description |
---|---|---|
 | | Number of IPs allocated |
 | | Number of IP allocation operations |
 | | Number of interface creation operations |
 | | Number of interfaces with addresses available |
 | | Number of nodes unable to allocate more addresses |
 | | Number of synchronization operations with external IPAM API |
 | | Duration of interactions with external IPAM API |
 | | Duration of rate limiting while accessing external IPAM API |
Hubble
Configuration
Hubble metrics are served by a Hubble instance running inside cilium-agent. The command-line options to configure them are --enable-hubble, --hubble-metrics-server, and --hubble-metrics. --hubble-metrics-server takes an IP:Port pair, but passing an empty IP (e.g. :9965) will bind the server to all available interfaces. --hubble-metrics takes a comma-separated list of metrics.
Some metrics can take additional semicolon-separated options per metric, e.g. --hubble-metrics="dns:query;ignoreAAAA,http:destinationContext=pod-short" will enable the dns metric with the query and ignoreAAAA options, and the http metric with the destinationContext=pod-short option.
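When installing via Helm, the same per-metric options can be embedded in the hubble.metrics.enabled list; a hedged sketch mirroring the command-line example above:
# values.yaml -- Hubble metrics with per-metric options (assumed to mirror
# the --hubble-metrics flags shown above).
hubble:
  metrics:
    enabled:
      - dns:query;ignoreAAAA
      - http:destinationContext=pod-short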
Context Options
Most Hubble metrics can be configured to add the source and/or destination context as a label. The options are called sourceContext and destinationContext. The possible values are:
Option Value | Description |
---|---|
identity | All Cilium security identity labels |
namespace | Kubernetes namespace name |
pod | Kubernetes pod name |
pod-short | Short version of the Kubernetes pod name. Typically the deployment/replicaset name. |
dns | All known DNS names of the source or destination (comma-separated) |
ip | The IPv4 or IPv6 address |
When specifying the source and/or destination context, multiple contexts can be specified by separating them via the | symbol. When multiple are specified, the first non-empty value is added to the metric as a label. For example, a metric configuration of flow:destinationContext=dns|ip will first try to use the DNS name of the target for the label. If no DNS name is known for the target, it will fall back and use the IP address of the target instead.
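A hedged values-file sketch combining context options for the flow metric:
# Example: label flows with the source namespace (falling back to the
# source IP) and the destination DNS name (falling back to the IP).
hubble:
  metrics:
    enabled:
      - flow:sourceContext=namespace|ip;destinationContext=dns|ip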
Exported Metrics
Hubble metrics are exported under the hubble_ Prometheus namespace.
dns
Name | Labels | Description |
---|---|---|
 | | Number of DNS queries observed |
 | | Number of DNS responses observed |
 | | Number of DNS response types |
Options
Option Key | Option Value | Description |
---|---|---|
query | N/A | Include the query as label "query" |
ignoreAAAA | N/A | Ignore any AAAA requests/responses |
This metric supports Context Options.
drop
Name | Labels | Description |
---|---|---|
 | | Number of drops |
Options
This metric supports Context Options.
flow
Name | Labels | Description |
---|---|---|
 | | Total number of flows processed |
Options
This metric supports Context Options.
flows-to-world
This metric counts all non-reply flows containing the reserved:world label in their destination identity. By default, dropped flows are counted if and only if the drop reason is Policy denied. Set the any-drop option to count all dropped flows.
Name | Labels | Description |
---|---|---|
 | | Total number of flows to reserved:world |
Options
Option Key | Option Value | Description |
---|---|---|
any-drop | N/A | Count any dropped flows regardless of the drop reason. |
port | N/A | Include the destination port as label |
This metric supports Context Options.
http
Name | Labels | Description |
---|---|---|
 | | Count of HTTP requests |
 | | Count of HTTP responses |
 | | Quantiles of HTTP request duration in seconds |
Options
This metric supports Context Options.
icmp
Name | Labels | Description |
---|---|---|
 | | Number of ICMP messages |
Options
This metric supports Context Options.
port-distribution
Name | Labels | Description |
---|---|---|
 | | Number of packets distributed by destination port |
Options
This metric supports Context Options.
tcp
Name | Labels | Description |
---|---|---|
 | | TCP flag occurrences |
Options
This metric supports Context Options.