Monitor Calico component metrics
Big picture
Use Prometheus configured for Calico components to get valuable metrics about the health of Calico.
Value
Using the open-source Prometheus monitoring and alerting toolkit, you can view time-series metrics from Calico components in the Prometheus or Grafana interfaces.
Concepts
About Prometheus
The Prometheus monitoring tool scrapes metrics from instrumented jobs and displays time series data in a visualizer (such as Grafana). For Calico, the “jobs” that Prometheus can harvest metrics from are the Felix and Typha components.
About Calico Felix, Typha, and kube-controllers components
Felix is a daemon that runs on every machine that implements network policy. Felix is the brains of Calico. Typha is an optional set of pods that extends Felix to scale traffic between Calico nodes and the datastore. The kube-controllers pod runs a set of controllers which are responsible for a variety of control plane functions, such as resource garbage collection and synchronization with the Kubernetes API.
You can configure Felix, Typha, and/or kube-controllers to provide metrics to Prometheus.
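If you want to check which of these components are running in your cluster before you begin, a command like the following can help (operator-based installs place the Calico pods in the calico-system namespace, while manifest-based installs use kube-system):
kubectl get pods -A | grep -E 'calico-node|calico-typha|calico-kube-controllers'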
Before you begin…
In this tutorial we assume that you have completed all other introductory tutorials and have a running Kubernetes cluster with Calico installed. You can use either kubectl or calicoctl to perform the following steps. Depending on which tool you would like to use, make sure you have the necessary prerequisites as shown below.
- kubectl
- calicoctl
If you wish to modify Calico configurations with the kubectl binary, you need to make sure you have the Calico API server in your cluster. The API server allows you to manage resources within the projectcalico.org/v3 API group.
note
Operator based installs include the API server by default.
For more information about the API server please use this link.
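If you are unsure whether the API server is installed, you can check that the projectcalico.org API group is being served:
kubectl api-resources --api-group=projectcalico.org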
You can run calicoctl on any host with network access to the Calico datastore, as either a binary or a container, to manage Calico APIs in the projectcalico.org/v3 API group.
For more information about calicoctl please use this link.
How to
This tutorial will go through the necessary steps to implement basic monitoring of Calico with Prometheus.
- Configure Calico to enable metrics reporting.
- Create the namespace and service account that Prometheus will need.
- Deploy and configure Prometheus.
- View the metrics in the Prometheus dashboard and create a simple graph.
1. Configure Calico to enable metrics reporting
Felix configuration
Felix Prometheus metrics are disabled by default.
note
A comprehensive list of configuration values can be found at this link.
Use the following command to enable Felix metrics.
- kubectl
- calicoctl
kubectl patch felixconfiguration default --type merge --patch '{"spec":{"prometheusMetricsEnabled": true}}'
You should see output like the following:
felixconfiguration.projectcalico.org/default patched
calicoctl patch felixconfiguration default --patch '{"spec":{"prometheusMetricsEnabled": true}}'
You should see output like the following:
Successfully patched 1 'FelixConfiguration' resource
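Optionally, verify that the setting took effect; the following uses kubectl, but calicoctl get felixconfiguration default -o yaml works the same way if you are not running the Calico API server:
kubectl get felixconfiguration default -o yaml | grep prometheusMetricsEnabled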
Creating a service to expose Felix metrics
Prometheus uses Kubernetes services to dynamically discover endpoints. Here you will create a service named felix-metrics-svc, which Prometheus will use to discover all the Felix metrics endpoints.
note
Felix by default uses port 9091 TCP to publish its metrics.
- Operator
- Manifest
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: felix-metrics-svc
  namespace: calico-system
spec:
  clusterIP: None
  selector:
    k8s-app: calico-node
  ports:
  - port: 9091
    targetPort: 9091
EOF
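To confirm that the service is selecting the Felix endpoints, you can list its endpoints; the number of addresses should roughly match your node count (a quick sanity check, not required for the tutorial):
kubectl get endpoints felix-metrics-svc -n calico-system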
If running Calico for Windows, also create a service for Windows nodes:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: felix-windows-metrics-svc
  namespace: calico-system
spec:
  clusterIP: None
  selector:
    k8s-app: calico-node-windows
  ports:
  - port: 9091
    targetPort: 9091
EOF
By default, the Windows firewall blocks listening on ports. To allow Calico to manage the Windows firewall rules for the Prometheus metrics ports, enable the windowsManageFirewallRules setting in FelixConfiguration:
kubectl patch felixConfiguration default --type merge --patch '{"spec":{"windowsManageFirewallRules": "Enabled"}}'
See the FelixConfiguration reference for more details. You can also add a Windows firewall rule that allows listening on the Prometheus metrics port instead of having Calico manage it.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: felix-metrics-svc
  namespace: kube-system
spec:
  clusterIP: None
  selector:
    k8s-app: calico-node
  ports:
  - port: 9091
    targetPort: 9091
EOF
Typha configuration
- Operator
- Manifest
An Operator installation of Calico automatically deploys one or more Typha instances, depending on the scale of your cluster. By default, metrics for these instances are disabled.
Use the following command to instruct tigera-operator to enable Typha metrics.
kubectl patch installation default --type=merge -p '{"spec": {"typhaMetricsPort":9093}}'
You should see a result similar to:
installation.operator.tigera.io/default patched
note
Typha is optional; if you don't have Typha in your cluster, you can skip the Typha configuration section.
If you are uncertain whether you have Typha in your cluster, execute the following command:
kubectl get pods -A | grep typha
If your result is similar to what is shown below, you are using Typha in your cluster.
note
The pod name suffixes shown below are dynamically generated. Your Typha instance might have a different suffix.
kube-system calico-typha-56fccfcdc4-z27xj 1/1 Running 0 28h
kube-system calico-typha-horizontal-autoscaler-74f77cd87c-6hx27 1/1 Running 0 28h
You can enable Typha metrics to be consumed by Prometheus in two ways.
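For example, one common approach with a manifest-based install is to set Typha's Prometheus environment variables directly on the calico-typha deployment. The following is a minimal sketch, assuming Typha runs as the calico-typha deployment in the kube-system namespace and that your manifest supports the TYPHA_PROMETHEUSMETRICSENABLED and TYPHA_PROMETHEUSMETRICSPORT variables:
kubectl set env deployment/calico-typha -n kube-system TYPHA_PROMETHEUSMETRICSENABLED=true TYPHA_PROMETHEUSMETRICSPORT=9093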
Creating a service to expose Typha metrics
note
Typha uses TCP port 9091 by default to publish its metrics. However, if Calico is installed using the Amazon yaml file, this port will be 9093, as it is set manually via the TYPHA_PROMETHEUSMETRICSPORT environment variable.
- Operator
- Manifest
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: typha-metrics-svc
  namespace: calico-system
spec:
  clusterIP: None
  selector:
    k8s-app: calico-typha
  ports:
  - port: 9093
    targetPort: 9093
EOF
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: typha-metrics-svc
  namespace: kube-system
spec:
  clusterIP: None
  selector:
    k8s-app: calico-typha
  ports:
  - port: 9093
    targetPort: 9093
EOF
kube-controllers configuration
Prometheus metrics are enabled by default on TCP port 9094 for calico-kube-controllers.
- Operator
- Manifest
The operator automatically creates a service that exposes these metrics.
You can use the following command to verify it.
kubectl get svc -n calico-system
You should see a result similar to:
calico-kube-controllers-metrics ClusterIP 10.43.77.57 <none> 9094/TCP 39d
Creating a service to expose kube-controllers metrics
Create a service to expose calico-kube-controllers metrics to Prometheus.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: kube-controllers-metrics-svc
  namespace: kube-system
spec:
  clusterIP: None
  selector:
    k8s-app: calico-kube-controllers
  ports:
  - port: 9094
    targetPort: 9094
EOF
Optionally, you can use the following command to modify the metrics port by patching the KubeControllersConfiguration resource.
note
Setting this value to zero will disable metrics in the kube-controllers pod.
- kubectl
- calicoctl
kubectl patch kubecontrollersconfiguration default --type=merge --patch '{"spec":{"prometheusMetricsPort": 9095}}'
calicoctl patch kubecontrollersconfiguration default --patch '{"spec":{"prometheusMetricsPort": 9095}}'
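If you changed the port, you can confirm the new value before moving on (substitute calicoctl for kubectl if you are not running the Calico API server):
kubectl get kubecontrollersconfiguration default -o yaml | grep prometheusMetricsPort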
2. Cluster preparation
Namespace creation
A Namespace isolates resources in your cluster. Here you will create a Namespace called calico-monitoring to hold your monitoring resources.
note
A guide to Kubernetes namespaces can be found at this link.
kubectl create -f -<<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: calico-monitoring
  labels:
    app: ns-calico-monitoring
    role: monitoring
EOF
Service account creation
You need to provide Prometheus with a ServiceAccount that has the required permissions to collect information from Calico.
note
A comprehensive guide to user roles and authentication can be found at this link.
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: calico-prometheus-user
rules:
- apiGroups: [""]
  resources:
  - endpoints
  - services
  - pods
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-prometheus-user
  namespace: calico-monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-prometheus-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-prometheus-user
subjects:
- kind: ServiceAccount
  name: calico-prometheus-user
  namespace: calico-monitoring
EOF
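You can confirm that the RBAC objects were created; the ClusterRole and ClusterRoleBinding are cluster-scoped, so they take no namespace flag:
kubectl get serviceaccount calico-prometheus-user -n calico-monitoring
kubectl get clusterrole,clusterrolebinding calico-prometheus-user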
3. Install Prometheus
Create the Prometheus config file
We can configure Prometheus using a ConfigMap to persistently store the desired settings.
note
A comprehensive guide to the Prometheus configuration file can be found at this link.
- Operator
- Manifest
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: calico-monitoring
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
      external_labels:
        monitor: 'tutorial-monitor'
    scrape_configs:
    - job_name: 'prometheus'
      scrape_interval: 5s
      static_configs:
      - targets: ['localhost:9090']
    - job_name: 'felix_metrics'
      scrape_interval: 5s
      scheme: http
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name]
        regex: felix-metrics-svc
        replacement: $1
        action: keep
    - job_name: 'felix_windows_metrics'
      scrape_interval: 5s
      scheme: http
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name]
        regex: felix-windows-metrics-svc
        replacement: $1
        action: keep
    - job_name: 'typha_metrics'
      scrape_interval: 5s
      scheme: http
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name]
        regex: typha-metrics-svc
        replacement: $1
        action: keep
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        regex: calico-typha
        action: drop
    - job_name: 'kube_controllers_metrics'
      scrape_interval: 5s
      scheme: http
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name]
        regex: calico-kube-controllers-metrics
        replacement: $1
        action: keep
EOF
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: calico-monitoring
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
      external_labels:
        monitor: 'tutorial-monitor'
    scrape_configs:
    - job_name: 'prometheus'
      scrape_interval: 5s
      static_configs:
      - targets: ['localhost:9090']
    - job_name: 'felix_metrics'
      scrape_interval: 5s
      scheme: http
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name]
        regex: felix-metrics-svc
        replacement: $1
        action: keep
    - job_name: 'felix_windows_metrics'
      scrape_interval: 5s
      scheme: http
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name]
        regex: felix-windows-metrics-svc
        replacement: $1
        action: keep
    - job_name: 'typha_metrics'
      scrape_interval: 5s
      scheme: http
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name]
        regex: typha-metrics-svc
        replacement: $1
        action: keep
    - job_name: 'kube_controllers_metrics'
      scrape_interval: 5s
      scheme: http
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name]
        regex: kube-controllers-metrics-svc
        replacement: $1
        action: keep
EOF
Create Prometheus pod
Now that you have a ServiceAccount with permissions to gather metrics and a valid config file for Prometheus, it's time to create the Prometheus pod.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: prometheus-pod
  namespace: calico-monitoring
  labels:
    app: prometheus-pod
    role: monitoring
spec:
  nodeSelector:
    kubernetes.io/os: linux
  serviceAccountName: calico-prometheus-user
  containers:
  - name: prometheus-pod
    image: prom/prometheus
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    volumeMounts:
    - name: config-volume
      mountPath: /etc/prometheus/prometheus.yml
      subPath: prometheus.yml
    ports:
    - containerPort: 9090
  volumes:
  - name: config-volume
    configMap:
      name: prometheus-config
EOF
Check your cluster pods to confirm that pod creation was successful and that the Prometheus pod is Running.
kubectl get pods prometheus-pod -n calico-monitoring
It should return something like the following.
NAME READY STATUS RESTARTS AGE
prometheus-pod 1/1 Running 0 16s
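If the pod is not Running, the Prometheus logs usually point to the problem, for example a malformed prometheus.yml:
kubectl logs prometheus-pod -n calico-monitoring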
4. View metrics
You can access the Prometheus dashboard by using the port-forwarding feature.
kubectl port-forward pod/prometheus-pod 9090:9090 -n calico-monitoring
Browse to http://localhost:9090 and you should see the Prometheus dashboard. Type felix_active_local_endpoints in the Expression input textbox, then hit the Execute button. The Console table should be populated with all of your nodes and the number of endpoints on each of them.
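If you prefer the command line, you can run the same query through the Prometheus HTTP API while the port-forward is active; a minimal example:
curl -s 'http://localhost:9090/api/v1/query?query=felix_active_local_endpoints'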
note
A list of Felix metrics can be found at this link. Similar lists can be found for kube-controllers and Typha.
Push the Add Graph button, and you should see the metric plotted on a graph.
Cleanup
This section will help you remove the resources that you created by following this tutorial. Skip this step if you would like to deploy Grafana to visualize component metrics. First, remove the services by executing the following commands:
- Operator
- Manifest
kubectl delete service felix-metrics-svc -n calico-system
kubectl delete service typha-metrics-svc -n calico-system
If running Calico for Windows, also clean up the Windows node service:
kubectl delete service felix-windows-metrics-svc -n calico-system
kubectl delete service felix-metrics-svc -n kube-system
kubectl delete service typha-metrics-svc -n kube-system
kubectl delete service kube-controllers-metrics-svc -n kube-system
Return Calico configurations to their default state.
- kubectl
- calicoctl
kubectl patch felixConfiguration default --type merge --patch '{"spec":{"prometheusMetricsEnabled": false}}'
kubectl patch installation default --type=json -p '[{"op": "remove", "path":"/spec/typhaMetricsPort"}]'
calicoctl patch felixConfiguration default --patch '{"spec":{"prometheusMetricsEnabled": false}}'
Finally, remove the namespace and RBAC permissions.
kubectl delete namespace calico-monitoring
kubectl delete ClusterRole calico-prometheus-user
kubectl delete clusterrolebinding calico-prometheus-user
Best practices
If you enable Calico metrics to Prometheus, a best practice is to use network policy to limit access to the Calico metrics endpoints. For details, see Secure Calico Prometheus endpoints.
If you are not using Prometheus metrics, we recommend disabling the Prometheus ports entirely for more security.