Configure metrics

Enable or disable Dapr metrics

By default, each Dapr system process emits Go runtime/process metrics and its own set of Dapr metrics.

Prometheus endpoint

The Dapr sidecar exposes a Prometheus-compatible metrics endpoint that you can scrape to gain a greater understanding of how Dapr is behaving.
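
To scrape it with your own Prometheus instance, a minimal scrape configuration sketch looks like the following. It assumes a sidecar running locally on the default metrics port 9090 (described below); the job name is illustrative:

  # prometheus.yml (fragment)
  scrape_configs:
    - job_name: "dapr-sidecar"          # illustrative job name
      static_configs:
        - targets: ["localhost:9090"]   # default Dapr metrics port; adjust if overridden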

Configuring metrics using the CLI

The metrics application endpoint is enabled by default. You can disable it by passing the command line argument --enable-metrics=false.

The default metrics port is 9090. You can override this by passing the command line argument --metrics-port to daprd.

Configuring metrics in Kubernetes

You can also enable/disable the metrics for a specific application by setting the dapr.io/enable-metrics: "false" annotation on your application deployment. With the metrics exporter disabled, daprd does not open the metrics listening port.

The following Kubernetes deployment example shows how metrics are explicitly enabled with the port specified as “9090”.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nodeapp
    labels:
      app: node
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: node
    template:
      metadata:
        labels:
          app: node
        annotations:
          dapr.io/enabled: "true"
          dapr.io/app-id: "nodeapp"
          dapr.io/app-port: "3000"
          dapr.io/enable-metrics: "true"
          dapr.io/metrics-port: "9090"
      spec:
        containers:
        - name: node
          image: dapriosamples/hello-k8s-node:latest
          ports:
          - containerPort: 3000
          imagePullPolicy: Always
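
Conversely, to turn metrics off for this application, only the dapr.io/enable-metrics annotation changes. A minimal sketch of the relevant pod-template fragment (all other fields as above):

  annotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "nodeapp"
    dapr.io/app-port: "3000"
    dapr.io/enable-metrics: "false"   # daprd does not open the metrics listening port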

Configuring metrics using application configuration

You can also enable or disable metrics via the application configuration. To disable metrics collection in the Dapr sidecars by default, set spec.metrics.enabled to false.

  apiVersion: dapr.io/v1alpha1
  kind: Configuration
  metadata:
    name: tracing
    namespace: default
  spec:
    metrics:
      enabled: false

Optimizing HTTP metrics reporting with path matching

When invoking Dapr using HTTP, metrics are created for each requested method by default. This can result in a high number of metrics, known as high cardinality, which can impact memory usage and CPU.

Path matching allows you to manage and control the cardinality of HTTP metrics in Dapr. It works by aggregation: rather than reporting a separate metric for each unique path, you report a combined count for paths that match a pattern. Learn more about how to set the cardinality in configuration.

This configuration is opt-in and is enabled via the Dapr configuration spec.metrics.http.pathMatching. When defined, it enables path matching, which standardizes the specified paths reported in metrics. This reduces the number of unique metrics paths, making metrics more manageable and reducing resource consumption in a controlled way.

When spec.metrics.http.pathMatching is combined with the increasedCardinality flag set to false, non-matched paths are transformed into a catch-all bucket to control and limit cardinality, preventing unbounded path growth. Conversely, when increasedCardinality is true (the default), non-matched paths are passed through as they normally would be, allowing for potentially higher cardinality but preserving the original path data.
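
For reference, the following Configuration sketch shows where these fields sit in the spec; the resource name is illustrative, and the http block matches the snippets used in the examples below:

  apiVersion: dapr.io/v1alpha1
  kind: Configuration
  metadata:
    name: appconfig   # illustrative name
  spec:
    metrics:
      enabled: true
      http:
        increasedCardinality: false
        pathMatching:
          - /orders/{orderID}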

Examples of path matching in HTTP metrics

The following examples demonstrate how to use path matching in Dapr for managing HTTP metrics. In each example, the metrics are collected from 5 HTTP requests to the /orders endpoint with different order IDs. By adjusting cardinality and utilizing path matching, you can fine-tune metric granularity to balance detail and resource efficiency.

These examples illustrate the cardinality of the metrics, highlighting that high cardinality configurations result in many entries, which correspond to higher memory usage for handling metrics. For simplicity, the following example focuses on a single metric: dapr_http_server_request_count.

Low cardinality with path matching (recommended)

Configuration:

  http:
    increasedCardinality: false
    pathMatching:
      - /orders/{orderID}

Metrics generated:

  # matched paths
  dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders/{orderID}",status="200"} 5
  # unmatched paths
  dapr_http_server_request_count{app_id="order-service",method="GET",path="",status="200"} 1

With low cardinality and path matching configured, you get the best of both worlds: metrics for the important endpoints are grouped by the matched pattern, while overall cardinality stays bounded. This approach helps avoid high memory usage and potential security issues.

Low cardinality without path matching

Configuration:

  http:
    increasedCardinality: false

Metrics generated:

  dapr_http_server_request_count{app_id="order-service",method="GET",path="",status="200"} 5

In low cardinality mode, the path, which is the main source of unbounded cardinality, is dropped. This results in metrics that primarily indicate the number of requests made to the service for a given HTTP method, but without any information about the paths invoked.

High cardinality with path matching

Configuration:

  http:
    increasedCardinality: true
    pathMatching:
      - /orders/{orderID}

Metrics generated:

  dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders/{orderID}",status="200"} 5

This example results from the same HTTP requests as the example above, but with path matching configured for the path /orders/{orderID}. By using path matching, you achieve reduced cardinality by grouping the metrics based on the matched path.

High cardinality without path matching

Configuration:

  http:
    increasedCardinality: true

Metrics generated:

  dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders/1",status="200"} 1
  dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders/2",status="200"} 1
  dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders/3",status="200"} 1
  dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders/4",status="200"} 1
  dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders/5",status="200"} 1

For each request, a new metric is created with the request path. This process continues for every request made to a new order ID, resulting in unbounded cardinality since the IDs are ever-growing.

HTTP metrics exclude verbs

The excludeVerbs option allows you to exclude specific HTTP verbs from being reported in the metrics. This can be useful in high-performance applications where memory savings are critical.
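
Like pathMatching and increasedCardinality, excludeVerbs sits under spec.metrics.http in the Configuration spec. A minimal sketch, with an illustrative resource name:

  apiVersion: dapr.io/v1alpha1
  kind: Configuration
  metadata:
    name: appconfig   # illustrative name
  spec:
    metrics:
      enabled: true
      http:
        excludeVerbs: true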

Examples of excluding HTTP verbs in metrics

The following examples demonstrate how to exclude HTTP verbs in Dapr for managing HTTP metrics.

Default - Include HTTP verbs

Configuration:

  http:
    excludeVerbs: false

Metrics generated:

  dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders",status="200"} 1
  dapr_http_server_request_count{app_id="order-service",method="POST",path="/orders",status="200"} 1

In this example, the HTTP method is included in the metrics, resulting in a separate metric for each HTTP method used against the /orders endpoint.

Exclude HTTP verbs

Configuration:

  http:
    excludeVerbs: true

Metrics generated:

  dapr_http_server_request_count{app_id="order-service",method="",path="/orders",status="200"} 2

In this example, the HTTP method is excluded from the metrics, resulting in a single metric for all requests to the /orders endpoint.

Configuring custom latency histogram buckets

Dapr uses cumulative histogram metrics to group latency values into buckets, where each bucket contains:

  • A count of the number of requests with that latency
  • All the requests with lower latency

Using the default latency bucket configurations

By default, Dapr groups request latency metrics (in milliseconds) into the following buckets:

  1, 2, 3, 4, 5, 6, 8, 10, 13, 16, 20, 25, 30, 40, 50, 65, 80, 100, 130, 160, 200, 250, 300, 400, 500, 650, 800, 1000, 2000, 5000, 10000, 20000, 50000, 100000

Grouping latency values in a cumulative fashion allows buckets to be used or dropped as needed for increased or decreased granularity of data. For example, if a request takes 3ms, it’s counted in the 3ms bucket, the 4ms bucket, the 5ms bucket, and so on. Similarly, if a request takes 10ms, it’s counted in the 10ms bucket, the 13ms bucket, the 16ms bucket, and so on. After these two requests have completed, the 3ms bucket has a count of 1 and the 10ms bucket has a count of 2, since both the 3ms and 10ms requests are included here.

This shows up as follows:

  Bucket (ms):  1  2  3  4  5  6  8  10  13  16  20  25  30  40  50  65  80  100  130  160  …  100000
  Count:        0  0  1  1  1  1  1  2   2   2   2   2   2   2   2   2   2   2    2    2   …  2

The default number of buckets works well for most use cases, but can be adjusted as needed. Each request is recorded across 34 buckets per latency metric, so the volume of stored metrics can grow considerably for a large number of applications. More accurate latency percentiles can be achieved by increasing the number of buckets. However, a higher number of buckets increases the amount of memory used to store the metrics, potentially negatively impacting your monitoring system.

It is recommended to keep the number of latency buckets set to the default value, unless you are seeing unwanted memory pressure in your monitoring system. Configuring the number of buckets allows you to choose, per application, whether:

  • You want to see more detail, using a higher number of buckets
  • Broader values are sufficient, using a reduced number of buckets

Take note of the default latency values your applications are producing before configuring the number of buckets.

Customizing latency buckets to your scenario

Tailor the latency buckets to your needs by modifying the spec.metrics.latencyDistributionBuckets field in the Dapr configuration spec for your application(s).

For example, if you aren’t interested in extremely low latency values (1-10ms), you can group them in a single 10ms bucket. Similarly, you can group the high values in a single bucket (1000-5000ms), while keeping more detail in the middle range of values that you are most interested in.

The following Configuration spec example replaces the default 34 buckets with 11 buckets, giving a higher level of granularity in the middle range of values:

  apiVersion: dapr.io/v1alpha1
  kind: Configuration
  metadata:
    name: custom-metrics
  spec:
    metrics:
      enabled: true
      latencyDistributionBuckets: [10, 25, 40, 50, 70, 100, 150, 200, 500, 1000, 5000]

Transform metrics with regular expressions

You can set regular expressions for every metric exposed by the Dapr sidecar to “transform” their values. See a list of all Dapr metrics.

The name of the rule must match the name of the metric that is transformed. The following example shows how to apply a regular expression for the label method in the metric dapr_runtime_service_invocation_req_sent_total:

  apiVersion: dapr.io/v1alpha1
  kind: Configuration
  metadata:
    name: daprConfig
  spec:
    metrics:
      enabled: true
      http:
        increasedCardinality: true
      rules:
        - name: dapr_runtime_service_invocation_req_sent_total
          labels:
            - name: method
              regex:
                "orders/": "orders/.+"

When this configuration is applied, a recorded metric with the method label of orders/a746dhsk293972nz is replaced with orders/.

Using regular expressions to reduce metrics cardinality is considered legacy. We encourage all users to set spec.metrics.http.increasedCardinality to false instead, which is simpler to configure and offers better performance.

