Service metrics

Every Knative Service has a proxy container, the queue proxy, that proxies connections to the application container. A number of metrics are reported for queue proxy performance.

Using the following metrics, you can measure whether requests are being queued at the proxy side (a sign that backpressure is needed) and what the actual delay is in serving requests at the application side.

Queue proxy metrics

The following metrics are reported for the requests endpoint.

| Metric Name | Description | Type | Tags | Unit | Status |
| --- | --- | --- | --- | --- | --- |
| revision_request_count | The number of requests that are routed to queue-proxy | Counter | configuration_name, container_name, namespace_name, pod_name, response_code, response_code_class, revision_name, service_name | Dimensionless | Stable |
| revision_request_latencies | The response time of requests, in milliseconds | Histogram | configuration_name, container_name, namespace_name, pod_name, response_code, response_code_class, revision_name, service_name | Milliseconds | Stable |
| revision_app_request_count | The number of requests that are routed to user-container | Counter | configuration_name, container_name, namespace_name, pod_name, response_code, response_code_class, revision_name, service_name | Dimensionless | Stable |
| revision_app_request_latencies | The response time of requests, in milliseconds | Histogram | configuration_name, namespace_name, pod_name, response_code, response_code_class, revision_name, service_name | Milliseconds | Stable |
| revision_queue_depth | The current number of items in the serving and waiting queue, or not reported if unlimited concurrency | Gauge | configuration_name, container_name, namespace_name, pod_name, response_code_class, revision_name, service_name | Dimensionless | Stable |

Note

The revision_queue_depth metric is exported only if the revision's concurrency hard limit (containerConcurrency) is set to a value greater than 1.
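
For illustration, here is a minimal Service sketch that sets a concurrency hard limit greater than 1 so that revision_queue_depth is reported; the service name and image are placeholders, not values from this documentation:

  apiVersion: serving.knative.dev/v1
  kind: Service
  metadata:
    name: helloworld                 # placeholder name
  spec:
    template:
      spec:
        containerConcurrency: 10     # hard limit > 1 enables revision_queue_depth
        containers:
          - image: docker.io/example/helloworld:latest   # placeholder image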

Exposing Queue proxy metrics

Queue proxy exports metrics for the requests endpoint on port 9091. The metrics can be scraped by Prometheus when metrics.request-metrics-backend-destination is set to prometheus (the default) in the observability configmap. The backend can be changed to opencensus, which uses a push model and requires a destination address, set in the same configmap via metrics.opencensus-address. You can control the reporting period for both backends with metrics.request-metrics-reporting-period-seconds. If metrics.request-metrics-reporting-period-seconds is not set at all, the reporting period depends on the value of the global reporting period, metrics.reporting-period-seconds, which affects both the control and data planes. If neither property is set, the reporting period defaults to 5s for the Prometheus backend and 60s for the OpenCensus one.
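
As an illustration of the Prometheus (pull) path, the following scrape job sketch keeps only queue-proxy containers exposing the request metrics port 9091; the job name and relabeling rules are assumptions for this example and are not shipped with Knative:

  scrape_configs:
    - job_name: knative-queue-proxy-request-metrics   # illustrative job name
      kubernetes_sd_configs:
        - role: pod
      relabel_configs:
        # Keep only the queue-proxy container of each pod ...
        - source_labels: [__meta_kubernetes_pod_container_name]
          action: keep
          regex: queue-proxy
        # ... and only the container port serving request metrics.
        - source_labels: [__meta_kubernetes_pod_container_port_number]
          action: keep
          regex: "9091"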

Here is a sample configuration for the observability configmap in order to connect to the OpenTelemetry collector:

  metrics.request-metrics-backend-destination: "opencensus"
  metrics.opencensus-address: "otel-collector.metrics:55678"
  metrics.request-metrics-reporting-period-seconds: "1"
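
Applied as a complete manifest, these keys go under the data section of the config-observability ConfigMap in the knative-serving namespace; this is a minimal sketch that shows only the keys discussed above:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: config-observability
    namespace: knative-serving
  data:
    metrics.request-metrics-backend-destination: "opencensus"
    metrics.opencensus-address: "otel-collector.metrics:55678"
    metrics.request-metrics-reporting-period-seconds: "1"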

Note

The reporting period is set to 1s so that metrics can be pushed as soon as possible, but this could overwhelm the targeted metrics backend. Setting the value to zero or a negative number does not remove the delay; it falls back to 10s, the default reporting period defined by the OpenCensus metrics client library, which Knative Serving uses to export metrics.