Generate Istio Metrics Without Mixer [Experimental]
The following information describes an experimental feature, which is intended for evaluation purposes only.
Istio 1.3 adds experimental support to generate service-level HTTP metrics directly in the Envoy proxies. This feature lets you continue to monitor your service meshes using the tools Istio provides without needing Mixer.
The in-proxy generation of service-level metrics replaces the following HTTP metrics that Mixer currently generates:
- istio_requests_total
- istio_request_duration_seconds
- istio_request_size
Enable service-level metrics generation in Envoy
To generate service-level metrics directly in the Envoy proxies, follow these steps:
- To prevent duplicate telemetry generation, disable calls to istio-telemetry in the mesh:
$ istioctl manifest apply --set values.mixer.telemetry.enabled=false,values.mixer.policy.enabled=false
Alternatively, you can comment out mixerCheckServer and mixerReportServer in your mesh configuration, as sketched after these steps.
- To generate service-level metrics, the proxies must exchange workload metadata. A custom filter handles this exchange. Enable the metadata exchange filter with the following command:
$ kubectl -n istio-system apply -f https://raw.githubusercontent.com/istio/proxy/release-1.4/extensions/stats/testdata/istio/metadata-exchange_filter.yaml
- To generate the service-level metrics themselves, apply the custom stats filter:
$ kubectl -n istio-system apply -f https://raw.githubusercontent.com/istio/proxy/release-1.4/extensions/stats/testdata/istio/stats_filter.yaml
- Go to the Istio Mesh Grafana dashboard. Verify that the dashboard displays the same telemetry as before, but without any requests flowing through Istio’s Mixer. You can also confirm that the proxies emit the metrics themselves, as sketched after these steps.
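If you use the alternative approach in the first step, the following is a minimal sketch of the change. It assumes a default installation where the mesh configuration lives in the istio ConfigMap in the istio-system namespace and where the Mixer endpoints use their default addresses; adjust both for your cluster.

$ kubectl -n istio-system edit configmap istio

# Inside the mesh configuration, comment out the Mixer endpoints so the proxies
# stop calling istio-policy and istio-telemetry:
# mixerCheckServer: istio-policy.istio-system.svc.cluster.local:15004
# mixerReportServer: istio-telemetry.istio-system.svc.cluster.local:15004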
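To confirm that the filters were applied and that the proxies now generate the metrics, you can list the EnvoyFilter resources and query a sidecar's Prometheus endpoint directly. This is only a sketch: the namespace and pod name below are placeholders, and it assumes the sidecar exposes Envoy stats on port 15090, as in a default Istio installation.

$ kubectl -n istio-system get envoyfilter
$ kubectl -n <namespace> exec <pod-name> -c istio-proxy -- curl -s localhost:15090/stats/prometheus | grep istio_requests_total

If the stats filter is working, the second command prints istio_requests_total series generated by the proxy itself.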
Differences with Mixer-based generation
Small differences between the in-proxy generation and Mixer-based generation of service-level metrics persist in Istio 1.3. We won’t consider the functionality stable until in-proxy generation has full feature-parity with Mixer-based generation.
Until then, please consider these differences:
- The istio_request_duration_seconds latency metric has the new name istio_request_duration_milliseconds. The new metric uses milliseconds instead of seconds. We updated the Grafana dashboards to account for these changes.
- The istio_request_duration_milliseconds metric uses more granular buckets inside the proxy, providing increased accuracy in latency reporting.
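If you maintain your own dashboards or alerts against the old metric, they need the same updates. As an illustration only, a P90 latency query over the new histogram might look like the following; the destination_workload label is one of the standard Istio metric labels, and your queries may group by different labels.

histogram_quantile(0.90, sum(rate(istio_request_duration_milliseconds_bucket[5m])) by (le, destination_workload))

Remember that values are now reported in milliseconds, so any thresholds expressed in seconds need to be multiplied by 1000.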
Performance impact
As this work is currently experimental, our primary focus has been on establishing the base functionality. We have identified several performance optimizations based on our initial experimentation, and expect to continue to improve the performance and scalability of this feature as it develops.
We won’t consider this feature for promotion to Beta or Stable status until performance and scalability assessments and improvements have been made.
The performance of your mesh depends on your configuration. To learn more, see our performance best practices post.
Here’s what we’ve measured so far:
- Together, the new filters use about 10% less CPU for the istio-proxy containers than the Mixer filter.
- The new filters add ~5ms P90 latency at 1000 rps compared to Envoy proxies configured with no telemetry filters.
- If you only use the istio-telemetry service to generate service-level metrics, you can switch it off (one way to do so is sketched below). This could save up to ~0.5 vCPU per 1000 rps of mesh traffic, and could halve the CPU Istio consumes while collecting standard metrics.
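Re-applying the manifest with Mixer telemetry disabled (the istioctl command in the first step above) is the durable way to switch it off. As a quick sketch, assuming a default installation where the deployment is named istio-telemetry, you can also scale it down directly; if a HorizontalPodAutoscaler manages the deployment, it may scale it back up.

$ kubectl -n istio-system scale deployment istio-telemetry --replicas=0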
Known limitations
- We only support exporting these metrics via Prometheus.
- We do not support generating TCP metrics.
- We provide no proxy-side customization or configuration of the generated metrics.