Traffic Trace
This policy enables publishing traces to a third-party tracing solution.
Tracing is supported over HTTP, HTTP2, and gRPC protocols. You must explicitly specify the protocol for each service and data plane proxy you want to enable tracing for.
You must also:
- Add a tracing backend. You specify a tracing backend as a Mesh resource property.
- Add a TrafficTrace resource. You pass the backend to the TrafficTrace resource.
Kuma currently supports the following trace exposition formats:
- zipkin: traces in this format can be sent to many different tracing backends.
- datadog
Services still need to be instrumented to preserve the trace chain across requests made between different services.
You can instrument with a language library of your choice (for zipkin and for datadog). For HTTP you can also manually forward the following headers:
- x-request-id
- x-b3-traceid
- x-b3-parentspanid
- x-b3-spanid
- x-b3-sampled
- x-b3-flags
Add a tracing backend to the mesh
Zipkin
This assumes you already have a zipkin compatible collector running. If you haven’t, read the observability docs.
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  tracing:
    defaultBackend: jaeger-collector
    backends:
      - name: jaeger-collector
        type: zipkin
        sampling: 100.0
        conf:
          url: http://jaeger-collector.mesh-observability:9411/api/v2/spans # If not using `kuma install observability` replace by any zipkin compatible collector address.
Apply the configuration with kubectl apply -f [..].
type: Mesh
name: default
tracing:
  defaultBackend: jaeger-collector
  backends:
    - name: jaeger-collector
      type: zipkin
      sampling: 100.0
      conf:
        url: http://my-jaeger-collector:9411/api/v2/spans # Replace by any zipkin compatible collector address.
Apply the configuration with kumactl apply -f [..] or with the HTTP API.
Datadog
This assumes a Datadog agent is configured and running. If you haven't already, check the Datadog observability page.
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  tracing:
    defaultBackend: datadog-collector
    backends:
      - name: datadog-collector
        type: datadog
        sampling: 100.0
        conf:
          address: trace-svc.default.svc.cluster.local
          port: 8126
          splitService: true
where trace-svc is the name of the Kubernetes Service you specified when you configured the Datadog APM agent.
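For illustration only, a minimal sketch of such a Service follows; the app: datadog-agent selector label is an assumption, so match it to the labels on your Datadog agent pods. It is separate from the Mesh configuration, which you still apply as shown next.

apiVersion: v1
kind: Service
metadata:
  name: trace-svc # the name referenced by conf.address above
  namespace: default
spec:
  selector:
    app: datadog-agent # assumed label; replace with your Datadog agent pod labels
  ports:
    - name: traceport
      port: 8126 # default Datadog APM (trace agent) port
      protocol: TCP
      targetPort: 8126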
Apply the configuration with kubectl apply -f [..].
type: Mesh
name: default
tracing:
  defaultBackend: datadog-collector
  backends:
    - name: datadog-collector
      type: datadog
      sampling: 100.0
      conf:
        address: 127.0.0.1
        port: 8126
        splitService: true
Apply the configuration with kumactl apply -f [..] or with the HTTP API.
The defaultBackend property specifies the tracing backend to use if it's not explicitly specified in the TrafficTrace resource.
The splitService property determines if Datadog service names should be split based on traffic direction and destination. For example, with splitService: true and a backend service that communicates with a couple of databases, you would get service names like backend_INBOUND, backend_OUTBOUND_db1, and backend_OUTBOUND_db2 in Datadog. By default, this property is set to false.
Add TrafficTrace resource
Next, create TrafficTrace resources that specify how to collect traces, and which backend to send them to.
apiVersion: kuma.io/v1alpha1
kind: TrafficTrace
mesh: default
metadata:
  name: trace-all-traffic
spec:
  selectors:
    - match:
        kuma.io/service: '*'
  conf:
    backend: jaeger-collector # or the name of any backend defined for the mesh
Apply the configuration with kubectl apply -f [..].
type: TrafficTrace
name: trace-all-traffic
mesh: default
selectors:
  - match:
      kuma.io/service: '*'
conf:
  backend: jaeger-collector # or the name of any backend defined for the mesh
Apply the configuration with kumactl apply -f [..] or with the HTTP API.
When the backend field is omitted, traces are forwarded to the defaultBackend of that Mesh.
You can also add tags to apply the TrafficTrace resource to only a subset of data plane proxies. TrafficTrace is a Dataplane policy, so you can match on any Dataplane tag in its selectors, as in the sketch below.
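For example, the following sketch (in the universal format used above) traces only the data plane proxies of a hypothetical backend service tagged with version: v2; the service name and the version tag are assumptions, so substitute tags that your Dataplane resources actually carry.

type: TrafficTrace
name: trace-backend-v2
mesh: default
selectors:
  - match:
      kuma.io/service: backend # assumed service name
      version: v2 # assumed custom Dataplane tag
conf:
  backend: jaeger-collector # any backend defined for the mesh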
While most commonly we want all the traces to be sent to the same tracing backend, we can optionally create multiple tracing backends in a Mesh resource and store traces for different paths of our service traffic in different backends by leveraging Kuma tags. This is especially useful when we want traces to never leave a particular region or cloud, for example; see the sketch below.
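As an illustration, the following sketch (in the universal format used above) defines one zipkin backend per region and, in a multi-zone deployment, uses the built-in kuma.io/zone tag to keep each zone's traces in a local collector. The backend names, zone names, and collector addresses are all assumptions; the Mesh and the two TrafficTrace resources are separate documents, applied as shown earlier.

type: Mesh
name: default
tracing:
  defaultBackend: zipkin-us
  backends:
    - name: zipkin-us # assumed backend name
      type: zipkin
      sampling: 100.0
      conf:
        url: http://zipkin.us.internal:9411/api/v2/spans # assumed US collector address
    - name: zipkin-eu # assumed backend name
      type: zipkin
      sampling: 100.0
      conf:
        url: http://zipkin.eu.internal:9411/api/v2/spans # assumed EU collector address

type: TrafficTrace
name: trace-us-zone
mesh: default
selectors:
  - match:
      kuma.io/service: '*'
      kuma.io/zone: us-east # assumed zone name
conf:
  backend: zipkin-us

type: TrafficTrace
name: trace-eu-zone
mesh: default
selectors:
  - match:
      kuma.io/service: '*'
      kuma.io/zone: eu-west # assumed zone name
conf:
  backend: zipkin-eu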