Policies

What is a policy?

A policy is a set of configuration that will be used to generate the data plane proxy configuration. Kuma combines policies with the Dataplane resource to generate the Envoy configuration of a data plane proxy.

What do policies look like?

Like all resources in Kuma, there are two parts to a policy: the metadata and the spec.

Metadata

Metadata identifies a policy by its name, type, and the mesh it’s part of:

In Kubernetes, all our policies are implemented as custom resource definitions (CRDs) in the group kuma.io/v1alpha1.

  apiVersion: kuma.io/v1alpha1
  kind: ExamplePolicy
  metadata:
    name: my-policy-name
    namespace: kuma-system
  spec: ... # spec data specific to the policy kind

By default the policy is created in the default mesh. You can specify the mesh by using the kuma.io/mesh label.

For example:

On Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: ExamplePolicy
  metadata:
    name: my-policy-name
    namespace: kuma-system
    labels:
      kuma.io/mesh: "my-mesh"
  spec: ... # spec data specific to the policy kind

On Universal, the metadata is expressed with top-level type, name, and mesh fields:

  type: ExamplePolicy
  name: my-policy-name
  mesh: default
  spec: ... # spec data specific to the policy kind

Spec

The spec field contains the actual configuration of the policy.

Some policies apply to only a subset of the configuration of the proxy.

  • Inbound policies apply only to incoming traffic. The spec.from[].targetRef field defines the subset of clients that will be impacted by this policy.
  • Outbound policies apply only to outgoing traffic. The spec.to[].targetRef field defines the outbounds that will be impacted by this policy.

The actual configuration is defined under the default field.

For example:

On Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: ExamplePolicy
  metadata:
    name: my-example
    namespace: kuma-system
    labels:
      kuma.io/mesh: default
  spec:
    targetRef:
      kind: Mesh
    to:
      - targetRef:
          kind: Mesh
        default:
          key: value
    from:
      - targetRef:
          kind: Mesh
        default:
          key: value

On Universal:

  type: ExamplePolicy
  name: my-example
  mesh: default
  spec:
    targetRef:
      kind: Mesh
    to:
      - targetRef:
          kind: Mesh
        default:
          key: value
    from:
      - targetRef:
          kind: Mesh
        default:
          key: value

While some policies can have both a to and a from section, it is strongly advised to create two different policies, one for to and one for from, as sketched below.
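
For illustration, a minimal sketch of that split, reusing the hypothetical ExamplePolicy from above (all names here are placeholders):

  apiVersion: kuma.io/v1alpha1
  kind: ExamplePolicy
  metadata:
    name: my-example-to # outbound half: only a to section
    namespace: kuma-system
    labels:
      kuma.io/mesh: default
  spec:
    targetRef:
      kind: Mesh
    to:
      - targetRef:
          kind: Mesh
        default:
          key: value
  ---
  apiVersion: kuma.io/v1alpha1
  kind: ExamplePolicy
  metadata:
    name: my-example-from # inbound half: only a from section
    namespace: kuma-system
    labels:
      kuma.io/mesh: default
  spec:
    targetRef:
      kind: Mesh
    from:
      - targetRef:
          kind: Mesh
        default:
          key: value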

Some policies are not directional and have neither to nor from; examples include MeshTrace and MeshProxyPatch. For example:

On Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: NonDirectionalPolicy
  metadata:
    name: my-example
    namespace: kuma-system
    labels:
      kuma.io/mesh: default
  spec:
    targetRef:
      kind: Mesh
    default:
      key: value

On Universal:

  type: NonDirectionalPolicy
  name: my-example
  mesh: default
  spec:
    targetRef:
      kind: Mesh
    default:
      key: value

All specs have a top level targetRef which identifies which proxies this policy applies to. In particular, it defines which proxies have their Envoy configuration modified.

One of the benefits of targetRef policies is that the spec is always the same between Kubernetes and Universal.

This means that converting policies between Universal and Kubernetes only means rewriting the metadata.

Writing a targetRef

targetRef is a concept borrowed from Kubernetes Gateway API. Its goal is to select subsets of proxies with maximum flexibility.

It looks like:

  targetRef:
    kind: Mesh | MeshSubset | MeshService | MeshGateway
    name: "my-name" # For kinds MeshService and MeshGateway, a name has to be defined
    tags:
      key: value # For kinds MeshSubset and MeshGateway, a list of matching tags can be used
    proxyTypes: ["Sidecar", "Gateway"] # For kinds Mesh and MeshSubset, a list of matching Dataplane types can be used
    labels:
      key: value # For policies that apply to labeled resources, use labels to apply the policy to each matching resource
    sectionName: ASection # Used to attach to a specific part of a resource (for example, a port of a MeshService)
    namespace: ns # Valid when the policy is applied by a Kubernetes control plane

Here’s an explanation of each kind and its scope:

  • Mesh: applies to all proxies running in the mesh.
  • MeshSubset: same as Mesh but filters only proxies that have matching targetRef.tags.
  • MeshService: all proxies with a tag kuma.io/service equal to targetRef.name. This can work differently when using explicit services.
  • MeshGateway: targets proxies matched by the named MeshGateway.
  • MeshServiceSubset: same as MeshService but further refined to proxies that have matching targetRef.tags. ⚠️ This is deprecated since version 2.9.x. ⚠️

Consider the two example policies below:

On Kubernetes, using kuma.io/service:

  apiVersion: kuma.io/v1alpha1
  kind: MeshAccessLog
  metadata:
    name: example-outbound
    namespace: kuma-system
    labels:
      kuma.io/mesh: default
  spec:
    targetRef:
      kind: MeshSubset
      tags:
        app: web-frontend
    to:
      - targetRef:
          kind: MeshService
          name: web-backend_kuma-demo_svc_8080
        default:
          backends:
            - file:
                format:
                  plain: '{"start_time": "%START_TIME%"}'
                path: "/tmp/logs.txt"

On Kubernetes, using explicit MeshService resources:

  apiVersion: kuma.io/v1alpha1
  kind: MeshAccessLog
  metadata:
    name: example-outbound
    namespace: kuma-system
    labels:
      kuma.io/mesh: default
  spec:
    targetRef:
      kind: MeshSubset
      tags:
        app: web-frontend
    to:
      - targetRef:
          kind: MeshService
          name: web-backend
          namespace: kuma-demo
          sectionName: httpport
        default:
          backends:
            - file:
                format:
                  plain: '{"start_time": "%START_TIME%"}'
                path: "/tmp/logs.txt"

On Universal, using kuma.io/service:

  type: MeshAccessLog
  name: example-outbound
  mesh: default
  spec:
    targetRef:
      kind: MeshSubset
      tags:
        app: web-frontend
    to:
      - targetRef:
          kind: MeshService
          name: web-backend
        default:
          backends:
            - file:
                format:
                  plain: '{"start_time": "%START_TIME%"}'
                path: "/tmp/logs.txt"

On Universal, using explicit MeshService resources:

  type: MeshAccessLog
  name: example-outbound
  mesh: default
  spec:
    targetRef:
      kind: MeshSubset
      tags:
        app: web-frontend
    to:
      - targetRef:
          kind: MeshService
          name: web-backend
          sectionName: httpport
        default:
          backends:
            - file:
                format:
                  plain: '{"start_time": "%START_TIME%"}'
                path: "/tmp/logs.txt"

And the matching inbound policy, on Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: MeshAccessLog
  metadata:
    name: example-inbound
    namespace: kuma-system
    labels:
      kuma.io/mesh: default
  spec:
    targetRef:
      kind: MeshSubset
      tags:
        app: web-frontend
    from:
      - targetRef:
          kind: Mesh
        default:
          backends:
            - file:
                format:
                  plain: '{"start_time": "%START_TIME%"}'
                path: "/tmp/logs.txt"

On Universal:

  type: MeshAccessLog
  name: example-inbound
  mesh: default
  spec:
    targetRef:
      kind: MeshSubset
      tags:
        app: web-frontend
    from:
      - targetRef:
          kind: Mesh
        default:
          backends:
            - file:
                format:
                  plain: '{"start_time": "%START_TIME%"}'
                path: "/tmp/logs.txt"

Using spec.targetRef, this policy targets all proxies that have a tag app:web-frontend. It defines the scope of this policy as applying to traffic either from or to data plane proxies with the tag app:web-frontend.

The spec.to[].targetRef section enables logging for any traffic going to web-backend. The spec.from[].targetRef section enables logging for any traffic coming from anywhere in the Mesh.

Omitting targetRef

When a targetRef is not present, it is semantically equivalent to targetRef.kind: Mesh and refers to everything inside the Mesh.
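
For example, these two hypothetical specs are equivalent:

  # No top-level targetRef: the policy applies to the whole mesh...
  spec:
    to:
      - targetRef:
          kind: Mesh
        default:
          key: value

  # ...exactly as if targetRef.kind: Mesh had been spelled out
  spec:
    targetRef:
      kind: Mesh
    to:
      - targetRef:
          kind: Mesh
        default:
          key: value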

Applying to specific proxy types

The top level targetRef field can select a specific subset of data plane proxies. The field named proxyTypes can restrict policies to specific types of data plane proxies:

  • Sidecar: Targets data plane proxies acting as sidecars to applications (including delegated gateways).
  • Gateway: Applies to data plane proxies operating in built-in Gateway mode.
  • Empty list: Defaults to targeting all data plane proxies.

Example

The following policy will only apply to gateway data plane proxies:

On Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: MeshTimeout
  metadata:
    name: gateway-only-timeout
    namespace: kuma-system
    labels:
      kuma.io/mesh: default
  spec:
    targetRef:
      kind: Mesh
      proxyTypes:
        - Gateway
    to:
      - targetRef:
          kind: Mesh
        default:
          idleTimeout: 10s

On Universal:

  type: MeshTimeout
  name: gateway-only-timeout
  mesh: default
  spec:
    targetRef:
      kind: Mesh
      proxyTypes:
        - Gateway
    to:
      - targetRef:
          kind: Mesh
        default:
          idleTimeout: 10s

Targeting gateways

Given a MeshGateway:

  apiVersion: kuma.io/v1alpha1
  kind: MeshGateway
  mesh: default
  metadata:
    name: edge
    namespace: kuma-system
  conf:
    listeners:
      - port: 80
        protocol: HTTP
        tags:
          port: http-80
      - port: 443
        protocol: HTTPS
        tags:
          port: https-443

Policies can attach to all listeners:

On Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: MeshTimeout
  metadata:
    name: timeout-all
    namespace: kuma-system
    labels:
      kuma.io/mesh: default
  spec:
    targetRef:
      kind: MeshGateway
      name: edge
    to:
      - targetRef:
          kind: Mesh
        default:
          idleTimeout: 10s

On Universal:

  type: MeshTimeout
  name: timeout-all
  mesh: default
  spec:
    targetRef:
      kind: MeshGateway
      name: edge
    to:
      - targetRef:
          kind: Mesh
        default:
          idleTimeout: 10s

so that requests to either port 80 or 443 will have an idle timeout of 10 seconds, or just some listeners:

On Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: MeshTimeout
  metadata:
    name: timeout-8080
    namespace: kuma-system
    labels:
      kuma.io/mesh: default
  spec:
    targetRef:
      kind: MeshGateway
      name: edge
      tags:
        port: http-80
    to:
      - targetRef:
          kind: Mesh
        default:
          idleTimeout: 10s

On Universal:

  type: MeshTimeout
  name: timeout-8080
  mesh: default
  spec:
    targetRef:
      kind: MeshGateway
      name: edge
      tags:
        port: http-80
    to:
      - targetRef:
          kind: Mesh
        default:
          idleTimeout: 10s

so that only requests to port 80 will have the idle timeout.

Note that depending on the policy, there may be restrictions on whether or not specific listeners can be selected.

Routes

Read the MeshHTTPRoute docs and MeshTCPRoute docs for more on how to target gateways for routing traffic.

Target kind support for different policies

Not every policy supports to and from levels. Additionally, not every kind can appear at every supported level; the kind specified in the top level targetRef can also affect which kinds can appear in to or from.

To help users, each policy’s documentation includes tables indicating which targetRef kinds are supported at each level. For each type of proxy, sidecar or builtin gateway, the table indicates which kinds are supported at each targetRef level.

Example tables

These are just examples, remember to check the docs specific to your policy.

For sidecars, the table might look like:

  targetRef               Allowed kinds
  targetRef.kind          Mesh, MeshSubset
  to[].targetRef.kind     Mesh, MeshService
  from[].targetRef.kind   Mesh

The table above shows that we can select sidecar proxies via Mesh or MeshSubset.

We can use the policy as an outbound policy with:

  • to[].targetRef.kind: Mesh which will apply to all traffic originating at the sidecar to anywhere
  • to[].targetRef.kind: MeshService which will apply to all traffic to specific services

We can also apply policy as an inbound policy with:

  • from[].targetRef.kind: Mesh which will apply to all traffic received by the sidecar from anywhere in the mesh

For builtin gateways, the table might look like:

  targetRef             Allowed kinds
  targetRef.kind        Mesh, MeshGateway, MeshGateway with tags
  to[].targetRef.kind   Mesh

The table above indicates that we can select a builtin gateway via Mesh or MeshGateway, or even specific listeners with MeshGateway using tags.

We can use the policy only as an outbound policy with:

  • to[].targetRef.kind: Mesh which will apply to all traffic from the gateway to anywhere.

Merging configuration

A proxy can be targeted by multiple targetRefs. To define how policies are merged together, the following strategy is used:

We define a total order of policy priority:

  • MeshServiceSubset > MeshService > MeshSubset > Mesh (the more focused a targetRef is, the higher its priority)
  • If levels are equal, the lexicographic order of policy names is used

Remember: the broader a targetRef, the lower its priority.

For to and from policies we concatenate the arrays of each matching policy. We then build the configuration by merging each level using JSON patch merge.

For example, given two default values ordered this way:

  default:
    conf: 1
    sub:
      array: [1, 2, 3]
      other: 50
      other-array: [3, 4, 5]
  ---
  default:
    sub:
      array: []
      other: null
      other-array: [5, 6]
    extra: 2

The merge result is:

  default:
    conf: 1
    sub:
      array: []
      other-array: [5, 6]
    extra: 2

Using policies with MeshService, MeshMultizoneService and MeshExternalService

MeshService is a feature to define services explicitly in Kuma. It can be selectively enabled or disabled depending on the value of meshServices.mode on your Mesh object, as sketched below.
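
For instance, a minimal sketch of a Mesh with explicit services enabled, in Universal format (the Exclusive mode value shown here is an assumption; check the MeshService docs for the available modes):

  type: Mesh
  name: default
  meshServices:
    mode: Exclusive # assumption: rely exclusively on explicit MeshService resources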

When using explicit services, MeshServiceSubset is no longer a valid kind and MeshService can only be used to select an actual MeshService resource (it can no longer select a kuma.io/service).

In the following example we’ll assume we have a MeshService:

On Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: MeshService
  metadata:
    name: my-service
    namespace: kuma-demo
    labels:
      k8s.kuma.io/namespace: kuma-demo
      kuma.io/zone: my-zone
      app: redis
      kuma.io/mesh: default
  spec:
    selector:
      dataplaneTags:
        app: redis
        k8s.kuma.io/namespace: kuma-demo
    ports:
      - port: 6739
        targetPort: 6739
        appProtocol: tcp

On Universal:

  type: MeshService
  name: my-service
  labels:
    k8s.kuma.io/namespace: kuma-demo
    kuma.io/zone: my-zone
    app: redis
  spec:
    selector:
      dataplaneTags:
        app: redis
        k8s.kuma.io/namespace: kuma-demo
    ports:
      - port: 6739
        targetPort: 6739
        appProtocol: tcp

There are 2 ways to select a MeshService:

If you are in the same namespace (or same zone in Universal) you can select one specific service by using its explicit name:

  apiVersion: kuma.io/v1alpha1
  kind: MeshTimeout
  metadata:
    name: timeout-to-redis
    namespace: kuma-demo
  spec:
    to:
      - targetRef:
          kind: MeshService
          name: redis
        default:
          connectionTimeout: 10s

Selecting all matching MeshServices by labels:

  apiVersion: kuma.io/v1alpha1
  kind: MeshTimeout
  metadata:
    name: all-in-my-namespace
    namespace: kuma-demo
  spec:
    to:
      - targetRef:
          kind: MeshService
          labels:
            k8s.kuma.io/namespace: kuma-demo
        default:
          connectionTimeout: 10s

In this case, it is equivalent to writing a specific policy for each service that matches the labels (in our example, for each service in this namespace in each zone).

When a MeshService has multiple ports, you can use sectionName to restrict the policy to a single port, as sketched below.
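
For instance, a minimal sketch (assuming my-service had declared a port named redis-port in its ports list; that name is hypothetical):

  apiVersion: kuma.io/v1alpha1
  kind: MeshTimeout
  metadata:
    name: timeout-to-redis-port
    namespace: kuma-demo
  spec:
    to:
      - targetRef:
          kind: MeshService
          name: my-service
          sectionName: redis-port # hypothetical port name declared on the MeshService
        default:
          connectionTimeout: 10s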

Global, zonal, producer and consumer policies

Policies can be applied to a zone or to a namespace when using Kubernetes. Policies will always impact at most the scope at which they are defined. In other words:

  1. a policy applied to the global control plane will apply to all proxies in all zones.
  2. a policy applied to a zone will only apply to proxies inside this zone. It is equivalent to having:

       spec:
         targetRef:
           kind: MeshSubset
           tags:
             kuma.io/zone: "my-zone"

  3. a policy applied to a namespace will only apply to proxies inside this namespace. It is equivalent to having:

       spec:
         targetRef:
           kind: MeshSubset
           tags:
             kuma.io/zone: "my-zone"
             kuma.io/namespace: "my-ns"

There is, however, one exception to this when using MeshService with outbound policies (policies with spec.to[].targetRef). In this case, if you define a policy in the same namespace as the MeshService it targets, that policy is considered a producer policy. This means that all clients of this service (even in different zones) will be impacted by this policy.

An example of a producer policy is:

  apiVersion: kuma.io/v1alpha1
  kind: MeshTimeout
  metadata:
    name: timeout-to-redis
    namespace: kuma-demo
  spec:
    to:
      - targetRef:
          kind: MeshService
          name: redis
        default:
          connectionTimeout: 10s

The other type of policy is a consumer policy, which most commonly uses labels to match a service.

An example of a consumer policy which would override the previous producer policy:

  apiVersion: kuma.io/v1alpha1
  kind: MeshTimeout
  metadata:
    name: timeout-to-redis-consumer
    namespace: kuma-demo
  spec:
    to:
      - targetRef:
          kind: MeshService
          labels:
            k8s.kuma.io/service-name: redis
        default:
          connectionTimeout: 10s

Remember that selecting by labels applies the policy to each matching MeshService. To configure communication differently for services that share the same name across namespaces or zones, use a more specific set of labels, as sketched below.
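
For instance, a sketch narrowing the consumer policy above to the redis service of one specific namespace and zone (the extra label values are illustrative):

  apiVersion: kuma.io/v1alpha1
  kind: MeshTimeout
  metadata:
    name: timeout-to-redis-consumer
    namespace: kuma-demo
  spec:
    to:
      - targetRef:
          kind: MeshService
          labels:
            k8s.kuma.io/service-name: redis
            k8s.kuma.io/namespace: kuma-demo # illustrative: pin the namespace
            kuma.io/zone: my-zone # illustrative: pin the zone
        default:
          connectionTimeout: 10s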

Kuma adds a label kuma.io/policy-role to identify the type of the policy. The values of the label are:

  • system: Policies defined on global or in the zone’s system namespace
  • workload-owner: Policies defined in a non-system namespace that do not have spec.to entries, or have both spec.from and spec.to entries
  • consumer: Policies defined in a non-system namespace whose spec.to entries either do not use name or reference a service in a different namespace
  • producer: Policies defined in the same namespace as the services identified in the spec.to[].targetRef

The merging order of the different policy scopes is: workload-owner > consumer > producer > zonal > global.
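
To see which role Kuma assigned to a policy, you can read that label back; a sketch assuming Kubernetes and MeshTimeout policies like the ones in this section:

  # List MeshTimeout policies with all their labels, including kuma.io/policy-role
  kubectl get meshtimeouts -n kuma-demo --show-labels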

Example

We have two clients, client1 and client2, running in different namespaces, ns1 and ns2 respectively.

  flowchart LR
    subgraph ns1
      client1(client)
    end
    subgraph ns2
      client2(client)
      server(MeshService: server)
    end
    client1 --> server
    client2 --> server

We’re going to define a producer policy first:

  apiVersion: kuma.io/v1alpha1
  kind: MeshTimeout
  metadata:
    name: producer-policy
    namespace: ns2
  spec:
    to:
      - targetRef:
          kind: MeshService
          name: server
        default:
          idleTimeout: 20s

We know it’s a producer policy because it is defined in the same namespace as the MeshService server and names this service in its spec.to[].targetRef. So both client1 and client2 will receive the timeout of 20 seconds.

We now create a consumer policy:

  apiVersion: kuma.io/v1alpha1
  kind: MeshTimeout
  metadata:
    name: consumer-policy
    namespace: ns1
  spec:
    to:
      - targetRef:
          kind: MeshService
          labels:
            k8s.kuma.io/service-name: server
        default:
          idleTimeout: 30s

Here the policy only impacts client1, as client2 doesn’t run in ns1. As consumer policies have higher priority than producer policies, client1 will have idleTimeout: 30s.

We can define another policy to impact client2:

  apiVersion: kuma.io/v1alpha1
  kind: MeshTimeout
  metadata:
    name: consumer-policy
    namespace: ns2
  spec:
    to:
      - targetRef:
          kind: MeshService
          labels:
            k8s.kuma.io/service-name: server
        default:
          idleTimeout: 40s

Note that the only difference here is the namespace: we now define a consumer policy inside ns2.

Use labels for consumer policies and name for producer policies; this makes it easier to differentiate between producer and consumer policies.

Examples

Applying a global default

  type: ExamplePolicy
  name: example
  mesh: default
  spec:
    targetRef:
      kind: Mesh
    to:
      - targetRef:
          kind: Mesh
        default:
          key: value

All traffic from any proxy (top level targetRef) going to any proxy (to targetRef) will have this policy applied with value key=value.

Recommending to users

  type: ExamplePolicy
  name: example
  mesh: default
  spec:
    targetRef:
      kind: Mesh
    to:
      - targetRef:
          kind: MeshService
          name: my-service
        default:
          key: value

All traffic from any proxy (top level targetRef) going to the service “my-service” (to targetRef) will have this policy applied with value key=value.

This is useful when a service owner wants to suggest a set of configurations to its clients.

Configuring all proxies of a team

  type: ExamplePolicy
  name: example
  mesh: default
  spec:
    targetRef:
      kind: MeshSubset
      tags:
        team: "my-team"
    from:
      - targetRef:
          kind: Mesh
        default:
          key: value

All traffic from any proxy (from targetRef) going to any proxy that has the tag team=my-team (top level targetRef) will have this policy applied with value key=value.

This is a useful way to define coarse-grained rules, for example.

Configuring all proxies in a zone

  type: ExamplePolicy
  name: example
  mesh: default
  spec:
    targetRef:
      kind: MeshSubset
      tags:
        kuma.io/zone: "east"
    default:
      key: value

All proxies in zone east (top level targetRef) will have this policy configured with key=value.

This can be very useful when observability stores are different for each zone for example.

Configuring all gateways in a Mesh

  type: ExamplePolicy
  name: example
  mesh: default
  spec:
    targetRef:
      kind: Mesh
      proxyTypes: ["Gateway"]
    default:
      key: value

All gateway proxies in mesh default will have this policy configured with key=value.

This can be very useful when timeout configurations for gateways need to differ from those of other proxies.

Applying policies in shadow mode

Overview

Shadow mode allows users to mark policies with a specific label to simulate configuration changes without affecting the live environment. It enables observing the potential impact on Envoy proxy configurations, providing a risk-free method to test, validate, and fine-tune settings before actual deployment. Ideal for learning, debugging, and migration, shadow mode ensures configurations are error-free, improving overall system reliability without disrupting ongoing operations.

They’re not necessary, but CLI tools like jq and jd can greatly improve working with Kuma resources.

How to use shadow mode

  1. Before applying the policy, add a kuma.io/effect: shadow label.

  2. Check the proxy config with shadow policies taken into account through the Kuma API:

       curl "http://localhost:5681/meshes/${mesh}/dataplane/${dataplane}/_config?shadow=true"

     or by using kumactl:

       kumactl inspect dataplane ${name} --type=config --shadow

  3. Check the diff in JSONPatch format through the Kuma API (the URL is quoted so the shell doesn’t interpret the &):

       curl "http://localhost:5681/meshes/${mesh}/dataplane/${dataplane}/_config?shadow=true&include=diff"

     or by using kumactl:

       kumactl inspect dataplane ${name} --type=config --shadow --include=diff

Limitations and Considerations

Currently, the Kuma API mentioned above works only on Zone CP. Attempts to use it on Global CP lead to 405 Method Not Allowed. This might change in the future.

Examples

Apply policy with kuma.io/effect: shadow label:

On Kubernetes, using kuma.io/service:

  apiVersion: kuma.io/v1alpha1
  kind: MeshTimeout
  metadata:
    name: frontend-timeouts
    namespace: kuma-system
    labels:
      kuma.io/effect: shadow
      kuma.io/mesh: default
  spec:
    targetRef:
      kind: MeshSubset
      tags:
        kuma.io/service: frontend
    to:
      - targetRef:
          kind: MeshService
          name: backend_kuma-demo_svc_3001
        default:
          idleTimeout: 23s

On Kubernetes, using explicit MeshService resources:

  apiVersion: kuma.io/v1alpha1
  kind: MeshTimeout
  metadata:
    name: frontend-timeouts
    namespace: kuma-system
    labels:
      kuma.io/effect: shadow
      kuma.io/mesh: default
  spec:
    targetRef:
      kind: MeshSubset
      tags:
        kuma.io/service: frontend
    to:
      - targetRef:
          kind: MeshService
          name: backend
          namespace: kuma-demo
          sectionName: httpport
        default:
          idleTimeout: 23s

On Universal, using kuma.io/service:

  type: MeshTimeout
  name: frontend-timeouts
  mesh: default
  labels:
    kuma.io/effect: shadow
  spec:
    targetRef:
      kind: MeshSubset
      tags:
        kuma.io/service: frontend
    to:
      - targetRef:
          kind: MeshService
          name: backend
        default:
          idleTimeout: 23s

On Universal, using explicit MeshService resources:

  type: MeshTimeout
  name: frontend-timeouts
  mesh: default
  labels:
    kuma.io/effect: shadow
  spec:
    targetRef:
      kind: MeshSubset
      tags:
        kuma.io/service: frontend
    to:
      - targetRef:
          kind: MeshService
          name: backend
          sectionName: httpport
        default:
          idleTimeout: 23s

Check the diff using kumactl:

  $ kumactl inspect dataplane frontend-dpp --type=config --include=diff --shadow | jq '.diff' | jd -t patch2jd
  @ ["type.googleapis.com/envoy.config.cluster.v3.Cluster","backend_kuma-demo_svc_3001","typedExtensionProtocolOptions","envoy.extensions.upstreams.http.v3.HttpProtocolOptions","commonHttpProtocolOptions","idleTimeout"]
  - "3600s"
  @ ["type.googleapis.com/envoy.config.cluster.v3.Cluster","backend_kuma-demo_svc_3001","typedExtensionProtocolOptions","envoy.extensions.upstreams.http.v3.HttpProtocolOptions","commonHttpProtocolOptions","idleTimeout"]
  + "23s"

The output not only identifies the exact location in Envoy where the change will occur, but also shows the current timeout value that we’re planning to replace.