Configure transparent proxying

In order to automatically intercept traffic to and from a service through a kuma-dp data plane proxy instance, Kuma uses transparent proxying based on iptables.

Transparent proxying helps smooth the rollout of a Service Mesh into an existing deployment by preserving existing service naming and, as a result, avoiding changes to the application code.

Kubernetes

On Kubernetes, kuma-dp leverages transparent proxying automatically via iptables rules installed by the kuma-init container or CNI. All incoming and outgoing traffic is automatically intercepted by kuma-dp without any changes to the application code.

Kuma integrates with the service naming provided by Kubernetes DNS, and also provides its own Kuma DNS for multi-zone service naming.

Universal

On Universal, kuma-dp leverages the data plane proxy specification associated with it to receive incoming requests on a pre-defined port.

In order to enable transparent proxying, the Zone Control Plane must run on a separate server. Running the Zone Control Plane with Postgres on the same machine as the transparent proxy does not work.

There are several advantages to using transparent proxying in universal mode:

  • Simpler Dataplane resource, as the outbound section becomes obsolete and can be skipped.
  • Universal service naming with the .mesh DNS domain instead of explicit outbounds like https://localhost:10001.
  • Support for hostnames of your choice using VirtualOutbounds, which lets you preserve existing service naming.
  • Better service manageability (security, tracing).

Setting up the service host

Prerequisites:

  • kuma-dp, envoy, and coredns must run on the worker node – that is, the node that runs your service mesh workload.
  • coredns must be in the PATH so that kuma-dp can access it.
    • You can also set the location with the --dns-coredns-path flag to kuma-dp.
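The prerequisite checks above can be scripted. The helper below is a hypothetical pre-flight sketch (not part of kumactl) that verifies each binary is resolvable on the PATH:

```shell
#!/bin/sh
# Hypothetical pre-flight check: verify that the binaries kuma-dp depends on
# are resolvable on the PATH of the user that will run them.

# require_bin returns 0 when the named binary is found on the PATH.
require_bin() {
  command -v "$1" >/dev/null 2>&1
}

for bin in kuma-dp envoy coredns; do
  if require_bin "$bin"; then
    echo "$bin: found at $(command -v "$bin")"
  else
    echo "$bin: not found (for coredns you can also point kuma-dp at it with --dns-coredns-path)"
  fi
done
```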

Kuma comes with the kumactl executable, which can help prepare the host. Due to the wide variety of Linux setups, these steps may vary and may need to be adjusted for the specifics of a particular deployment. The host that will run the kuma-dp process in transparent proxying mode needs to be prepared with the following steps, executed as root:

1. Create a new dedicated user on the machine:

```shell
useradd -u 5678 -U kuma-dp
```

2. Redirect all relevant inbound, outbound, and DNS traffic to the Kuma data plane proxy (if you’re running any other services on that machine, adjust the comma-separated lists passed to --exclude-inbound-ports and --exclude-outbound-ports accordingly):

```shell
kumactl install transparent-proxy \
  --kuma-dp-user kuma-dp \
  --redirect-dns \
  --exclude-inbound-ports 22
```

Please note that this command will change the host’s iptables rules.

The command excludes port 22, so you can SSH to the machine without kuma-dp running.

The changes won’t persist over restarts. You need to either add this command to your start scripts or use firewalld.
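For example, the rules can be re-applied from a start script run as root at boot; the sketch below simply mirrors the install command above as a config fragment (adjust flags to your setup):

```shell
#!/bin/sh
# Sketch of a boot-time script (run as root, e.g. from rc.local or a systemd
# oneshot unit) that re-applies the transparent proxy rules after a restart.
set -eu

kumactl install transparent-proxy \
  --kuma-dp-user kuma-dp \
  --redirect-dns \
  --exclude-inbound-ports 22
```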

Data plane proxy resource

In transparent proxying mode, the Dataplane resource should omit the networking.outbound section and use the networking.transparentProxying section instead.

```yaml
type: Dataplane
mesh: default
name: {{ name }}
networking:
  address: {{ address }}
  inbound:
    - port: {{ port }}
      tags:
        kuma.io/service: demo-client
  transparentProxying:
    redirectPortInbound: 15006
    redirectPortOutbound: 15001
```

The ports shown above are the defaults that kumactl install transparent-proxy sets. They can be changed using the relevant flags to that command.

Invoking the Kuma data plane

It is important that the kuma-dp process runs as the same system user that was passed to kumactl install transparent-proxy --kuma-dp-user. The service itself should run as any user other than kuma-dp; otherwise, it won’t be able to leverage transparent proxying.

When systemd is used, this can be done with an entry User=kuma-dp in the [Service] section of the service file.
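A minimal sketch of such a unit file, assuming kuma-dp is installed at /usr/bin/kuma-dp and using the token and Dataplane file paths from the runuser example (all paths are illustrative):

```ini
# /etc/systemd/system/kuma-dp.service -- illustrative unit file
[Unit]
Description=Kuma data plane proxy
After=network-online.target

[Service]
# Run as the user passed to `kumactl install transparent-proxy --kuma-dp-user`
User=kuma-dp
ExecStart=/usr/bin/kuma-dp run \
  --cp-address=https://<IP or hostname of CP>:5678 \
  --dataplane-token-file=/kuma/token-demo \
  --dataplane-file=/kuma/dpyaml-demo \
  --binary-path /usr/local/bin/envoy
Restart=on-failure

[Install]
WantedBy=multi-user.target
```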

When starting kuma-dp with a script or some other automation instead, we can use runuser with the aforementioned YAML resource as follows:

```shell
runuser -u kuma-dp -- \
  /usr/bin/kuma-dp run \
  --cp-address=https://<IP or hostname of CP>:5678 \
  --dataplane-token-file=/kuma/token-demo \
  --dataplane-file=/kuma/dpyaml-demo \
  --dataplane-var name=dp-demo \
  --dataplane-var address=<IP of VM> \
  --dataplane-var port=<Port of the service> \
  --binary-path /usr/local/bin/envoy
```

You can now reach the service on the same IP and port as before installing the transparent proxy, but the traffic now goes through Envoy. At the same time, you can connect to other services using Kuma DNS.
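For example, assuming another service in the mesh is registered as demo-app (a hypothetical service name), you could reach it from this machine through the .mesh domain:

```shell
# Call a mesh service through the .mesh domain provided by Kuma DNS.
# "demo-app" is a placeholder; the port depends on your DNS/VirtualOutbound configuration.
curl http://demo-app.mesh
```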

firewalld support

If you run firewalld to manage firewalls and wrap iptables, add the --store-firewalld flag to kumactl install transparent-proxy. This persists the relevant rules across host restarts. The changes are stored in /etc/firewalld/direct.xml. There is no uninstall command for this feature.

Upgrades

Before upgrading to the next version of Kuma, it’s best to clean existing iptables rules and only then replace the kumactl binary.

You can clean the rules either by restarting the host or by running the following commands.

Executing these commands will remove all iptables rules, including those created by Kuma and any other applications or services.

```shell
iptables --table nat --flush
iptables --table raw --flush
ip6tables --table nat --flush
ip6tables --table raw --flush
iptables --table nat --delete-chain
iptables --table raw --delete-chain
ip6tables --table nat --delete-chain
ip6tables --table raw --delete-chain
```

In a future release, kumactl will ship with an uninstall command.

Configuration

Intercepted traffic

By default, all traffic is intercepted by Envoy. You can exclude ports from interception with the following annotations placed on the Pod:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  namespace: kuma-example
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        # incoming connections on port 1234 won't be intercepted by Envoy
        traffic.kuma.io/exclude-inbound-ports: "1234"
        # outgoing connections on ports 5678 and 8900 won't be intercepted by Envoy
        traffic.kuma.io/exclude-outbound-ports: "5678,8900"
    spec:
      containers:
        ...
```

You can also control this for the whole Kuma deployment with the following Kuma CP configuration:

```shell
KUMA_RUNTIME_KUBERNETES_SIDECAR_TRAFFIC_EXCLUDE_INBOUND_PORTS=1234
KUMA_RUNTIME_KUBERNETES_SIDECAR_TRAFFIC_EXCLUDE_OUTBOUND_PORTS=5678,8900
```

By default, all ports are intercepted by the transparent proxy. This may prevent remote access to the host via SSH (port 22) or other management tools when kuma-dp is not running.

If you need to access the host directly, even when kuma-dp is not running, use the --exclude-inbound-ports flag with kumactl install transparent-proxy to specify a comma-separated list of ports to exclude from redirection.

Run kumactl install transparent-proxy --help for all available options.

Reachable Services

By default, every data plane proxy in the mesh tracks every other data plane proxy. This may lead to performance problems in larger deployments of the mesh. It is highly recommended to define the list of services that your service connects to.

Kubernetes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  namespace: kuma-example
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        # a comma-separated list of kuma.io/service values
        kuma.io/transparent-proxying-reachable-services: "redis_kuma-demo_svc_6379,elastic_kuma-demo_svc_9200"
    spec:
      containers:
        ...
```

Universal:

```yaml
type: Dataplane
mesh: default
name: {{ name }}
networking:
  address: {{ address }}
  inbound:
    - port: {{ port }}
      tags:
        kuma.io/service: demo-client
  transparentProxying:
    redirectPortInbound: 15006
    redirectPortOutbound: 15001
    reachableServices:
      - redis_kuma-demo_svc_6379
      - elastic_kuma-demo_svc_9200
```

Reachable Backends

This works only when MeshService is enabled.

Reachable Backends provides similar functionality to reachable services, but it applies to MeshService, MeshExternalService, and MeshMultiZoneService.

By default, every data plane proxy in the mesh tracks every other data plane proxy. Configuring reachableBackends can improve performance and reduce resource utilization.

Unlike reachable services, the model for providing data in Reachable Backends is more structured.

Model

  • refs: A list of all resources your application wants to track and communicate with.
    • kind: The type of resource. Possible values: MeshService, MeshExternalService, MeshMultiZoneService.
    • name: The name of the resource.
    • namespace: (Kubernetes only) The namespace where the resource is located. When namespace is defined, name is required.
    • labels: A list of labels to match on the resources (either labels or name can be defined).
    • port: (Optional) The port of the service you want to communicate with. Works with MeshService and MeshMultiZoneService.
Kubernetes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    metadata:
      annotations:
        kuma.io/reachable-backends: |
          refs:
          - kind: MeshService
            name: redis
            namespace: kuma-demo
            port: 8080
          - kind: MeshMultiZoneService
            labels:
              kuma.io/display-name: test-server
          - kind: MeshExternalService
            name: mes-http
            namespace: kuma-system
```
Universal:

```yaml
type: Dataplane
mesh: default
name: {{ name }}
networking:
  ...
  transparentProxying:
    redirectPortInbound: 15006
    redirectPortOutbound: 15001
    reachableBackends:
      refs:
      - kind: MeshService
        name: redis
      - kind: MeshMultiZoneService
        labels:
          kuma.io/display-name: test-server
      - kind: MeshExternalService
        name: mes-http
```

Examples

demo-app communicates only with redis on port 6379
Kubernetes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  namespace: kuma-demo
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        kuma.io/reachable-backends: |
          refs:
          - kind: MeshService
            name: redis
            namespace: kuma-demo
            port: 6379
    spec:
      containers:
        ...
```
Universal:

```yaml
type: Dataplane
mesh: default
name: {{ name }}
networking:
  address: {{ address }}
  inbound:
    - port: {{ port }}
      tags:
        kuma.io/service: demo-app
  transparentProxying:
    redirectPortInbound: 15006
    redirectPortOutbound: 15001
    reachableBackends:
      refs:
      - kind: MeshService
        name: redis
        port: 6379
```
demo-app doesn’t need to communicate with any service
Kubernetes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  namespace: kuma-demo
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        kuma.io/reachable-backends: ""
    spec:
      containers:
        ...
```
Universal:

```yaml
type: Dataplane
mesh: default
name: {{ name }}
networking:
  address: {{ address }}
  inbound:
    - port: {{ port }}
      tags:
        kuma.io/service: demo-app
  transparentProxying:
    redirectPortInbound: 15006
    redirectPortOutbound: 15001
    reachableBackends: {}
```
demo-app wants to communicate with all MeshServices in kuma-demo namespace
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  namespace: kuma-demo
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        kuma.io/reachable-backends: |
          refs:
          - kind: MeshService
            labels:
              k8s.kuma.io/namespace: kuma-demo
    spec:
      containers:
        ...
```

Transparent Proxy with eBPF (experimental)

Starting from Kuma 2.0, you can set up the transparent proxy to use eBPF instead of iptables.

To use the transparent proxy with eBPF, your environment must run kernel >= 5.7 and have cgroup2 available.

Kubernetes:

```shell
kumactl install control-plane \
  --set "experimental.ebpf.enabled=true" | kubectl apply -f-
```

Universal:

```shell
kumactl install transparent-proxy \
  --experimental-transparent-proxy-engine \
  --ebpf-enabled \
  --ebpf-instance-ip <IP_ADDRESS> \
  --ebpf-programs-source-path <PATH>
```

If your environment contains more than one non-loopback network interface and you want to specify explicitly which one should be used for transparent proxying, provide it with the --ebpf-tc-attach-iface <IFACE_NAME> flag during transparent proxy installation.
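The kernel and cgroup2 requirements can be checked up front. The script below is a hypothetical helper (not part of kumactl) that compares the running kernel version against 5.7 and looks for a mounted cgroup2 filesystem:

```shell
#!/bin/sh
# Hypothetical pre-flight check for the eBPF transparent proxy prerequisites:
# kernel >= 5.7 and a mounted cgroup2 filesystem.

# version_ge returns 0 when version $1 (major.minor[.patch]) is >= version $2.
version_ge() {
  maj1=${1%%.*}; min1=${1#*.}; min1=${min1%%.*}
  maj2=${2%%.*}; min2=${2#*.}; min2=${min2%%.*}
  [ "$maj1" -gt "$maj2" ] || { [ "$maj1" -eq "$maj2" ] && [ "$min1" -ge "$min2" ]; }
}

kernel=$(uname -r | cut -d- -f1)
if version_ge "$kernel" "5.7"; then
  echo "kernel $kernel: OK"
else
  echo "kernel $kernel: too old for the eBPF transparent proxy (need >= 5.7)"
fi

if mount | grep -q 'type cgroup2'; then
  echo "cgroup2: mounted"
else
  echo "cgroup2: not found"
fi
```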