Data plane on Universal

As mentioned previously, on Universal you need to create a Dataplane definition and pass it to the kuma-dp run command.

When transparent proxying is not enabled, the outbound service dependencies have to be manually specified in the Dataplane entity. This also means that without transparent proxying you must update your codebases to consume those external services on 127.0.0.1 on the port specified in the outbound section.

To prevent users from bypassing the sidecar, have the service listen only on the loopback interface (127.0.0.1 or ::1) instead of all interfaces (0.0.0.0 or ::).
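
For instance, if the service is Redis (as in the example below), restricting it to loopback is a one-line change in redis.conf; for other services the equivalent setting will differ:

    # redis.conf: bind only to loopback so clients must go through the sidecar
    bind 127.0.0.1 ::1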

For example, this is how we start a Dataplane for a hypothetical Redis service and then start the kuma-dp process:

    cat dp.yaml
    type: Dataplane
    mesh: default
    name: redis-1
    networking:
      address: 23.234.0.1 # IP of the instance
      inbound:
      - port: 9000
        servicePort: 6379
        tags:
          kuma.io/service: redis

    kuma-dp run \
      --cp-address=https://127.0.0.1:5678 \
      --dataplane-file=dp.yaml \
      --dataplane-token-file=/tmp/kuma-dp-redis-1-token

In the example above, any external client who wants to consume Redis through the sidecar will have to use 23.234.0.1:9000, which will redirect to the Redis service listening on address 127.0.0.1:6379. If your service doesn’t listen on 127.0.0.1 and you can’t change the address it listens on, you can set the serviceAddress as shown below.

    type: Dataplane
    ...
    networking:
      ...
      inbound:
      - port: 9000
        serviceAddress: 192.168.1.10
        servicePort: 6379
        ...

This configuration indicates that your service is listening on 192.168.1.10, and incoming traffic will be redirected to that address.

Note that on Universal, data plane proxies need to start with a token for authentication. You can learn how to generate tokens in the security section.
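
For example, assuming kumactl is already configured against your control plane, the token for the Redis data plane above could be generated like this:

    kumactl generate dataplane-token \
      --name=redis-1 \
      --mesh=default > /tmp/kuma-dp-redis-1-token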

Now let’s assume that we have another service called “backend” that listens on port 80 and makes outgoing requests to the redis service:

    cat dp.yaml
    type: Dataplane
    mesh: default
    name: {{ name }}
    networking:
      address: {{ address }}
      inbound:
      - port: 8000
        servicePort: 80
        tags:
          kuma.io/service: backend
          kuma.io/protocol: http
      outbound:
      - port: 10000
        tags:
          kuma.io/service: redis

    kuma-dp run \
      --cp-address=https://127.0.0.1:5678 \
      --dataplane-file=dp.yaml \
      --dataplane-var name=`hostname -s` \
      --dataplane-var address=192.168.0.2 \
      --dataplane-token-file=/tmp/kuma-dp-backend-1-token

In order for the backend service to successfully consume redis, we specify an outbound networking section in the Dataplane configuration, instructing the data plane proxy to listen on a new port 10000 and to proxy any outgoing request on that port to the redis service. For this to work, we must update our application to consume redis on 127.0.0.1:10000.
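
As a quick sanity check (assuming redis-cli is available on the backend host), you can verify that the outbound listener proxies to Redis:

    redis-cli -h 127.0.0.1 -p 10000 ping
    # PONG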

You can parametrize your Dataplane definition, so you can reuse the same file for many kuma-dp instances or even services.

Lifecycle

On Universal you can manage Dataplane resources either in Direct mode or in Indirect mode.

Direct

This is the recommended way to operate with Dataplane resources on Universal.

Joining the mesh

Pass the Dataplane resource directly to the kuma-dp run command. In this case, the Dataplane resource can be a Mustache template:

backend-dp-tmpl.yaml

    type: Dataplane
    mesh: default
    name: {{ name }}
    networking:
      address: {{ address }}
      inbound:
      - port: 8000
        servicePort: 80
        tags:
          kuma.io/service: backend
          kuma.io/protocol: http

The command with template parameters will look like this:

    kuma-dp run \
      --dataplane-file=backend-dp-tmpl.yaml \
      --dataplane-var name=my-backend-dp \
      --dataplane-var address=192.168.0.2 \
      ...

When the xDS connection between the proxy and kuma-cp is established, the Dataplane resource is created automatically by kuma-cp.

To join the mesh in a graceful way, we need to first make sure the application is ready to serve traffic before it can be considered a valid traffic destination. By default, a proxy is considered healthy regardless of the application's state. Consider using service probes to mark the data plane proxy as healthy only after all health checks have passed.
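
A minimal sketch of such a probe on an inbound, assuming a plain TCP check against the service port (field names follow the Dataplane serviceProbe schema; thresholds are illustrative and should be tuned to your service):

    type: Dataplane
    ...
    networking:
      ...
      inbound:
      - port: 8000
        servicePort: 80
        serviceProbe:
          interval: 1s
          timeout: 2s
          unhealthyThreshold: 3
          healthyThreshold: 1
          tcp: {}
        tags:
          kuma.io/service: backend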

Leaving the mesh

To leave the mesh with a graceful shutdown, we need to remove the instance as a traffic destination from all clients before shutting it down.

Upon receiving SIGTERM, the kuma-dp process starts draining listeners in Envoy, then waits for the drain time before stopping. During the draining process, Envoy can still accept connections, however:

  1. It is marked as unhealthy on the Envoy Admin /ready endpoint.
  2. It sends connection: close for HTTP/1.1 requests and a GOAWAY frame for HTTP/2. This forces clients to close their connection and reconnect to a new instance.

If the application next to the kuma-dp process quits immediately after the SIGTERM signal, there is a high chance that clients will still try to send traffic to this destination. To mitigate this, we need to support graceful shutdown in the application. For example, the application should wait X seconds to exit after receiving the first SIGTERM signal.
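
For example, a hypothetical shell wrapper that delays the application's exit after the first SIGTERM (a sketch; my-app and the 30-second delay are assumptions you would replace with your binary and your drain time):

    #!/bin/sh
    # Forward SIGTERM to the app only after a delay, so kuma-dp can drain first.
    trap 'sleep 30; kill -TERM "$child"' TERM
    my-app &        # hypothetical application binary
    child=$!
    wait "$child"   # interrupted by SIGTERM; the trap runs, then wait returns
    wait "$child"   # wait for the application to actually exit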

Consider using service probes to mark the data plane proxy as unhealthy while it is in a draining state.

If the data plane proxy is shut down gracefully, the Dataplane resource is automatically deleted by kuma-cp.

If the data plane proxy goes down ungracefully, the Dataplane resource isn't deleted immediately. The following sequence of events happens:

  1. After KUMA_METRICS_DATAPLANE_IDLE_TIMEOUT (default: 5 minutes) the data plane proxy is marked as Offline, because there is no longer an active xDS connection between the proxy and kuma-cp.
  2. After KUMA_RUNTIME_UNIVERSAL_DATAPLANE_CLEANUP_AGE (default: 72h) offline data plane proxies are deleted.

This guarantees that Dataplane resources are eventually cleaned up even in the case of ungraceful shutdown.
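
Both timeouts can be overridden through the control plane's configuration environment variables when starting kuma-cp; a sketch, with the values shown being the defaults:

    KUMA_METRICS_DATAPLANE_IDLE_TIMEOUT=5m \
    KUMA_RUNTIME_UNIVERSAL_DATAPLANE_CLEANUP_AGE=72h \
    kuma-cp run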

Indirect

The lifecycle is called “Indirect” because there is no strict dependency between creation of the Dataplane resource and startup of the data plane proxy. This approach is a good fit when external components manage the Dataplane lifecycle.

Joining the mesh

The Dataplane resource is created using the HTTP API or kumactl, before the data plane proxy is started. There is no support for templates; the resource must be a valid Dataplane configuration.
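
For example, a sketch of creating the resource with kumactl (the file contents mirror the backend Dataplane shown earlier, with concrete values instead of template variables):

    cat backend-dp.yaml
    type: Dataplane
    mesh: default
    name: my-backend-dp
    networking:
      address: 192.168.0.2
      inbound:
      - port: 8000
        servicePort: 80
        tags:
          kuma.io/service: backend

    kumactl apply -f backend-dp.yaml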

When the data plane proxy is started, it takes name and mesh as input arguments. Once the connection between the proxy and kuma-cp is established, kuma-cp finds the Dataplane resource with that name and mesh in the store.

    kuma-dp run \
      --name=my-backend-dp \
      --mesh=default \
      ...

To join the mesh in a graceful way, you can use service probes, just as in the Direct section.

Leaving the mesh

kuma-cp never deletes the Dataplane resource, whether the shutdown is graceful or ungraceful.

If the data plane proxy is shut down gracefully, the Dataplane resource is marked as Offline. Offline data plane proxies are deleted automatically after KUMA_RUNTIME_UNIVERSAL_DATAPLANE_CLEANUP_AGE, which defaults to 72h.

If the data plane proxy went down ungracefully, the following sequence of events happens:

  1. After KUMA_METRICS_DATAPLANE_IDLE_TIMEOUT (default: 5 minutes) the data plane proxy is marked as Offline, because there is no longer an active xDS connection between the proxy and kuma-cp.
  2. After KUMA_RUNTIME_UNIVERSAL_DATAPLANE_CLEANUP_AGE (default: 72h) offline data plane proxies are deleted.

To leave the mesh in a graceful way, you can use service probes, just as in the Direct section.

Envoy

Envoy has a powerful Admin API for monitoring and troubleshooting.

By default, kuma-dp starts Envoy Admin API on the loopback interface. The port is configured in the Dataplane entity:

    type: Dataplane
    mesh: default
    name: my-dp
    networking:
      admin:
        port: 1000
      # ...

If the admin section is empty or the port is equal to zero, the default port is taken from the Kuma control plane configuration.
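
With the configuration above, the Admin API is reachable on the loopback interface; for example (assuming port 1000 as configured):

    curl http://127.0.0.1:1000/ready
    curl http://127.0.0.1:1000/stats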

Dataplane configuration