Kubernetes

You can find instructions to install Kuma on Kubernetes for single-zone or multi-zone deployments. This page covers special steps for some Kubernetes distributions or versions, plus some troubleshooting help.

Helm

Adding the Kuma charts repository

To use Kuma with Helm charts, add the Kuma charts repository locally:

  helm repo add kuma https://kumahq.github.io/charts

You can fetch subsequent updates by running helm repo update.

Helm config

You can find a full reference of the Helm chart configuration values.

You can also set any control plane configuration option by using the controlPlane.envVars. prefix. Find detailed explanations in the control plane configuration page.
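
As an illustration, a control plane environment variable can be passed through that prefix on install or upgrade roughly as follows. KUMA_EXAMPLE_SETTING is a placeholder name, not a real option, and the release and namespace names follow the usual install examples:

  # Pass a control plane environment variable via the controlPlane.envVars prefix.
  # --set-string keeps the value a literal string, as environment variables expect.
  helm upgrade --install kuma kuma/kuma \
    --namespace kuma-system --create-namespace \
    --set-string "controlPlane.envVars.KUMA_EXAMPLE_SETTING=value"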

Argo CD

Kuma zones require a certificate to verify the connection between the control plane and a data plane proxy. The Kuma Helm chart autogenerates a self-signed certificate if one isn’t explicitly set. Argo CD uses helm template to compare and apply Kubernetes YAML, and helm template can’t run the chart logic that checks whether a certificate is already present. As a result, the certificate is replaced on each Argo CD redeployment. The solution is to explicitly set the certificates. See “Data plane proxy to control plane communication” to learn how to preconfigure Kuma with certificates.
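
As a minimal sketch only, pre-provisioning the certificate can look like the steps below. The secret name is illustrative and the controlPlane.tls.general.* value keys are assumptions; take the exact keys, secret layout, and certificate requirements from the “Data plane proxy to control plane communication” page.

  # Generate a long-lived self-signed certificate for the control plane service
  # (subject and SAN shown are illustrative).
  openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=kuma-control-plane.kuma-system.svc" \
    -addext "subjectAltName=DNS:kuma-control-plane.kuma-system.svc" \
    -keyout tls.key -out tls.crt

  # Store it as a secret in the control plane namespace.
  kubectl create namespace kuma-system
  kubectl create secret generic kuma-tls-general \
    --namespace kuma-system \
    --from-file=tls.crt --from-file=tls.key \
    --from-file=ca.crt=tls.crt

  # Point the chart at the pre-created secret instead of letting it autogenerate one.
  # The value key below is an assumption; a CA bundle value may also be required,
  # as described on that page.
  helm upgrade --install kuma kuma/kuma --namespace kuma-system \
    --set "controlPlane.tls.general.secretName=kuma-tls-general"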

If you use Argo Rollouts for blue-green deployments, configure the control plane with KUMA_RUNTIME_KUBERNETES_INJECTOR_IGNORED_SERVICE_SELECTOR_LABELS set to rollouts-pod-template-hash. This enables traffic shifting between the active and preview Services without traffic interruption.
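
Using the controlPlane.envVars. prefix described above, this can be set through Helm (or the equivalent values in your Argo CD Application) roughly as follows; the release and namespace names are illustrative:

  # Ignore the Argo Rollouts pod-template hash label when matching Services.
  helm upgrade kuma kuma/kuma --namespace kuma-system \
    --set-string "controlPlane.envVars.KUMA_RUNTIME_KUBERNETES_INJECTOR_IGNORED_SERVICE_SELECTOR_LABELS=rollouts-pod-template-hash"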

If you are using policies inside Argo-managed entities, you will want to work around argoproj/argo-cd#4764. To do so, disable the mesh owner reference by setting KUMA_RUNTIME_KUBERNETES_SKIP_MESH_OWNER_REFERENCE=true in your control plane configuration. Note that if you do this, deleting a mesh will not delete the resources attached to it.
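
This setting can likewise be applied through the controlPlane.envVars. prefix, for example:

  # Skip the mesh owner reference to work around argoproj/argo-cd#4764.
  helm upgrade kuma kuma/kuma --namespace kuma-system \
    --set-string "controlPlane.envVars.KUMA_RUNTIME_KUBERNETES_SKIP_MESH_OWNER_REFERENCE=true"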

Sidecars

Check the notes on DP lifecycle for Kubernetes for important considerations about sidecars with Kuma.

CNI

On Kubernetes there are two ways to redirect traffic to the sidecar:

  • Init containers, which need to run with elevated privileges.
  • CNI, which requires a little extra setup.

To use the CNI, follow the detailed instructions to configure the Kuma CNI.
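
For reference, enabling the CNI through the Helm chart looks roughly like the following. The paths and conf name shown are typical values for a Calico-based cluster and are only illustrative; take the correct values for your cluster’s CNI from those instructions:

  # Enable the Kuma CNI in chained mode (example values for a Calico-based cluster).
  helm upgrade --install kuma kuma/kuma \
    --namespace kuma-system --create-namespace \
    --set "cni.enabled=true" \
    --set "cni.chained=true" \
    --set "cni.netDir=/etc/cni/net.d" \
    --set "cni.binDir=/opt/cni/bin" \
    --set "cni.confName=10-calico.conflist"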

OpenShift

Transparent proxy

Starting with version 4.1, OpenShift uses nftables instead of iptables, so using the init container to redirect traffic to the proxy no longer works; use the kuma-cni instead.

Webhooks on OpenShift 3.11

By default, MutatingAdmissionWebhook and ValidatingAdmissionWebhook are disabled on OpenShift 3.11. To enable them, add the following pluginConfig to /etc/origin/master/master-config.yaml on the master node:

  admissionConfig:
    pluginConfig:
      MutatingAdmissionWebhook:
        configuration:
          apiVersion: apiserver.config.k8s.io/v1alpha1
          kubeConfigFile: /dev/null
          kind: WebhookAdmission
      ValidatingAdmissionWebhook:
        configuration:
          apiVersion: apiserver.config.k8s.io/v1alpha1
          kubeConfigFile: /dev/null
          kind: WebhookAdmission

After updating master-config.yaml, restart the cluster and install the control plane.
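
For reference, installing the control plane with Helm typically looks like this; the release name and namespace are the common defaults from the install instructions, and any OpenShift-specific chart values should also come from there:

  # Install the Kuma control plane from the charts repository added earlier.
  helm install --create-namespace --namespace kuma-system kuma kuma/kuma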

GKE Autopilot

By default, GKE Autopilot forbids the use of the NET_ADMIN Linux capability, which Kuma requires to set up the iptables rules that intercept inbound and outbound traffic.

It is possible to configure a GKE cluster in Autopilot mode so that the NET_ADMIN capability is authorized, by adding the following option to your gcloud command: --workload-policies=allow-net-admin

Full example:

  gcloud beta container \
    --project ${GCP_PROJECT} \
    clusters create-auto ${CLUSTER_NAME} \
    --region ${REGION} \
    --release-channel "regular" \
    --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
    --network "projects/${GCP_PROJECT}/global/networks/default" \
    --subnetwork "projects/${GCP_PROJECT}/regions/${REGION}/subnetworks/default" \
    --no-enable-master-authorized-networks \
    --cluster-ipv4-cidr=/20 \
    --workload-policies=allow-net-admin