Platform-Specific Prerequisites

This document covers any platform- or environment-specific prerequisites for installing Istio in ambient mode.

Platform

Certain Kubernetes environments require you to set various Istio configuration options to support them.

Google Kubernetes Engine (GKE)

On GKE, Istio components with the system-node-critical priorityClassName can only be installed in namespaces that have a ResourceQuota defined. By default in GKE, only kube-system has a defined ResourceQuota for the node-critical class. The Istio CNI node agent and ztunnel both require the node-critical class, and so in GKE, both components must either:

  • Be installed into kube-system (not istio-system)
  • Be installed into another namespace (such as istio-system) in which a ResourceQuota has been manually created, for example:
  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: gcp-critical-pods
    namespace: istio-system
  spec:
    hard:
      pods: 1000
    scopeSelector:
      matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values:
        - system-node-critical
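
As a minimal sketch, you can create and then verify the quota before installing the node-critical components (resourcequota.yaml is a hypothetical file name containing the manifest above):

  $ kubectl apply -f resourcequota.yaml
  $ kubectl get resourcequota gcp-critical-pods -n istio-system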

Amazon Elastic Kubernetes Service (EKS)

If you are using EKS:

  • with Amazon’s VPC CNI
  • with Pod ENI trunking enabled
  • and you are using EKS pod-attached SecurityGroups via SecurityGroupPolicy

then POD_SECURITY_GROUP_ENFORCING_MODE must be explicitly set to standard, or pod health probes (which are by default silently exempted from all policy enforcement by the VPC CNI) will fail. This is because Istio uses a link-local SNAT address for kubelet health probes, which Amazon’s VPC CNI is not aware of, and the VPC CNI does not have an option to exempt link-local addresses from policy enforcement.

You can check if you have pod ENI trunking enabled by running the following command:

  $ kubectl set env daemonset aws-node -n kube-system --list | grep ENABLE_POD_ENI
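
If trunking is enabled, the filtered output will typically include a line like ENABLE_POD_ENI=true; if the variable is unset or false, the conditions above do not apply.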

You can check if you have any pod-attached security groups in your cluster by running the following command:

  $ kubectl get securitygrouppolicies.vpcresources.k8s.aws

You can set POD_SECURITY_GROUP_ENFORCING_MODE=standard by running the following command, and recycling affected pods:

  $ kubectl set env daemonset aws-node -n kube-system POD_SECURITY_GROUP_ENFORCING_MODE=standard
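
After the aws-node daemonset rolls out the change and affected pods have been recycled, you can confirm the new value with the same --list flag used above:

  $ kubectl set env daemonset aws-node -n kube-system --list | grep POD_SECURITY_GROUP_ENFORCING_MODE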

k3d

When using k3d with the default Flannel CNI, you must append the correct platform value to your installation commands, as k3d uses nonstandard locations for CNI configuration and binaries, which requires some Helm overrides.

  1. Create a cluster with Traefik disabled so it doesn’t conflict with Istio’s ingress gateways:

    $ k3d cluster create --api-port 6550 -p '9080:80@loadbalancer' -p '9443:443@loadbalancer' --agents 2 --k3s-arg '--disable=traefik@server:*'
  2. Set global.platform=k3d when installing Istio charts. For example:

    $ helm install istio-cni istio/cni -n istio-system --set profile=ambient --set global.platform=k3d --wait

    Or, with istioctl:

    $ istioctl install --set profile=ambient --set values.global.platform=k3d
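
The same global.platform=k3d value applies to any other Istio charts you install on the cluster. As a sketch (assuming the istio/istiod and istio/ztunnel charts from the same Helm repository, as used in the OpenShift examples below):

  $ helm install istiod istio/istiod -n istio-system --set profile=ambient --set global.platform=k3d --wait
  $ helm install ztunnel istio/ztunnel -n istio-system --set profile=ambient --set global.platform=k3d --wait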

K3s

When using K3s and one of its bundled CNIs, you must append the correct platform value to your installation commands, as K3s uses nonstandard locations for CNI configuration and binaries, which requires some Helm overrides. For the default K3s paths, Istio provides built-in overrides based on the global.platform value.

  $ helm install istio-cni istio/cni -n istio-system --set profile=ambient --set global.platform=k3s --wait

  Or, with istioctl:

  $ istioctl install --set profile=ambient --set values.global.platform=k3s

However, these locations may be overridden in K3s, according to the K3s documentation. If you are using K3s with a custom, non-bundled CNI, you must manually specify the correct paths for that CNI (e.g. /etc/cni/net.d); see the K3s documentation for details. For example:

  $ helm install istio-cni istio/cni -n istio-system --set profile=ambient --wait --set cniConfDir=/var/lib/rancher/k3s/agent/etc/cni/net.d --set cniBinDir=/var/lib/rancher/k3s/data/current/bin/

  Or, with istioctl:

  $ istioctl install --set profile=ambient --set values.cni.cniConfDir=/var/lib/rancher/k3s/agent/etc/cni/net.d --set values.cni.cniBinDir=/var/lib/rancher/k3s/data/current/bin/
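
If you are unsure which paths your K3s installation actually uses, you can inspect the node directly; for example (the paths shown are the common K3s defaults and may differ on your system):

  $ ls /var/lib/rancher/k3s/agent/etc/cni/net.d
  $ ls /var/lib/rancher/k3s/data/current/bin/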

MicroK8s

If you are installing Istio on MicroK8s, you must append the correct platform value to your installation commands, as MicroK8s uses non-standard locations for CNI configuration and binaries. For example:

  $ helm install istio-cni istio/cni -n istio-system --set profile=ambient --set global.platform=microk8s --wait

  Or, with istioctl:

  $ istioctl install --set profile=ambient --set values.global.platform=microk8s

minikube

If you are using minikube with the Docker driver, you must append the correct platform value to your installation commands, as minikube with Docker uses a nonstandard bind mount path for containers. For example:

  $ helm install istio-cni istio/cni -n istio-system --set profile=ambient --set global.platform=minikube --wait

  Or, with istioctl:

  $ istioctl install --set profile=ambient --set values.global.platform=minikube
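
For reference, a minikube cluster using the Docker driver (the configuration this override applies to) can be created with, for example:

  $ minikube start --driver=docker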

Red Hat OpenShift

OpenShift requires that ztunnel and istio-cni components are installed in the kube-system namespace, and that you set global.platform=openshift for all charts.

When installing with Helm, you must pass --set global.platform=openshift for every chart you install, for example with the istiod chart:

  $ helm install istiod istio/istiod -n istio-system --set profile=ambient --set global.platform=openshift --wait

In addition, you must install istio-cni and ztunnel in the kube-system namespace, for example:

  $ helm install istio-cni istio/cni -n kube-system --set profile=ambient --set global.platform=openshift --wait
  $ helm install ztunnel istio/ztunnel -n kube-system --set profile=ambient --set global.platform=openshift --wait

Or, if you are installing with istioctl, use the openshift-ambient profile instead:

  $ istioctl install --set profile=openshift-ambient --skip-confirmation

CNI plugins

The following configurations apply to all platforms, when certain CNI plugins are used:

Cilium

  1. Cilium currently defaults to proactively deleting other CNI plugins and their config, and must be configured with cni.exclusive=false to properly support chaining; see the Cilium documentation for more details, and the example after this list.

  2. Cilium’s BPF masquerading is currently disabled by default, and has issues with Istio’s use of link-local IPs for Kubernetes health checking. Enabling BPF masquerading via bpf.masquerade=true is not currently supported, and results in non-functional pod health checks in Istio ambient. Cilium’s default iptables masquerading implementation should continue to function correctly.

  3. Due to how Cilium manages node identity and internally allow-lists node-level health probes to pods, applying any default-DENY NetworkPolicy in a Cilium CNI install underlying Istio in ambient mode will cause kubelet health probes (which are by default silently exempted from all policy enforcement by Cilium) to be blocked. This is because Istio uses a link-local SNAT address for kubelet health probes, which Cilium is not aware of, and Cilium does not have an option to exempt link-local addresses from policy enforcement.

    This can be resolved by applying the following CiliumClusterwideNetworkPolicy:

    apiVersion: "cilium.io/v2"
    kind: CiliumClusterwideNetworkPolicy
    metadata:
      name: "allow-ambient-hostprobes"
    spec:
      description: "Allows SNAT-ed kubelet health check probes into ambient pods"
      enableDefaultDeny:
        egress: false
        ingress: false
      endpointSelector: {}
      ingress:
      - fromCIDR:
        - "169.254.7.127/32"

    This policy override is not required unless you already have other default-deny NetworkPolicies or CiliumNetworkPolicies applied in your cluster.

    Please see issue #49277 and the CiliumClusterwideNetworkPolicy documentation for more details.
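
For the first point above, a minimal sketch of disabling exclusive CNI ownership on an existing Helm-managed Cilium installation (the release name cilium, the kube-system namespace, and the cilium/cilium chart reference are assumptions; adjust them to match your environment):

  $ helm upgrade cilium cilium/cilium -n kube-system --reuse-values --set cni.exclusive=false

Depending on your setup, the Cilium agent pods may need to be restarted for the change to take effect.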