Platform-Specific Prerequisites

This document covers any platform- or environment-specific prerequisites for installing Istio in ambient mode.

Platform

Google Kubernetes Engine (GKE)

  1. On GKE, Istio components with the system-node-critical priorityClassName can only be installed in namespaces that have a ResourceQuota defined. By default in GKE, only kube-system has a defined ResourceQuota for the node-critical class. istio-cni and ztunnel both require the node-critical class, and so in GKE, both components must either:

    • Be installed into kube-system (not istio-system)

    • Be installed into another namespace (such as istio-system) in which a ResourceQuota has been manually created, for example:

      apiVersion: v1
      kind: ResourceQuota
      metadata:
        name: gcp-critical-pods
        namespace: istio-system
      spec:
        hard:
          pods: 1000
        scopeSelector:
          matchExpressions:
          - operator: In
            scopeName: PriorityClass
            values:
            - system-node-critical
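
    If you choose the first option instead, a minimal sketch of installing both components into kube-system with Helm might look like the following; the release names, the istio chart repository alias, and the --wait flag are conventional choices rather than requirements:

      $ helm install istio-cni istio/cni -n kube-system --set profile=ambient --wait
      $ helm install ztunnel istio/ztunnel -n kube-system --wait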

Minikube

  1. If you are using Minikube with the Docker driver, you must append --set cni.cniNetnsDir="/var/run/docker/netns" to the helm install command so that the istio-cni node agent can correctly manage and capture pods on the node.
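
    For example, a Helm install of the istio-cni chart with this flag appended might look like the following sketch; the release name, istio-system namespace, ambient profile setting, and istio repository alias are assumptions based on a typical Helm-based ambient install:

      $ helm install istio-cni istio/cni -n istio-system --set profile=ambient --set cni.cniNetnsDir="/var/run/docker/netns"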

MicroK8s

  1. If you are using MicroK8s, you must append --set values.cni.cniConfDir=/var/snap/microk8s/current/args/cni-network --set values.cni.cniBinDir=/var/snap/microk8s/current/opt/cni/bin to the helm install command, as MicroK8s uses nonstandard locations for CNI configuration and binaries.
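
    For example, with istioctl (which accepts the same values.* overrides) the full install command might look like the following sketch; the ambient profile and --skip-confirmation flag are assumptions mirroring the k3d example below:

      $ istioctl install --set profile=ambient --skip-confirmation --set values.cni.cniConfDir=/var/snap/microk8s/current/args/cni-network --set values.cni.cniBinDir=/var/snap/microk8s/current/opt/cni/bin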

K3D

  1. If you are using k3d with the default flannel CNI, you must append --set values.cni.cniConfDir=/var/lib/rancher/k3s/agent/etc/cni/net.d --set values.cni.cniBinDir=/bin/ to your istioctl install or helm install command to install Istio with the ambient profile.

  2. Create a cluster and disable Traefik so it doesn’t conflict with Istio’s ingress gateways:

    $ k3d cluster create --api-port 6550 -p '9080:80@loadbalancer' -p '9443:443@loadbalancer' --agents 2 --k3s-arg '--disable=traefik@server:*'
  3. Install Istio with the ambient profile using istioctl:

    $ istioctl install --set profile=ambient --skip-confirmation --set values.cni.cniConfDir=/var/lib/rancher/k3s/agent/etc/cni/net.d --set values.cni.cniBinDir=/bin

K3S

  1. If you are using K3S and one of its bundled CNIs, you must append --set values.cni.cniConfDir=/var/lib/rancher/k3s/agent/etc/cni/net.d --set values.cni.cniBinDir=/var/lib/rancher/k3s/data/current/bin/ to your istioctl install or helm install command to install Istio in ambient mode, as K3S uses nonstandard locations for CNI configuration and binaries. These nonstandard locations may also be overridden, as described in the K3S documentation. If you are using K3S with a custom, non-bundled CNI, you must use the correct paths for that CNI, e.g. /etc/cni/net.d; see the K3S documentation for details.
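
    For example, a full istioctl install on a K3S cluster using a bundled CNI might look like the following sketch; the ambient profile and --skip-confirmation flag are assumptions mirroring the k3d example above:

      $ istioctl install --set profile=ambient --skip-confirmation --set values.cni.cniConfDir=/var/lib/rancher/k3s/agent/etc/cni/net.d --set values.cni.cniBinDir=/var/lib/rancher/k3s/data/current/bin/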

CNI

Cilium

  1. Cilium currently defaults to proactively deleting other CNI plugins and their config, and must be configured with cni.exclusive = false to properly support chaining. See the Cilium documentation for more details.
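
    On a Helm-managed Cilium install, this can be set with a values upgrade such as the following sketch; the release name, chart reference, and kube-system namespace are assumptions about how Cilium was installed:

      $ helm upgrade cilium cilium/cilium -n kube-system --reuse-values --set cni.exclusive=false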

  2. Due to how Cilium manages node identity and internally allow-lists node-level health probes to pods, applying a default-DENY NetworkPolicy in a Cilium CNI install underlying Istio in ambient mode will cause kubelet health probes (which are by default exempted from NetworkPolicy enforcement by Cilium) to be blocked.

    This can be resolved by applying the following CiliumClusterwideNetworkPolicy:

    1. apiVersion: "cilium.io/v2"
    2. kind: CiliumClusterwideNetworkPolicy
    3. metadata:
    4. name: "allow-ambient-hostprobes"
    5. spec:
    6. description: "Allows SNAT-ed kubelet health check probes into ambient pods"
    7. endpointSelector: {}
    8. ingress:
    9. - fromCIDR:
    10. - "169.254.7.127/32"

    Please see issue #49277 and CiliumClusterwideNetworkPolicy for more details.
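
    If you save the manifest above to a file (for example, a hypothetical allow-ambient-hostprobes.yaml), it can be applied and checked like any other cluster-scoped resource:

      $ kubectl apply -f allow-ambient-hostprobes.yaml
      $ kubectl get ciliumclusterwidenetworkpolicies.cilium.io allow-ambient-hostprobes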