AKS-to-AKS clustermesh preparation

This is a step-by-step guide on how to install and prepare AKS (Azure Kubernetes Service) clusters in BYOCNI mode to meet the requirements for the clustermesh feature.

In this guide we will install two AKS clusters in BYOCNI (Bring Your Own CNI) mode and connect them via clustermesh. This guide is not applicable to cross-cloud clustermesh, since the node IPs are not exposed outside of the Azure cloud.

Note

BYOCNI requires the aks-preview CLI extension with version >= 0.5.55, which itself requires an az CLI version >= 2.32.0.
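
If the extension or a sufficiently recent az CLI is not installed yet, the following commands should get you there (a minimal sketch; check the printed versions against the requirements above):

   # Check the installed az CLI version
   az version

   # Install the aks-preview extension, or update it if already present
   az extension add --name aks-preview
   az extension update --name aks-preview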

Install cluster one

  1. Create a resource group for the cluster (or set the environment variables to an existing resource group).

     export NAME="$(whoami)-$RANDOM"
     export AZURE_RESOURCE_GROUP="${NAME}-group"
     # westus2 can be changed to any available location (`az account list-locations`)
     az group create --name "${AZURE_RESOURCE_GROUP}" -l westus2
  2. Now that we have a resource group, we can create a VNet (virtual network). Creating a custom VNet is required so we can specify unique node, pod, and service CIDRs and make sure we don’t overlap with other clusters.

    Note

    In this case we use the 192.168.10.0/24 range, but this can be exchanged for any range except for 169.254.0.0/16, 172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24 which are reserved by Azure.

     az network vnet create \
         --resource-group "${AZURE_RESOURCE_GROUP}" \
         --name "${NAME}-cluster-net" \
         --address-prefixes 192.168.10.0/24 \
         --subnet-name "${NAME}-node-subnet" \
         --subnet-prefix 192.168.10.0/24

     # Store the ID of the created subnet
     export NODE_SUBNET_ID=$(az network vnet subnet show \
         --resource-group "${AZURE_RESOURCE_GROUP}" \
         --vnet-name "${NAME}-cluster-net" \
         --name "${NAME}-node-subnet" \
         --query id \
         -o tsv)
  3. We now have a virtual network and a subnet with the same CIDR. We can create an AKS cluster without a CNI and request to use our custom VNet and subnet.

    During creation we also request to use 10.10.0.0/16 as the pod CIDR and 10.11.0.0/16 as the service CIDR. These can be changed to any range except for Azure reserved ranges and ranges used by other clusters we intend to add to the clustermesh.

     az aks create \
         --resource-group "${AZURE_RESOURCE_GROUP}" \
         --name "${NAME}" \
         --network-plugin none \
         --pod-cidr "10.10.0.0/16" \
         --service-cidr "10.11.0.0/16" \
         --dns-service-ip "10.11.0.10" \
         --vnet-subnet-id "${NODE_SUBNET_ID}"

     # Get kubectl credentials; the command merges the new credentials
     # into the existing ~/.kube/config
     az aks get-credentials \
         --resource-group "${AZURE_RESOURCE_GROUP}" \
         --name "${NAME}"
  4. Install Cilium. It is important to give the cluster a unique cluster ID and to tell Cilium to use our custom pod CIDR.

     cilium install \
         --azure-resource-group "${AZURE_RESOURCE_GROUP}" \
         --cluster-id 1 \
         --config "cluster-pool-ipv4-cidr=10.10.0.0/16"
  5. Check the status of Cilium.

     cilium status
  6. Before we continue with cluster two, store the name of the current cluster.

     export CLUSTER1=${NAME}
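
Before moving on, you can optionally confirm that the worker nodes have become Ready now that a CNI is present (in a BYOCNI cluster, nodes stay NotReady until one is installed). A minimal check, assuming your current kubectl context points at the cluster we just created:

   # All nodes should report Ready once Cilium is up
   kubectl get nodes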

Install cluster two

Installing the second cluster uses the same commands but with slightly different arguments.

  1. Create a new resource group.

     export NAME="$(whoami)-$RANDOM"
     export AZURE_RESOURCE_GROUP="${NAME}-group"
     # eastus2 can be changed to any available location (`az account list-locations`)
     az group create --name "${AZURE_RESOURCE_GROUP}" -l eastus2
  2. Create a VNet in this resource group. Make sure to use a non-overlapping prefix.

    Note

    In this case we use the 192.168.20.0/24 range, but this can be exchanged for any range except for 169.254.0.0/16, 172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24 which are reserved by Azure.

     az network vnet create \
         --resource-group "${AZURE_RESOURCE_GROUP}" \
         --name "${NAME}-cluster-net" \
         --address-prefixes 192.168.20.0/24 \
         --subnet-name "${NAME}-node-subnet" \
         --subnet-prefix 192.168.20.0/24

     # Store the ID of the created subnet
     export NODE_SUBNET_ID=$(az network vnet subnet show \
         --resource-group "${AZURE_RESOURCE_GROUP}" \
         --vnet-name "${NAME}-cluster-net" \
         --name "${NAME}-node-subnet" \
         --query id \
         -o tsv)
  3. Create an AKS cluster without a CNI and request to use our custom VNet and subnet.

    During creation we also request to use 10.20.0.0/16 as the pod CIDR and 10.21.0.0/16 as the service CIDR. These can be changed to any range except for Azure reserved ranges and ranges used by other clusters we intend to add to the clustermesh.

     az aks create \
         --resource-group "${AZURE_RESOURCE_GROUP}" \
         --name "${NAME}" \
         --network-plugin none \
         --pod-cidr "10.20.0.0/16" \
         --service-cidr "10.21.0.0/16" \
         --dns-service-ip "10.21.0.10" \
         --vnet-subnet-id "${NODE_SUBNET_ID}"

     # Get kubectl credentials and add them to ~/.kube/config
     az aks get-credentials \
         --resource-group "${AZURE_RESOURCE_GROUP}" \
         --name "${NAME}"
  4. Install Cilium. It is important to give the cluster a unique cluster ID and to tell Cilium to use our custom pod CIDR.

     cilium install \
         --azure-resource-group "${AZURE_RESOURCE_GROUP}" \
         --cluster-id 2 \
         --config "cluster-pool-ipv4-cidr=10.20.0.0/16"
  5. Check the status of Cilium.

     cilium status
  6. Before we continue with peering and clustermesh, store the current cluster name.

     export CLUSTER2=${NAME}
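
Both clusters’ credentials are now merged into ~/.kube/config. Optionally, verify that a kubectl context exists for each cluster; by default az aks get-credentials names the context after the cluster, so the stored cluster names can be used directly:

   # Both contexts should be listed without errors
   kubectl config get-contexts "${CLUSTER1}" "${CLUSTER2}"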

Peering virtual networks

Virtual networks can’t connect to each other by default. We can enable cross-VNet communication by creating a bi-directional “peering”.

We will start by creating a peering from cluster one to cluster two using the following commands.

   export VNET_ID=$(az network vnet show \
       --resource-group "${CLUSTER2}-group" \
       --name "${CLUSTER2}-cluster-net" \
       --query id -o tsv)

   az network vnet peering create \
       -g "${CLUSTER1}-group" \
       --name "peering-${CLUSTER1}-to-${CLUSTER2}" \
       --vnet-name "${CLUSTER1}-cluster-net" \
       --remote-vnet "${VNET_ID}" \
       --allow-vnet-access

This allows outbound traffic from cluster one to cluster two. To allow bi-directional traffic, we need to add a peering in the other direction as well.

   export VNET_ID=$(az network vnet show \
       --resource-group "${CLUSTER1}-group" \
       --name "${CLUSTER1}-cluster-net" \
       --query id -o tsv)

   az network vnet peering create \
       -g "${CLUSTER2}-group" \
       --name "peering-${CLUSTER2}-to-${CLUSTER1}" \
       --vnet-name "${CLUSTER2}-cluster-net" \
       --remote-vnet "${VNET_ID}" \
       --allow-vnet-access

Node-to-node traffic between clusters is now possible. All requirements for clustermesh are met. Enabling clustermesh is explained in Setting up Cluster Mesh.
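
As an optional sanity check before enabling clustermesh, you can query the peering state on either side; a fully established peering reports Connected:

   az network vnet peering show \
       -g "${CLUSTER1}-group" \
       --vnet-name "${CLUSTER1}-cluster-net" \
       --name "peering-${CLUSTER1}-to-${CLUSTER2}" \
       --query peeringState -o tsv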