Installation using Kubespray
This guide walks through using Kubespray to create an AWS Kubernetes cluster running Cilium as the CNI. The guide uses:
- Kubespray v2.6.0
- The latest released Cilium version (instructions for selecting the version are given below)
Please consult Kubespray Prerequisites and Cilium System Requirements.
Installing Kubespray
$ git clone --branch v2.6.0 https://github.com/kubernetes-sigs/kubespray
Install dependencies from requirements.txt
$ cd kubespray
$ sudo pip install -r requirements.txt
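If you would rather not install the Python dependencies system-wide with sudo, a virtual environment works as well (a minimal sketch; the directory name is arbitrary):
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install -r requirements.txt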
Infrastructure Provisioning
We will use Terraform for provisioning AWS infrastructure.
Configure AWS credentials
Export the variables for your AWS credentials
export AWS_ACCESS_KEY_ID="www"
export AWS_SECRET_ACCESS_KEY="xxx"
export AWS_SSH_KEY_NAME="yyy"
export AWS_DEFAULT_REGION="zzz"
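To confirm the credentials are picked up before provisioning, you can query your identity with the AWS CLI (assuming it is installed):
$ aws sts get-caller-identity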
Configure Terraform Variables
We will start by specifying the infrastructure needed for the Kubernetes cluster.
$ cd contrib/terraform/aws
$ cp terraform.tfvars.example terraform.tfvars
Open the file and change any defaults, particularly the number of master, etcd, and worker nodes. You can set the master and etcd count to 1 for deployments that don’t need high availability (see the example override after the sample file below). By default, this tutorial will create:
- VPC with 2 public and private subnets
- Bastion Hosts and NAT Gateways in the Public Subnet
- Three of each (masters, etcd, and worker nodes) in the Private Subnet
- AWS ELB in the Public Subnet for accessing the Kubernetes API from the internet
- Terraform scripts using CoreOS as the base image
Example terraform.tfvars file:
#Global Vars
aws_cluster_name = "kubespray"
#VPC Vars
aws_vpc_cidr_block = "XXX.XXX.192.0/18"
aws_cidr_subnets_private = ["XXX.XXX.192.0/20","XXX.XXX.208.0/20"]
aws_cidr_subnets_public = ["XXX.XXX.224.0/20","XXX.XXX.240.0/20"]
#Bastion Host
aws_bastion_size = "t2.medium"
#Kubernetes Cluster
aws_kube_master_num = 3
aws_kube_master_size = "t2.medium"
aws_etcd_num = 3
aws_etcd_size = "t2.medium"
aws_kube_worker_num = 3
aws_kube_worker_size = "t2.medium"
#Settings AWS ELB
aws_elb_api_port = 6443
k8s_secure_api_port = 6443
kube_insecure_apiserver_address = "0.0.0.0"
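As mentioned above, for a deployment that does not need high availability you could reduce the master and etcd counts in the same file; only the changed variables are shown:
aws_kube_master_num = 1
aws_etcd_num = 1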
Apply the configuration
Run terraform init to initialize the following modules:
- module.aws-vpc
- module.aws-elb
- module.aws-iam
$ terraform init
Once initialized, execute:
$ terraform plan -out=aws_kubespray_plan
This will generate a file, aws_kubespray_plan, describing an execution plan of the infrastructure that will be created on AWS. To apply it, execute:
$ terraform apply "aws_kubespray_plan"
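Terraform also records output values for the resources it created; you can list them with the command below (the exact output names depend on the Kubespray release):
$ terraform output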
Terraform automatically creates an Ansible inventory file at inventory/hosts.
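Before running Ansible, it is worth a quick look at the generated inventory to confirm the bastion, master, etcd, and worker entries are present. From the root of the Kubespray clone:
$ cat ./inventory/hosts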
Installing Kubernetes cluster with Cilium as CNI
Kubespray uses Ansible as its substrate for provisioning and orchestration. Once the infrastructure is created, you can run the Ansible playbook to install Kubernetes and all the required dependencies. Execute the command below in the cloned Kubespray repository, providing the correct path of the AWS EC2 SSH private key in ansible_ssh_private_key_file=<path to EC2 SSH private key file>.
We recommend using the latest released Cilium version by passing the variable when running the ansible-playbook command. For example, you could add the following flag to the command below: -e cilium_version=v1.11.0.
$ ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=core -e bootstrap_os=coreos -e kube_network_plugin=cilium -b --become-user=root --flush-cache -e ansible_ssh_private_key_file=<path to EC2 SSH private key file>
If you are interested in configuring your Kubernetes cluster setup, you should consider copying the sample inventory. Then, you can edit the variables in the relevant file in the group_vars directory.
$ cp -r inventory/sample inventory/my-inventory
$ cp ./inventory/hosts ./inventory/my-inventory/hosts
$ echo 'cilium_version: "v1.11.0"' >> ./inventory/my-inventory/group_vars/k8s_cluster/k8s-net-cilium.yml
$ ansible-playbook -i ./inventory/my-inventory/hosts ./cluster.yml -e ansible_user=core -e bootstrap_os=coreos -e kube_network_plugin=cilium -b --become-user=root --flush-cache -e ansible_ssh_private_key_file=<path to EC2 SSH private key file>
Validate Cluster
To check whether the cluster was created successfully, SSH into the bastion host as the user core.
$ # Get information about the bastion host
$ cat ssh-bastion.conf
$ ssh -i ~/path/to/ec2-key-file.pem core@public_ip_of_bastion_host
Execute the commands below from the bastion host. If kubectl isn’t installed on the bastion host, you can log in to a master node to run the commands below. You may need to copy the private key to the bastion host to access the master node, as sketched below.
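A minimal sketch of that workflow, reusing the key path shown above; the master's private IP is a placeholder you can take from inventory/hosts:
$ # Copy the EC2 private key to the bastion, then hop from the bastion to a master node
$ scp -i ~/path/to/ec2-key-file.pem ~/path/to/ec2-key-file.pem core@public_ip_of_bastion_host:~/
$ ssh -i ~/path/to/ec2-key-file.pem core@public_ip_of_bastion_host
$ ssh -i ~/ec2-key-file.pem core@<master_private_ip>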
Validate the Installation
Cilium CLI
Install the latest version of the Cilium CLI. The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. clustermesh, Hubble).
Linux
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz{,.sha256sum}
macOS
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-darwin-amd64.tar.gz{,.sha256sum}
shasum -a 256 -c cilium-darwin-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-darwin-amd64.tar.gz /usr/local/bin
rm cilium-darwin-amd64.tar.gz{,.sha256sum}
For other platforms, see the full page of releases.
To validate that Cilium has been properly installed, you can run:
$ cilium status --wait
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Hubble: disabled
\__/¯¯\__/ ClusterMesh: disabled
\__/
DaemonSet cilium Desired: 2, Ready: 2/2, Available: 2/2
Deployment cilium-operator Desired: 2, Ready: 2/2, Available: 2/2
Containers: cilium-operator Running: 2
cilium Running: 2
Image versions cilium quay.io/cilium/cilium:v1.9.5: 2
cilium-operator quay.io/cilium/operator-generic:v1.9.5: 2
Run the following command to validate that your cluster has proper network connectivity:
$ cilium connectivity test
ℹ️ Monitor aggregation detected, will skip some flow validation steps
✨ [k8s-cluster] Creating namespace for connectivity check...
(...)
---------------------------------------------------------------------------------------------------------------------
📋 Test Report
---------------------------------------------------------------------------------------------------------------------
✅ 69/69 tests successful (0 warnings)
Congratulations! You have a fully functional Kubernetes cluster with Cilium. 🎉
Manually
You can monitor Cilium and all required components as they are being installed:
$ kubectl -n kube-system get pods --watch
NAME READY STATUS RESTARTS AGE
cilium-operator-cb4578bc5-q52qk 0/1 Pending 0 8s
cilium-s8w5m 0/1 PodInitializing 0 7s
coredns-86c58d9df4-4g7dd 0/1 ContainerCreating 0 8m57s
coredns-86c58d9df4-4l6b2 0/1 ContainerCreating 0 8m57s
It may take a couple of minutes for all components to come up:
cilium-operator-cb4578bc5-q52qk 1/1 Running 0 4m13s
cilium-s8w5m 1/1 Running 0 4m12s
coredns-86c58d9df4-4g7dd 1/1 Running 0 13m
coredns-86c58d9df4-4l6b2 1/1 Running 0 13m
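If the cilium CLI is not available where you are running kubectl, a rough equivalent is to query an agent directly (assuming the DaemonSet is named cilium, as above, and the agent image ships the cilium status command):
$ kubectl -n kube-system exec ds/cilium -- cilium status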
You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this.
kubectl create ns cilium-test
Deploy the check with:
kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.11/examples/kubernetes/connectivity-check/connectivity-check.yaml
It will deploy a series of deployments which will use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant and the readiness and liveness gates indicate success or failure of the test:
$ kubectl get pods -n cilium-test
NAME READY STATUS RESTARTS AGE
echo-a-76c5d9bd76-q8d99 1/1 Running 0 66s
echo-b-795c4b4f76-9wrrx 1/1 Running 0 66s
echo-b-host-6b7fc94b7c-xtsff 1/1 Running 0 66s
host-to-b-multi-node-clusterip-85476cd779-bpg4b 1/1 Running 0 66s
host-to-b-multi-node-headless-dc6c44cb5-8jdz8 1/1 Running 0 65s
pod-to-a-79546bc469-rl2qq 1/1 Running 0 66s
pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p 1/1 Running 0 66s
pod-to-a-denied-cnp-6967cb6f7f-7h9fn 1/1 Running 0 66s
pod-to-b-intra-node-nodeport-9b487cf89-6ptrt 1/1 Running 0 65s
pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw 1/1 Running 0 66s
pod-to-b-multi-node-headless-7d44b85d69-mtscc 1/1 Running 0 66s
pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82 1/1 Running 0 65s
pod-to-external-1111-d56f47579-d79dz 1/1 Running 0 66s
pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7 1/1 Running 0 66s
Note
If you deploy the connectivity check to a single node cluster, pods that check multi-node functionalities will remain in the Pending state. This is expected since these pods need at least 2 nodes to be scheduled successfully.
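To confirm that a Pending pod is only waiting on scheduling rather than failing for another reason, you can inspect its events; the pod name is a placeholder:
$ kubectl -n cilium-test describe pod <pod-name>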
Once done with the test, remove the cilium-test namespace:
kubectl delete ns cilium-test
Delete Cluster
$ cd contrib/terraform/aws
$ terraform destroy