Installation on OpenShift OKD
OpenShift Requirements
- Choose a preferred cloud provider. This guide was tested on AWS, Azure, and GCP from a Linux host.
- Read the OpenShift documentation to find out about provider-specific prerequisites.
- Get the OpenShift Installer.
Note
It is highly recommended to read the OpenShift documentation, unless you have installed OpenShift in the past. Here are a few notes that you may find useful.
- With the AWS provider, openshift-install will not work properly when MFA credentials are stored in ~/.aws/credentials; traditional credentials are required.
- With the Azure provider, openshift-install will prompt for credentials and store them in ~/.azure/osServicePrincipal.json; it doesn’t simply pick up az login credentials. It’s recommended to set up a dedicated service principal and use it.
- With the GCP provider, openshift-install will only work with a service account key, which has to be set using the GOOGLE_CREDENTIALS environment variable (e.g. GOOGLE_CREDENTIALS=service-account.json, see the example below). Follow the OpenShift Installer documentation to assign the required roles to your service account.
Create an OpenShift OKD Cluster
First, set the cluster name:
CLUSTER_NAME="cluster-1"
Now, create configuration files:
Note
The sample output below shows the AWS provider, but the process works the same way with other providers.
$ openshift-install create install-config --dir "${CLUSTER_NAME}"
? SSH Public Key ~/.ssh/id_rsa.pub
? Platform aws
INFO Credentials loaded from default AWS environment variables
? Region eu-west-1
? Base Domain openshift-test-1.cilium.rocks
? Cluster Name cluster-1
? Pull Secret [? for help] **********************************
Next, set networkType: Cilium:
sed -i "s/networkType: .*/networkType: Cilium/" "${CLUSTER_NAME}/install-config.yaml"
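To confirm the change, you can quickly inspect the field (an optional sanity check, not part of the official steps):
grep 'networkType:' "${CLUSTER_NAME}/install-config.yaml"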
The resulting configuration will look like this:
apiVersion: v1
baseDomain: ilya-openshift-test-1.cilium.rocks
compute:
- architecture: amd64
hyperthreading: Enabled
name: worker
platform: {}
replicas: 3
controlPlane:
architecture: amd64
hyperthreading: Enabled
name: master
platform: {}
replicas: 3
metadata:
creationTimestamp: null
name: cluster-1
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: Cilium
serviceNetwork:
- 172.30.0.0/16
platform:
aws:
region: eu-west-1
publish: External
pullSecret: '{"auths":{"fake":{"auth": "bar"}}}'
sshKey: |
ssh-rsa <REDACTED>
You may wish to make a few changes, e.g. increase the number of nodes.
If you do change any of the CIDRs, you will need to make sure that the Helm values in ${CLUSTER_NAME}/manifests/cluster-network-07-cilium-ciliumconfig.yaml reflect those changes. Namely, clusterNetwork should match ipv4NativeRoutingCIDR, clusterPoolIPv4PodCIDRList and clusterPoolIPv4MaskSize. Also make sure that the clusterNetwork does not conflict with machineNetwork (which represents the VPC CIDR in AWS).
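If you are unsure which CIDRs are currently configured, one convenient way to list them from the install config (a quick check, not a required step):
# Show the cluster, machine and service network CIDRs currently in install-config.yaml.
grep -A2 -E 'clusterNetwork:|machineNetwork:|serviceNetwork:' "${CLUSTER_NAME}/install-config.yaml"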
Warning
Ensure that there are multiple replicas of the controlPlane. A single controlPlane replica will cause the cluster bootstrap to fail during installation.
Next, generate OpenShift manifests:
openshift-install create manifests --dir "${CLUSTER_NAME}"
Next, obtain the Cilium manifests from the cilium/cilium-olm repository and copy them to ${CLUSTER_NAME}/manifests:
cilium_olm_rev="master"
cilium_version="1.11.7"
curl --silent --location --fail --show-error "https://github.com/cilium/cilium-olm/archive/${cilium_olm_rev}.tar.gz" --output /tmp/cilium-olm.tgz
tar -C /tmp -xf /tmp/cilium-olm.tgz
cp /tmp/cilium-olm-${cilium_olm_rev}/manifests/cilium.v${cilium_version}/* "${CLUSTER_NAME}/manifests"
rm -rf -- /tmp/cilium-olm.tgz "/tmp/cilium-olm-${cilium_olm_rev}"
At this stage, the manifests directory contains everything needed to install Cilium. To get a list of the Cilium manifests, run:
ls ${CLUSTER_NAME}/manifests/cluster-network-*-cilium-*
You can set any custom Helm values by editing ${CLUSTER_NAME}/manifests/cluster-network-07-cilium-ciliumconfig.yaml.
It is also possible to update Helm values once the cluster is running by changing the CiliumConfig object, e.g. with kubectl edit ciliumconfig -n cilium cilium. You may need to restart the Cilium agent pods for certain options to take effect (see the example below).
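For example, assuming the default cilium namespace used by these manifests and the standard cilium agent DaemonSet name (adjust if your deployment differs), a value change followed by an agent restart could look like this:
# Edit Helm values stored in the CiliumConfig object on the running cluster.
kubectl edit ciliumconfig -n cilium cilium
# Restart the Cilium agent DaemonSet so options that are only read at startup take effect.
kubectl -n cilium rollout restart daemonset/cilium
kubectl -n cilium rollout status daemonset/cilium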
Note
If you are not using a real OpenShift pull secret, you will not be able to install the Cilium OLM operator from the Red Hat registry. You can fix this by running:
sed -i 's|image:\ registry.connect.redhat.com/isovalent/|image:\ quay.io/cilium/|g' \
"${CLUSTER_NAME}/manifests/cluster-network-06-cilium-00002-cilium-olm-deployment.yaml" \
${CLUSTER_NAME}/manifests/cluster-network-06-cilium-00014-cilium.*-clusterserviceversion.yaml
Create the cluster:
Note
The sample output below shows the AWS provider, but the process works the same way with other providers.
$ openshift-install create cluster --dir "${CLUSTER_NAME}"
INFO Consuming OpenShift Install (Manifests) from target directory
INFO Consuming Master Machines from target directory
INFO Consuming Worker Machines from target directory
INFO Consuming Openshift Manifests from target directory
INFO Consuming Common Manifests from target directory
INFO Credentials loaded from the "default" profile in file "/home/twp/.aws/credentials"
INFO Creating infrastructure resources...
INFO Waiting up to 20m0s for the Kubernetes API at https://api.cluster-name.ilya-openshift-test-1.cilium.rocks:6443...
INFO API v1.20.0-1058+7d0a2b269a2741-dirty up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 40m0s for the cluster at https://api.cluster-name.ilya-openshift-test-1.cilium.rocks:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/twp/okd/cluster-name/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.cluster-name.ilya-openshift-test-1.cilium.rocks
INFO Login to the console with user: "kubeadmin", and password: "<REDACTED>"
INFO Time elapsed: 32m9s
Accessing the cluster
To access the cluster you will need to use the kubeconfig file from the ${CLUSTER_NAME}/auth directory:
export KUBECONFIG="${CLUSTER_NAME}/auth/kubeconfig"
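As a quick check that the kubeconfig works and Cilium is healthy (this assumes the default cilium namespace used by the cilium-olm manifests):
kubectl get nodes
kubectl -n cilium get pods -o wide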
Prepare cluster for Cilium connectivity test
In order for the Cilium connectivity test pods to run on OpenShift, a simple custom SecurityContextConstraints object is required. It allows the hostPort/hostNetwork access that some of the connectivity test pods rely on; it sets only allowHostPorts and allowHostNetwork, without any other privileges.
kubectl apply -f - <<EOF
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: cilium-test
allowHostPorts: true
allowHostNetwork: true
users:
- system:serviceaccount:cilium-test:default
priority: null
readOnlyRootFilesystem: false
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
volumes: null
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostPID: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities: null
defaultAddCapabilities: null
requiredDropCapabilities: null
groups: null
EOF
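You can confirm the object was created with (an optional check):
kubectl get scc cilium-test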
Deploy the connectivity test
You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this.
kubectl create ns cilium-test
Deploy the check with:
kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.11/examples/kubernetes/connectivity-check/connectivity-check.yaml
This deploys a series of deployments that use various connectivity paths to reach each other. The paths include combinations with and without service load-balancing and with various network policies. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test:
$ kubectl get pods -n cilium-test
NAME READY STATUS RESTARTS AGE
echo-a-76c5d9bd76-q8d99 1/1 Running 0 66s
echo-b-795c4b4f76-9wrrx 1/1 Running 0 66s
echo-b-host-6b7fc94b7c-xtsff 1/1 Running 0 66s
host-to-b-multi-node-clusterip-85476cd779-bpg4b 1/1 Running 0 66s
host-to-b-multi-node-headless-dc6c44cb5-8jdz8 1/1 Running 0 65s
pod-to-a-79546bc469-rl2qq 1/1 Running 0 66s
pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p 1/1 Running 0 66s
pod-to-a-denied-cnp-6967cb6f7f-7h9fn 1/1 Running 0 66s
pod-to-b-intra-node-nodeport-9b487cf89-6ptrt 1/1 Running 0 65s
pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw 1/1 Running 0 66s
pod-to-b-multi-node-headless-7d44b85d69-mtscc 1/1 Running 0 66s
pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82 1/1 Running 0 65s
pod-to-external-1111-d56f47579-d79dz 1/1 Running 0 66s
pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7 1/1 Running 0 66s
Note
If you deploy the connectivity check to a single-node cluster, pods that check multi-node functionality will remain in the Pending state. This is expected since these pods need at least 2 nodes to be scheduled successfully.
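To confirm that only the multi-node pods are affected, you can list the pending pods (an optional check):
kubectl -n cilium-test get pods --field-selector=status.phase=Pending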
Once done with the test, remove the cilium-test namespace:
kubectl delete ns cilium-test
Cleanup after connectivity test
Remove the SecurityContextConstraints:
kubectl delete scc cilium-test
Delete the cluster
openshift-install destroy cluster --dir="${CLUSTER_NAME}"