- Setting up the environment for an OKD installation
- Preparing the provisioner node on IBM Cloud® Bare Metal (Classic) infrastructure
- Configuring the public subnet
- Retrieving the OKD installer
- Extracting the OKD installer
- Configuring the install-config.yaml file
- Additional install-config parameters
- Root device hints
- Creating the OKD manifests
- Deploying the cluster via the OKD installer
- Following the installation
Setting up the environment for an OKD installation
Preparing the provisioner node on IBM Cloud® Bare Metal (Classic) infrastructure
Perform the following steps to prepare the provisioner node.
Procedure
Log in to the provisioner node via ssh.
Create a non-root user (kni) and provide that user with sudo privileges:
# useradd kni
# passwd kni
# echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni
# chmod 0440 /etc/sudoers.d/kni
Create an ssh key for the new user:
# su - kni -c "ssh-keygen -f /home/kni/.ssh/id_rsa -N ''"
Log in as the new user on the provisioner node:
# su - kni
Use Red Hat Subscription Manager to register the provisioner node:
$ sudo subscription-manager register --username=<user> --password=<pass> --auto-attach
$ sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms \
--enable=rhel-8-for-x86_64-baseos-rpms
For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager.
Install the following packages:
$ sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool
Add the newly created user to the libvirt group:
$ sudo usermod --append --groups libvirt kni
Start firewalld:
$ sudo systemctl start firewalld
Enable firewalld:
$ sudo systemctl enable firewalld
Start the http service:
$ sudo firewall-cmd --zone=public --add-service=http --permanent
$ sudo firewall-cmd --reload
Start and enable the libvirtd service:
$ sudo systemctl enable libvirtd --now
Set the ID of the provisioner node:
$ PRVN_HOST_ID=<ID>
You can view the ID with the following ibmcloud command:
$ ibmcloud sl hardware list
Set the ID of the public subnet:
$ PUBLICSUBNETID=<ID>
You can view the ID with the following ibmcloud command:
$ ibmcloud sl subnet list
Set the ID of the private subnet:
$ PRIVSUBNETID=<ID>
You can view the ID with the following ibmcloud command:
$ ibmcloud sl subnet list
Set the provisioner node public IP address:
$ PRVN_PUB_IP=$(ibmcloud sl hardware detail $PRVN_HOST_ID --output JSON | jq .primaryIpAddress -r)
Set the CIDR for the public network:
$ PUBLICCIDR=$(ibmcloud sl subnet detail $PUBLICSUBNETID --output JSON | jq .cidr)
Set the IP address and CIDR for the public network:
$ PUB_IP_CIDR=$PRVN_PUB_IP/$PUBLICCIDR
Set the gateway for the public network:
$ PUB_GATEWAY=$(ibmcloud sl subnet detail $PUBLICSUBNETID --output JSON | jq .gateway -r)
Set the private IP address of the provisioner node:
$ PRVN_PRIV_IP=$(ibmcloud sl hardware detail $PRVN_HOST_ID --output JSON | \
jq .primaryBackendIpAddress -r)
Set the CIDR for the private network:
$ PRIVCIDR=$(ibmcloud sl subnet detail $PRIVSUBNETID --output JSON | jq .cidr)
Set the IP address and CIDR for the private network:
$ PRIV_IP_CIDR=$PRVN_PRIV_IP/$PRIVCIDR
Set the gateway for the private network:
$ PRIV_GATEWAY=$(ibmcloud sl subnet detail $PRIVSUBNETID --output JSON | jq .gateway -r)
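Optionally, echo the derived values before creating the bridges to confirm that each ibmcloud lookup succeeded; empty or null output usually means one of the IDs was set incorrectly. This check is not part of the original procedure.
$ echo "$PUB_IP_CIDR $PUB_GATEWAY"
$ echo "$PRIV_IP_CIDR $PRIV_GATEWAY"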
Set up the bridges for the baremetal and provisioning networks:
$ sudo nohup bash -c "
nmcli --get-values UUID con show | xargs -n 1 nmcli con delete
nmcli connection add ifname provisioning type bridge con-name provisioning
nmcli con add type bridge-slave ifname eth1 master provisioning
nmcli connection add ifname baremetal type bridge con-name baremetal
nmcli con add type bridge-slave ifname eth2 master baremetal
nmcli connection modify baremetal ipv4.addresses $PUB_IP_CIDR ipv4.method manual ipv4.gateway $PUB_GATEWAY
nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,$PRIV_IP_CIDR ipv4.method manual
nmcli connection modify provisioning +ipv4.routes \"10.0.0.0/8 $PRIV_GATEWAY\"
nmcli con down baremetal
nmcli con up baremetal
nmcli con down provisioning
nmcli con up provisioning
init 6
"
For eth1 and eth2, substitute the appropriate interface name, as needed.
If required, SSH back into the provisioner node:
# ssh kni@provisioner.<cluster-name>.<domain>
Verify the connection bridges have been properly created:
$ sudo nmcli con show
Example output
NAME UUID TYPE DEVICE
baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal
provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning
virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0
bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1
bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2
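Optionally, confirm that the baremetal bridge holds the expected public address and that the default route points at the public gateway; this check is not part of the original procedure.
$ ip addr show baremetal
$ ip route show default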
Create a pull-secret.txt file:
$ vim pull-secret.txt
In a web browser, navigate to Install on Bare Metal with user-provisioned infrastructure. In step 1, click Download pull secret. Paste the contents into the pull-secret.txt file and save the contents in the kni user’s home directory.
Configuring the public subnet
All of the OKD cluster nodes must be on the public subnet. IBM Cloud® Bare Metal (Classic) does not provide a DHCP server on the public subnet, so you must set one up on the provisioner node.
Rebooting the provisioner node after preparing it deletes the BASH variables that you previously set, so you must set them again before proceeding.
Procedure
Install dnsmasq:
$ sudo dnf install dnsmasq
Open the dnsmasq configuration file:
$ sudo vi /etc/dnsmasq.conf
Add the following configuration to the dnsmasq configuration file:
interface=baremetal
except-interface=lo
bind-dynamic
log-dhcp
dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> (1)
dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> (2)
dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile
1 Set the DHCP range. Replace both instances of <ip_addr> with one unused IP address from the public subnet so that the dhcp-range for the baremetal network begins and ends with the same IP address. Replace <pub_cidr> with the CIDR of the public subnet.
2 Set the DHCP option. Replace <pub_gateway> with the IP address of the gateway for the baremetal network. Replace <prvn_priv_ip> with the provisioner node’s private IP address on the provisioning network. Replace <prvn_pub_ip> with the provisioner node’s public IP address on the baremetal network.
To retrieve the value for <pub_cidr>, execute:
$ ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr
Replace <publicsubnetid> with the ID of the public subnet.
To retrieve the value for <pub_gateway>, execute:
$ ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r
Replace <publicsubnetid> with the ID of the public subnet.
To retrieve the value for <prvn_priv_ip>, execute:
$ ibmcloud sl hardware detail <id> --output JSON | \
jq .primaryBackendIpAddress -r
Replace <id> with the ID of the provisioner node.
To retrieve the value for <prvn_pub_ip>, execute:
$ ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r
Replace <id> with the ID of the provisioner node.
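After substituting all of the placeholders, the dhcp-range and dhcp-option lines resemble the following sketch. Every address below is illustrative only and is not taken from this procedure; use the values you retrieved for your own subnets and provisioner node.
dhcp-range=141.125.65.212,141.125.65.212,29
dhcp-option=baremetal,121,0.0.0.0/0,141.125.65.209,10.196.130.140,141.125.65.210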
Obtain the list of hardware for the cluster:
$ ibmcloud sl hardware list
Obtain the MAC addresses and IP addresses for each node:
$ ibmcloud sl hardware detail <id> --output JSON | \
jq '.networkComponents[] | "\(.primaryIpAddress) \(.macAddress)"' | grep -v null
Replace <id> with the ID of the node.
Example output
"10.196.130.144 00:e0:ed:6a:ca:b4"
"141.125.65.215 00:e0:ed:6a:ca:b5"
Make a note of the MAC address and IP address of the public network. Make a separate note of the MAC address of the private network, which you will use later in the install-config.yaml file. Repeat this procedure for each node until you have all the public MAC and IP addresses for the public baremetal network, and the MAC addresses of the private provisioning network.
Add the MAC and IP address pair of the public baremetal network for each node into the dnsmasq.hostsfile file:
$ sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile
Example input
00:e0:ed:6a:ca:b5,141.125.65.215,master-0
<mac>,<ip>,master-1
<mac>,<ip>,master-2
<mac>,<ip>,worker-0
<mac>,<ip>,worker-1
...
Replace <mac>,<ip> with the public MAC address and public IP address of the corresponding node name.
Start dnsmasq:
$ sudo systemctl start dnsmasq
Enable dnsmasq so that it starts when booting the node:
$ sudo systemctl enable dnsmasq
Verify dnsmasq is running:
$ sudo systemctl status dnsmasq
Example output
● dnsmasq.service - DNS caching server.
Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago
Main PID: 3101 (dnsmasq)
Tasks: 1 (limit: 204038)
Memory: 732.0K
CGroup: /system.slice/dnsmasq.service
└─3101 /usr/sbin/dnsmasq -k
Open ports 53 and 67 with the UDP protocol:
$ sudo firewall-cmd --add-port 53/udp --permanent
$ sudo firewall-cmd --add-port 67/udp --permanent
Add provisioning to the external zone with masquerade:
$ sudo firewall-cmd --change-zone=provisioning --zone=external --permanent
This step ensures network address translation for IPMI calls to the management subnet.
Reload the firewalld configuration:
$ sudo firewall-cmd --reload
Retrieving the OKD installer
Use the stable-4.x version of the installation program and your selected architecture to deploy the generally available stable version of OKD:
$ export VERSION=stable-4
$ export RELEASE_ARCH=<architecture>
$ export RELEASE_IMAGE=$(curl -s https://mirror.openshift.com/pub/openshift-v4/$RELEASE_ARCH/clients/ocp/$VERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print $3}')
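Optionally, confirm that the release image resolved before continuing; empty output usually indicates a typo in the VERSION or RELEASE_ARCH values. This check is not part of the original procedure.
$ echo $RELEASE_IMAGE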
Extracting the OKD installer
After retrieving the installer, the next step is to extract it.
Procedure
Set the environment variables:
$ export cmd=openshift-baremetal-install
$ export pullsecret_file=~/pull-secret.txt
$ export extract_dir=$(pwd)
Get the oc binary:
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux.tar.gz | tar zxvf - oc
Extract the installer:
$ sudo cp oc /usr/local/bin
$ oc adm release extract --registry-config "${pullsecret_file}" --command=$cmd --to "${extract_dir}" ${RELEASE_IMAGE}
$ sudo cp openshift-baremetal-install /usr/local/bin
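Optionally, confirm that the extracted installer runs and reports its version; this check is not part of the original procedure.
$ openshift-baremetal-install version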
Configuring the install-config.yaml file
The install-config.yaml file requires some additional details. Most of the information teaches the installer and the resulting cluster enough about the available IBM Cloud® Bare Metal (Classic) hardware so that it can fully manage it. The material difference between installing on bare metal and installing on IBM Cloud® Bare Metal (Classic) is that you must explicitly set the privilege level for IPMI in the BMC section of the install-config.yaml file.
Procedure
Configure install-config.yaml. Change the appropriate variables to match the environment, including pullSecret and sshKey.
apiVersion: v1
baseDomain: <domain>
metadata:
  name: <cluster_name>
networking:
  machineNetwork:
  - cidr: <public-cidr>
  networkType: OVNKubernetes
compute:
- name: worker
  replicas: 2
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIP: <api_ip>
    ingressVIP: <wildcard_ip>
    provisioningNetworkInterface: <NIC1>
    provisioningNetworkCIDR: <CIDR>
    hosts:
    - name: openshift-master-0
      role: master
      bmc:
        address: ipmi://10.196.130.145?privilegelevel=OPERATOR (1)
        username: root
        password: <password>
      bootMACAddress: 00:e0:ed:6a:ca:b4 (2)
      rootDeviceHints:
        deviceName: "/dev/sda"
    - name: openshift-worker-0
      role: worker
      bmc:
        address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR (1)
        username: <user>
        password: <password>
      bootMACAddress: <NIC1_mac_address> (2)
      rootDeviceHints:
        deviceName: "/dev/sda"
pullSecret: '<pull_secret>'
sshKey: '<ssh_pub_key>'
1 The bmc.address provides a privilegelevel configuration setting with the value set to OPERATOR. This is required for IBM Cloud® Bare Metal (Classic) infrastructure.
2 Add the MAC address of the private provisioning network NIC for the corresponding node.
You can use the ibmcloud command-line utility to retrieve the password.
$ ibmcloud sl hardware detail <id> --output JSON | \
jq '"\(.networkManagementIpAddress) \(.remoteManagementAccounts[0].password)"'
Replace <id> with the ID of the node.
Create a directory to store the cluster configuration:
$ mkdir ~/clusterconfigs
Copy the install-config.yaml file into the directory:
$ cp install-config.yaml ~/clusterconfigs
Ensure all bare metal nodes are powered off prior to installing the OKD cluster:
$ ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off
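Optionally, confirm the power state of each node with the same credentials; this check is not part of the original procedure.
$ ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power status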
Remove old bootstrap resources if any are left over from a previous deployment attempt:
for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print $2'});
do
sudo virsh destroy $i;
sudo virsh undefine $i;
sudo virsh vol-delete $i --pool $i;
sudo virsh vol-delete $i.ign --pool $i;
sudo virsh pool-destroy $i;
sudo virsh pool-undefine $i;
done
Additional install-config parameters
See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file.
Parameters | Default | Description
---|---|---
baseDomain | | The domain name for the cluster. For example, example.com.
bootMode | UEFI | The boot mode for a node. Options are legacy, UEFI, and UEFISecureBoot.
bootstrapExternalStaticDNS | | The static network DNS of the bootstrap node. This can be useful in environments without a DHCP server.
bootstrapExternalStaticIP | | The static IP address for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network.
bootstrapExternalStaticGateway | | The static IP address of the gateway for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network.
sshKey | | The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane and worker nodes.
pullSecret | | The pullSecret configuration setting contains a copy of the pull secret that you downloaded.
metadata: name: | | The name to be given to the OKD cluster. For example, openshift.
networking: machineNetwork: - cidr: | | The public CIDR (Classless Inter-Domain Routing) of the external network. For example, 10.0.0.0/24.
compute: - name: worker | | The OKD cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes.
compute: replicas: 2 | | Replicas sets the number of worker (or compute) nodes in the OKD cluster.
controlPlane: name: master | | The OKD cluster requires a name for control plane (master) nodes.
controlPlane: replicas: 3 | | Replicas sets the number of control plane (master) nodes included as part of the OKD cluster.
provisioningNetworkInterface | | The name of the network interface on nodes connected to the provisioning network. For OKD 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC.
defaultMachinePlatform | | The default configuration used for machine pools without a platform configuration.
apiVIP | | (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly.
ingressVIP | | (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly.
Parameters | Default | Description
---|---|---
provisioningDHCPRange | 172.22.0.10,172.22.0.100 | Defines the IP range for nodes on the provisioning network.
provisioningNetworkCIDR | 172.22.0.0/24 | The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network.
clusterProvisioningIP | The third IP address of the provisioningNetworkCIDR. | The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3.
bootstrapProvisioningIP | The second IP address of the provisioningNetworkCIDR. | The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2.
externalBridge | baremetal | The name of the bare-metal bridge of the hypervisor attached to the bare-metal network.
provisioningBridge | provisioning | The name of the provisioning bridge on the provisioner host attached to the provisioning network.
architecture | | Defines the host architecture for your cluster. Valid values are x86_64 or aarch64.
defaultMachinePlatform | | The default configuration used for machine pools without a platform configuration.
bootstrapOSImage | | A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image.
provisioningNetwork | | The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network.
httpProxy | | Set this parameter to the appropriate HTTP proxy used within your environment.
httpsProxy | | Set this parameter to the appropriate HTTPS proxy used within your environment.
noProxy | | Set this parameter to the appropriate list of exclusions for proxy usage within your environment.
Hosts
The hosts parameter is a list of separate bare metal assets used to build the cluster.
Name | Default | Description
---|---|---
name | | The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0.
role | | The role of the bare metal node. Either master or worker.
bmc | | Connection details for the baseboard management controller. See the BMC addressing section for additional details.
bootMACAddress | | The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting.
networkConfig | | Set this optional parameter to configure the network interface of a host. See “(Optional) Configuring host network interfaces” for additional details.
Root device hints
The rootDeviceHints parameter enables the installer to provision the Fedora CoreOS (FCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it.
Subfield | Description
---|---
deviceName | A string containing a Linux device name such as /dev/vda. The hint must match the actual value exactly.
hctl | A string containing a SCSI bus address like 0:0:0:0. The hint must match the actual value exactly.
model | A string containing a vendor-specific device identifier. The hint can be a substring of the actual value.
vendor | A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value.
serialNumber | A string containing the device serial number. The hint must match the actual value exactly.
minSizeGigabytes | An integer representing the minimum size of the device in gigabytes.
wwn | A string containing the unique storage identifier. The hint must match the actual value exactly.
wwnWithExtension | A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly.
wwnVendorExtension | A string containing the unique vendor storage identifier. The hint must match the actual value exactly.
rotational | A boolean indicating whether the device should be a rotating disk (true) or not (false).
Example usage
- name: master-0
  role: master
  bmc:
    address: ipmi://10.10.0.3:6203
    username: admin
    password: redhat
  bootMACAddress: de:ad:be:ef:00:40
  rootDeviceHints:
    deviceName: "/dev/sda"
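Because a device must match every hint you provide, you can combine hints to narrow the selection. The following snippet is an illustrative sketch rather than part of the procedure above; it selects a non-rotating disk of at least 500 GB:
rootDeviceHints:
  minSizeGigabytes: 500
  rotational: false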
Creating the OKD manifests
Create the OKD manifests.
$ ./openshift-baremetal-install --dir ~/clusterconfigs create manifests
INFO Consuming Install Config from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated
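Optionally, review the generated assets; the create manifests command populates manifests and openshift subdirectories in the cluster configuration directory. This check is not part of the original procedure.
$ ls ~/clusterconfigs/manifests ~/clusterconfigs/openshift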
Deploying the cluster via the OKD installer
Run the OKD installer:
$ ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster
Following the installation
During the deployment process, you can check the installation’s overall status by running the tail command against the .openshift_install.log log file in the installation directory:
$ tail -f /path/to/install-dir/.openshift_install.log
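If you prefer milestone-based checks over tailing the log, the installer binary also provides wait-for targets; these commands are optional and assume ~/clusterconfigs as the installation directory:
$ ./openshift-baremetal-install --dir ~/clusterconfigs wait-for bootstrap-complete
$ ./openshift-baremetal-install --dir ~/clusterconfigs wait-for install-complete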