Prerequisites
Installer-provisioned installation of OKD requires:

- One provisioner node with Fedora CoreOS (FCOS) installed. The provisioner can be removed after installation.
- Three control plane nodes.
- Baseboard management controller (BMC) access to each node.
- At least one network:
  - One required routable network
  - One optional provisioning network
  - One optional management network
Before starting an installer-provisioned installation of OKD, ensure the hardware environment meets the following requirements.
Node requirements
Installer-provisioned installation involves a number of hardware node requirements:
CPU architecture: All nodes must use the `x86_64` CPU architecture.

Similar nodes: Red Hat recommends that nodes have an identical configuration per role. That is, Red Hat recommends nodes be the same brand and model with the same CPU, memory, and storage configuration.

Baseboard Management Controller: The provisioner node must be able to access the baseboard management controller (BMC) of each OKD cluster node. You may use IPMI, Redfish, or a proprietary protocol.

Latest generation: Nodes must be of the most recent generation. Installer-provisioned installation relies on BMC protocols, which must be compatible across nodes. Additionally, Fedora CoreOS (FCOS) ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support FCOS for the provisioner node and for the control plane and worker nodes.

Registry node: (Optional) If setting up a disconnected mirrored registry, it is recommended that the registry reside on its own node.

Provisioner node: Installer-provisioned installation requires one provisioner node.

Control plane: Installer-provisioned installation requires three control plane nodes for high availability. You can deploy an OKD cluster with only three control plane nodes by making the control plane nodes schedulable as worker nodes. Smaller clusters are more resource efficient for administrators and developers during development, production, and testing.

Worker nodes: While not required, a typical production cluster has two or more worker nodes.
Do not deploy a cluster with only one worker node, because the cluster will deploy with routers and ingress traffic in a degraded state.
Network interfaces: Each node must have at least one network interface for the routable `baremetal` network. Each node must also have one network interface for the `provisioning` network when using the `provisioning` network for deployment. Using the `provisioning` network is the default configuration.

Unified Extensible Firmware Interface (UEFI): Installer-provisioned installation requires UEFI boot on all OKD nodes when using IPv6 addressing on the `provisioning` network. In addition, UEFI Device PXE Settings must be set to use the IPv6 protocol on the `provisioning` network NIC. Omitting the `provisioning` network removes this requirement.

When starting the installation from virtual media such as an ISO image, delete all old UEFI boot table entries. If the boot table includes entries that are not generic entries provided by the firmware, the installation might fail.
Secure Boot: Many production scenarios require nodes with Secure Boot enabled to verify that the node boots only with trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. You can deploy with Secure Boot manually or with managed Secure Boot.

Manually: To deploy an OKD cluster with Secure Boot manually, you must enable UEFI boot mode and Secure Boot on each control plane node and each worker node. Red Hat supports manually enabled UEFI and Secure Boot only when installer-provisioned installations use Redfish virtual media. See “Configuring nodes for Secure Boot manually” in the “Configuring nodes” section for additional details.
Managed: To deploy an OKD cluster with managed Secure Boot, you must set the `bootMode` value to `UEFISecureBoot` in the `install-config.yaml` file. Red Hat only supports installer-provisioned installation with managed Secure Boot on 10th generation HPE hardware and 13th generation Dell hardware running firmware version `2.75.75.75` or greater. Deploying with managed Secure Boot does not require Redfish virtual media. See “Configuring managed Secure Boot” in the “Setting up the environment for an OpenShift installation” section for details.

Red Hat does not support Secure Boot with self-generated keys.
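As a minimal sketch, managed Secure Boot is set per host in the `install-config.yaml` file; the host name, MAC address, BMC address, and credentials below are placeholders, not values from this document:

```yaml
platform:
  baremetal:
    hosts:
      - name: openshift-master-0                  # placeholder host name
        role: master
        bootMode: UEFISecureBoot                  # enable managed Secure Boot for this host
        bootMACAddress: 00:11:22:33:44:55         # placeholder MAC address
        bmc:
          address: idrac-virtualmedia://192.0.2.75/redfish/v1/Systems/System.Embedded.1
          username: admin
          password: changeme
```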
Firmware requirements for installing with virtual media
The installer for installer-provisioned OKD clusters validates the hardware and firmware compatibility with Redfish virtual media. The following table lists supported firmware for installer-provisioned OKD clusters deployed with Redfish virtual media.
Hardware | Model | Management | Firmware Versions |
---|---|---|---|
HP | 10th Generation | iLO5 | N/A |
Dell | 14th Generation | iDRAC 9 | v4.20.20.20 - 04.40.00.00 |
Dell | 13th Generation | iDRAC 8 | v2.75.75.75+ |
See the hardware documentation for the nodes or contact the hardware vendor for information on updating the firmware. For HP servers, Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media. For Dell servers, ensure that the OKD cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration → Virtual Media → Attach Mode → AutoAttach. With iDRAC 9 firmware version 04.40.00.00, the virtual console plug-in defaults to eHTML5, which causes problems with the virtual media workflow; set the plug-in to HTML5 to avoid this issue.
The installer does not initiate installation on a node if the node firmware is below the versions listed in the preceding table when installing with virtual media.
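For reference, a host deployed with Redfish virtual media uses a virtual media scheme in its BMC address in `install-config.yaml`. The following is a sketch with placeholder values; the exact Redfish system path depends on the vendor:

```yaml
platform:
  baremetal:
    hosts:
      - name: openshift-worker-0                   # placeholder host name
        role: worker
        bootMACAddress: 00:11:22:33:44:66          # placeholder MAC address
        bmc:
          # HPE iLO 5 style path shown; Dell iDRAC 9 typically uses
          # idrac-virtualmedia://<bmc_ip>/redfish/v1/Systems/System.Embedded.1
          address: redfish-virtualmedia://192.0.2.80/redfish/v1/Systems/1
          username: admin
          password: changeme
          disableCertificateVerification: true     # only if the BMC presents a self-signed certificate
```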
Network requirements
Installer-provisioned installation of OKD involves several network requirements. First, installer-provisioned installation involves an optional non-routable `provisioning` network for provisioning the operating system on each bare metal node. Second, installer-provisioned installation involves a routable `baremetal` network.
Increase the network MTU
Before deploying OKD, increase the network maximum transmission unit (MTU) to 1500 or more. If the MTU is lower than 1500, the Ironic image that is used to boot the node might fail to communicate with the Ironic inspector pod, and inspection will fail. If this occurs, installation stops because the nodes are not available for installation.
Configuring NICs
OKD deploys with two networks:

`provisioning`: The `provisioning` network is an optional non-routable network used for provisioning the underlying operating system on each node that is a part of the OKD cluster. The network interface for the `provisioning` network on each cluster node must have the BIOS or UEFI configured to PXE boot.

The `provisioningNetworkInterface` configuration setting specifies the `provisioning` network NIC name, which must be identical on the control plane nodes. The `bootMACAddress` configuration setting provides a means to specify a particular NIC on each node for the `provisioning` network. A configuration sketch showing both settings follows the note below.

The `provisioning` network is optional, but it is required for PXE booting. If you deploy without a `provisioning` network, you must use a virtual media BMC addressing option such as `redfish-virtualmedia` or `idrac-virtualmedia`.

`baremetal`: The `baremetal` network is a routable network. You can use any NIC to interface with the `baremetal` network, provided the NIC is not configured to use the `provisioning` network.
When using a VLAN, each NIC must be on a separate VLAN corresponding to the appropriate network.
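The following `install-config.yaml` fragment is a minimal sketch of the settings mentioned above; the interface name, host name, and MAC address are placeholders:

```yaml
platform:
  baremetal:
    provisioningNetworkInterface: enp1s0    # provisioning NIC name on the control plane nodes (placeholder)
    hosts:
      - name: openshift-master-0            # placeholder host name
        role: master
        bootMACAddress: 00:11:22:33:44:55   # MAC address of this node's provisioning NIC (placeholder)
```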
DNS requirements
Clients access the OKD cluster nodes over the `baremetal` network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name.
<cluster_name>.<base_domain>
For example:
test-cluster.example.com
OKD includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS.
In OKD deployments, DNS name resolution is required for the following components:
The Kubernetes API
The OKD application wildcard ingress API
A/AAAA records are used for name resolution and PTR records are used for reverse name resolution. Fedora CoreOS (FCOS) uses the reverse records or DHCP to set the hostnames for all the nodes.
Installer-provisioned installation includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. In each record, `<cluster_name>` is the cluster name and `<base_domain>` is the base domain that you specify in the `install-config.yaml` file. A complete DNS record takes the form: `<component>.<cluster_name>.<base_domain>.`.
Component | Record | Description |
---|---|---|
Kubernetes API | api.<cluster_name>.<base_domain>. | An A/AAAA record and a PTR record identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
Routes | *.apps.<cluster_name>.<base_domain>. | The wildcard A/AAAA record refers to the application ingress load balancer. The application ingress load balancer targets the nodes that run the Ingress Controller pods. The Ingress Controller pods run on the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, the console route `console-openshift-console.apps.<cluster_name>.<base_domain>` matches this wildcard record. |
You can use the `dig` command to verify name resolution for each of these records.
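As an illustration only, for the example cluster name `test-cluster.example.com` the two required records might look like the following; the IP addresses are placeholders from the documentation address range:

```text
api.test-cluster.example.com.     IN A  192.0.2.5   ; API virtual IP
*.apps.test-cluster.example.com.  IN A  192.0.2.6   ; wildcard ingress virtual IP
```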
Dynamic Host Configuration Protocol (DHCP) requirements
By default, installer-provisioned installation deploys `ironic-dnsmasq` with DHCP enabled for the `provisioning` network. No other DHCP servers should be running on the `provisioning` network when the `provisioningNetwork` configuration setting is set to `managed`, which is the default value. If you have a DHCP server running on the `provisioning` network, you must set the `provisioningNetwork` configuration setting to `unmanaged` in the `install-config.yaml` file.
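For example, a minimal sketch of that setting in `install-config.yaml`; check the install-config reference for your release for the exact accepted value casing:

```yaml
platform:
  baremetal:
    # Set to "Unmanaged" when an existing DHCP server serves the provisioning network.
    # The default, "Managed", has the installer run its own DHCP service on this network.
    provisioningNetwork: "Unmanaged"
```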
Network administrators must reserve IP addresses for each node in the OKD cluster for the `baremetal` network on an external DHCP server.
Reserving IP addresses for nodes with the DHCP server
For the `baremetal` network, a network administrator must reserve a number of IP addresses, including:

- Two unique virtual IP addresses:
  - One virtual IP address for the API endpoint.
  - One virtual IP address for the wildcard ingress endpoint.
- One IP address for the provisioner node.
- One IP address for each control plane (master) node.
- One IP address for each worker node, if applicable.
Reserving IP addresses so they become static IP addresses: Some administrators prefer to use static IP addresses so that each node’s IP address remains constant in the absence of a DHCP server. To configure static IP addresses with NMState, see “(Optional) Configuring host network interfaces” in the “Setting up the environment for an OpenShift installation” section.
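The following is a minimal sketch of such a static IP configuration supplied through a host’s `networkConfig` section in `install-config.yaml`, using NMState syntax; the interface name, addresses, gateway, and DNS server are placeholders:

```yaml
platform:
  baremetal:
    hosts:
      - name: openshift-master-0              # placeholder host name
        role: master
        bootMACAddress: 00:11:22:33:44:55     # placeholder MAC address
        networkConfig:                        # NMState-formatted host network configuration
          interfaces:
            - name: enp2s0                    # placeholder baremetal NIC name
              type: ethernet
              state: up
              ipv4:
                enabled: true
                dhcp: false
                address:
                  - ip: 192.0.2.20            # placeholder static IP
                    prefix-length: 24
          dns-resolver:
            config:
              server:
                - 192.0.2.1                   # placeholder DNS server
          routes:
            config:
              - destination: 0.0.0.0/0
                next-hop-address: 192.0.2.1   # placeholder gateway
                next-hop-interface: enp2s0
```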
Networking between external load balancers and control plane nodes: External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes.
The storage interface requires a DHCP reservation or a static IP.
The following table provides an example of fully qualified domain names. The API and Nameserver addresses begin with canonical name extensions. The hostnames of the control plane and worker nodes are examples, so you can use any host naming convention you prefer.
Usage | Host Name | IP |
---|---|---|
API | api.<cluster_name>.<base_domain> | <ip> |
Ingress LB (apps) | *.apps.<cluster_name>.<base_domain> | <ip> |
Provisioner node | provisioner.<cluster_name>.<base_domain> | <ip> |
Master-0 | openshift-master-0.<cluster_name>.<base_domain> | <ip> |
Master-1 | openshift-master-1.<cluster_name>.<base_domain> | <ip> |
Master-2 | openshift-master-2.<cluster_name>.<base_domain> | <ip> |
Worker-0 | openshift-worker-0.<cluster_name>.<base_domain> | <ip> |
Worker-1 | openshift-worker-1.<cluster_name>.<base_domain> | <ip> |
Worker-n | openshift-worker-n.<cluster_name>.<base_domain> | <ip> |
If you do not create DHCP reservations, the installer requires reverse DNS resolution to set the hostnames for the Kubernetes API node, the provisioner node, the control plane nodes, and the worker nodes.
Network Time Protocol (NTP)
Each OKD node in the cluster must have access to an NTP server. OKD nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync.
Define a consistent clock date and time format in each cluster node’s BIOS settings, or installation might fail.
You can reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes.
Port access for the out-of-band management IP address
The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the provisioner during installation, the out-of-band management IP address must be granted access to port `80` on the bootstrap host and port `6180` on the OKD control plane hosts.
Configuring nodes
Configuring nodes when using the `provisioning` network
Each node in the cluster requires the following configuration for proper installation.
A mismatch between nodes will cause an installation failure.
While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs. In the following table, NIC1 is a non-routable network (`provisioning`) that is only used for the installation of the OKD cluster.
NIC | Network | VLAN |
---|---|---|
NIC1 | `provisioning` | <provisioning_vlan> |
NIC2 | `baremetal` | <baremetal_vlan> |
The Fedora CoreOS (FCOS) installation process on the provisioner node might vary. To install FCOS using a local Satellite server or a PXE server, PXE-enable NIC2.
PXE | Boot order |
---|---|
NIC1 PXE-enabled | 1 |
NIC2 | 2 |
Ensure PXE is disabled on all other NICs.
Configure the control plane and worker nodes as follows:
PXE | Boot order |
---|---|
NIC1 PXE-enabled (provisioning network) | 1 |
Configuring nodes without the `provisioning` network
The installation process requires one NIC:
NIC | Network | VLAN |
---|---|---|
NICx | `baremetal` | <baremetal_vlan> |
NICx is a routable network (`baremetal`) that is used for the installation of the OKD cluster, and it is routable to the internet.
The `provisioning` network is optional, but it is required for PXE booting. If you deploy without a `provisioning` network, you must use a virtual media BMC addressing option such as `redfish-virtualmedia` or `idrac-virtualmedia`.
Configuring nodes for Secure Boot manually
Secure Boot prevents a node from booting unless it verifies the node is using only trusted software, such as UEFI firmware drivers, EFI applications, and the operating system.
Red Hat only supports manually configured Secure Boot when deploying with Redfish virtual media.
To enable Secure Boot manually, refer to the hardware guide for the node and execute the following:
Procedure
Boot the node and enter the BIOS menu.

Set the node’s boot mode to `UEFI Enabled`.

Enable Secure Boot.
Red Hat does not support Secure Boot with self-generated keys.
Configuring the Compatibility Support Module for Fujitsu iRMC
The Compatibility Support Module (CSM) configuration provides support for legacy BIOS backward compatibility with UEFI systems. You must configure the CSM when you deploy a cluster with Fujitsu iRMC, otherwise the installation might fail.
For information about configuring the CSM for your specific node type, refer to the hardware guide for the node.
Prerequisites
- Ensure that you have disabled Secure Boot Control. You can disable the feature under Security → Secure Boot Configuration → Secure Boot Control.
Procedure
Boot the node and select the BIOS menu.
Under the Advanced tab, select CSM Configuration from the list.
Enable the Launch CSM option and set the following values:
Item | Value |
---|---|
Boot option filter | UEFI and Legacy |
Launch PXE OpROM Policy | UEFI only |
Launch Storage OpROM policy | UEFI only |
Other PCI device ROM priority | UEFI only |
Out-of-band management
Nodes will typically have an additional NIC used by the baseboard management controllers (BMCs). These BMCs must be accessible from the provisioner node.
Each node must be accessible via out-of-band management. When using an out-of-band management network, the provisioner node requires access to the out-of-band management network for a successful OKD installation.
The out-of-band management setup is out of scope for this document. We recommend setting up a separate management network for out-of-band management. However, using the provisioning network or the baremetal network is also a valid option.
Required data for installation
Prior to the installation of the OKD cluster, gather the following information from all cluster nodes:
- Out-of-band management IP. Examples:
  - Dell (iDRAC) IP
  - HP (iLO) IP
  - Fujitsu (iRMC) IP
When using the `provisioning` network

- NIC (`provisioning`) MAC address
- NIC (`baremetal`) MAC address

When omitting the `provisioning` network

- NIC (`baremetal`) MAC address
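As a hedged illustration of where this gathered data ends up, each value maps to a host entry in the `install-config.yaml` file; all names, addresses, and credentials below are placeholders:

```yaml
platform:
  baremetal:
    hosts:
      - name: openshift-worker-0             # placeholder host name
        role: worker
        bootMACAddress: 00:11:22:33:44:77    # MAC of the provisioning NIC (or the baremetal NIC when omitting the provisioning network)
        bmc:
          address: ipmi://192.0.2.90         # out-of-band management IP (iDRAC, iLO, or iRMC)
          username: admin                    # placeholder BMC credentials
          password: changeme
```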
Validation checklist for nodes
When using the `provisioning` network

- NIC1 VLAN is configured for the `provisioning` network.
- NIC1 for the `provisioning` network is PXE-enabled on the provisioner, control plane (master), and worker nodes.
- NIC2 VLAN is configured for the `baremetal` network.
- PXE has been disabled on all other NICs.
- DNS is configured with API and Ingress endpoints.
- Control plane and worker nodes are configured.
- All nodes are accessible via out-of-band management.
- (Optional) A separate management network has been created.
- Required data for installation has been gathered.
When omitting the `provisioning` network

- NIC1 VLAN is configured for the `baremetal` network.
- DNS is configured with API and Ingress endpoints.
- Control plane and worker nodes are configured.
- All nodes are accessible via out-of-band management.
- (Optional) A separate management network has been created.
- Required data for installation has been gathered.