Prerequisites
Installer-provisioned installation of OKD requires:
One provisioner node with Fedora CoreOS (FCOS) installed.
Three control plane nodes.
Baseboard Management Controller (BMC) access to each node.
At least one network:
One required routable network
One optional network for provisioning nodes; and
One optional management network.
Before starting an installer-provisioned installation of OKD, ensure the hardware environment meets the following requirements.
Node requirements
Installer-provisioned installation involves a number of hardware node requirements:
CPU architecture: All nodes must use the `x86_64` CPU architecture.
Similar nodes: Red Hat recommends that nodes have an identical configuration per role; that is, the same brand and model with the same CPU, memory, and storage configuration.
Baseboard Management Controller: The provisioner node must be able to access the baseboard management controller (BMC) of each OKD cluster node. You may use IPMI, Redfish, or a proprietary protocol.
Latest generation: Nodes must be of the most recent generation. Installer-provisioned installation relies on BMC protocols, which must be compatible across nodes. Additionally, Fedora CoreOS (FCOS) ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support FCOS for the provisioner node and FCOS for the control plane and worker nodes.
Registry node: Optional: If setting up a disconnected mirrored registry, it is recommended that the registry reside on its own node.
Provisioner node: Installer-provisioned installation requires one provisioner node.
Control plane: Installer-provisioned installation requires three control plane nodes for high availability.
Worker nodes: While not required, a typical production cluster has one or more worker nodes. Smaller clusters are more resource efficient for administrators and developers during development and testing.
Network interfaces: Each node must have at least one 10 GB network interface for the routable `baremetal` network. Each node must also have one 10 GB network interface for a `provisioning` network when using the `provisioning` network for deployment. Using the `provisioning` network is the default configuration. Network interface names must follow the same naming convention across all nodes. For example, the first NIC name on a node, such as `eth0` or `eno1`, must be the same name on all of the other nodes. The same principle applies to the remaining NICs on each node.
Unified Extensible Firmware Interface (UEFI): Installer-provisioned installation requires UEFI boot on all OKD nodes when using IPv6 addressing on the `provisioning` network. In addition, UEFI Device PXE Settings must be set to use the IPv6 protocol on the `provisioning` network NIC. Omitting the `provisioning` network removes this requirement.
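The following shell snippet is an optional, minimal sketch for sanity-checking two of these requirements on a node that is already booted into a Linux environment: whether the node booted with UEFI, and which NIC names it reports. It assumes `bash` and the `iproute2` tools are available; it is not part of the installation procedure itself.
```bash
# A populated /sys/firmware/efi directory indicates the node booted in UEFI mode.
if [ -d /sys/firmware/efi ]; then
  echo "UEFI boot"
else
  echo "Legacy BIOS boot"
fi

# List NIC names so the naming convention (eth0, eno1, ...) can be compared across nodes.
ip -o link show | awk -F': ' '{print $2}'
```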
Network requirements
Installer-provisioned installation of OKD involves several network requirements by default. First, installer-provisioned installation involves a non-routable `provisioning` network for provisioning the operating system on each bare metal node, and a routable `baremetal` network. Since installer-provisioned installation deploys `ironic-dnsmasq`, the networks should have no other DHCP servers running on the same broadcast domain. Network administrators must reserve IP addresses for each node in the OKD cluster.
Network Time Protocol (NTP)
It is recommended that each OKD node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable using DHCP. While installation without an NTP server is possible, unsynchronized clocks across nodes can cause errors. Using an NTP server prevents this issue.
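For example, if the `baremetal` network is served by an ISC `dhcpd` server, the NTP server can be advertised to every node with the `ntp-servers` option. This is a minimal sketch under that assumption; the subnet, range, and addresses are placeholders rather than values from this document.
```
subnet 192.0.2.0 netmask 255.255.255.0 {
  range 192.0.2.100 192.0.2.150;   # address pool for the baremetal network (placeholder)
  option routers 192.0.2.1;        # default gateway (placeholder)
  option ntp-servers 192.0.2.10;   # NTP server reachable by every cluster node (placeholder)
}
```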
Configuring NICs
OKD deploys with two networks:
`provisioning`: The `provisioning` network is an optional non-routable network used for provisioning the underlying operating system on each node that is part of the OKD cluster. When deploying using the `provisioning` network, the first NIC on each node, such as `eth0` or `eno1`, must interface with the `provisioning` network.
`baremetal`: The `baremetal` network is a routable network. When deploying using the `provisioning` network, the second NIC on each node, such as `eth1` or `eno2`, must interface with the `baremetal` network. When deploying without a `provisioning` network, you can use any NIC on each node to interface with the `baremetal` network.
Each NIC should be on a separate VLAN corresponding to the appropriate network.
Configuring the DNS server
Clients access the OKD cluster nodes over the `baremetal` network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name:
`<cluster-name>.<domain-name>`
For example:
`test-cluster.example.com`
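As an illustration only, the corresponding records in a BIND-style zone might look like the following fragment; the addresses are placeholders and correspond to the reservations described in the next section. Any DNS server that can serve the subdomain works equally well.
```
; Example records for the test-cluster.example.com subdomain (placeholder addresses)
api.test-cluster.example.com.                 IN A 192.0.2.5
*.apps.test-cluster.example.com.              IN A 192.0.2.6
provisioner.test-cluster.example.com.         IN A 192.0.2.10
openshift-master-0.test-cluster.example.com.  IN A 192.0.2.20
```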
Reserving IP addresses for nodes with the DHCP server
For the `baremetal` network, a network administrator must reserve a number of IP addresses, including:
Two virtual IP addresses:
One IP address for the API endpoint.
One IP address for the wildcard Ingress endpoint.
One IP address for the provisioner node.
One IP address for each control plane (master) node.
One IP address for each worker node, if applicable.
The following table provides an example of fully-qualified domain names. The API and Nameserver addresses begin with canonical name extensions. The hostnames of the control plane and worker nodes are examples, so you can use any host naming convention you prefer. An example DHCP reservation for the node addresses follows the table.
Usage | Hostname | IP |
---|---|---|
API | api.<cluster-name>.<domain> | <ip> |
Ingress LB (apps) | *.apps.<cluster-name>.<domain> | <ip> |
Provisioner node | provisioner.<cluster-name>.<domain> | <ip> |
Master-0 | openshift-master-0.<cluster-name>.<domain> | <ip> |
Master-1 | openshift-master-1.<cluster-name>.<domain> | <ip> |
Master-2 | openshift-master-2.<cluster-name>.<domain> | <ip> |
Worker-0 | openshift-worker-0.<cluster-name>.<domain> | <ip> |
Worker-1 | openshift-worker-1.<cluster-name>.<domain> | <ip> |
Worker-n | openshift-worker-n.<cluster-name>.<domain> | <ip> |
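If the `baremetal` network is served by an ISC `dhcpd` server, the per-node reservations can be expressed as `host` entries keyed on each node's `baremetal` NIC MAC address. This is a hedged sketch: the MAC addresses and IP addresses are placeholders, and the two virtual IP addresses are simply kept outside the DHCP pool rather than reserved for a specific host.
```
host openshift-master-0 {
  hardware ethernet 52:54:00:aa:bb:01;   # baremetal NIC MAC address (placeholder)
  fixed-address 192.0.2.20;              # reserved IP for this node (placeholder)
  option host-name "openshift-master-0.test-cluster.example.com";
}
host openshift-worker-0 {
  hardware ethernet 52:54:00:aa:bb:02;   # placeholder MAC
  fixed-address 192.0.2.30;              # placeholder IP
  option host-name "openshift-worker-0.test-cluster.example.com";
}
```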
Additional requirements with no provisioning network
All installer-provisioned installations require a `baremetal` network. The `baremetal` network is a routable network used for external network access to the outside world. In addition to the IP address supplied to the OKD cluster node, installations without a `provisioning` network require the following:
Setting an available IP address from the `baremetal` network to the `bootstrapProvisioningIP` configuration setting within the `install-config.yaml` configuration file.
Setting an available IP address from the `baremetal` network to the `provisioningHostIP` configuration setting within the `install-config.yaml` configuration file, as shown in the sketch after this list.
Deploying the OKD cluster using Redfish Virtual Media/iDRAC Virtual Media.
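A minimal sketch of these two settings in `install-config.yaml`, assuming placeholder addresses from the `baremetal` network; the rest of the file is omitted.
```yaml
platform:
  baremetal:
    # Unused addresses from the baremetal network (placeholders shown); both
    # settings are only required when the provisioning network is omitted.
    bootstrapProvisioningIP: 192.0.2.60
    provisioningHostIP: 192.0.2.61
```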
Configuring nodes
Configuring nodes when using the `provisioning` network
Each node in the cluster requires the following configuration for proper installation.
A mismatch between nodes will cause an installation failure.
While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs:
NIC | Network | VLAN |
---|---|---|
NIC1 | `provisioning` | <provisioning-vlan> |
NIC2 | `baremetal` | <baremetal-vlan> |
NIC1 is a non-routable network (`provisioning`) that is only used for the installation of the OKD cluster.
The Fedora CoreOS (FCOS) installation process on the provisioner node might vary. To install FCOS using a local Satellite server or a PXE server, PXE-enable NIC2.
PXE | Boot order |
---|---|
NIC1 PXE-enabled | 1 |
NIC2 | 2 |
Ensure PXE is disabled on all other NICs.
Configure the control plane and worker nodes as follows:
PXE | Boot order |
---|---|
NIC1 PXE-enabled (provisioning network) | 1 |
Configuring nodes without the `provisioning` network
The installation process requires one NIC:
NIC | Network | VLAN |
---|---|---|
NICx | `baremetal` | <baremetal-vlan> |
NICx is a routable network (`baremetal`) that is used for the installation of the OKD cluster, and routable to the Internet.
Out-of-band management
Nodes will typically have an additional NIC used by the Baseboard Management Controllers (BMCs). These BMCs must be accessible from the provisioner node.
Each node must be accessible via out-of-band management. When using an out-of-band management network, the provisioner node requires access to the out-of-band management network for a successful OKD 4 installation.
The out-of-band management setup is out of scope for this document. We recommend setting up a separate management network for out-of-band management. However, using the `provisioning` network or the `baremetal` network is a valid option.
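As an optional sanity check, BMC reachability from the provisioner node can be verified with a tool such as `ipmitool` when the BMC speaks IPMI. This is a hedged example; the address and credentials are placeholders, and Redfish-based BMCs can be checked against their HTTPS endpoint instead.
```bash
# Query one node's power state through its BMC over IPMI (lanplus interface).
# Replace the address, user, and password with the node's actual BMC details.
ipmitool -I lanplus -H 192.0.2.201 -U admin -P changeme power status
```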
Required data for installation
Prior to the installation of the OKD cluster, gather the following information from all cluster nodes:
Out-of-band management IP
Examples:
Dell (iDRAC) IP
HP (iLO) IP
When using the `provisioning` network:
NIC1 (`provisioning`) MAC address
NIC2 (`baremetal`) MAC address
When omitting the `provisioning` network:
NICx (`baremetal`) MAC address
The sketch after this list shows where these values appear in `install-config.yaml`.
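As an example of where this data ends up, a `hosts` entry in `install-config.yaml` might look like the following sketch; the node name, BMC address, credentials, and MAC address are placeholders, and the `ipmi://` address form assumes an IPMI-capable BMC.
```yaml
platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: ipmi://192.0.2.201      # out-of-band management IP (placeholder)
          username: admin                  # BMC credentials (placeholders)
          password: changeme
        bootMACAddress: 52:54:00:aa:bb:01  # NIC1 (provisioning) MAC address when using the provisioning network
```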
Validation checklist for nodes
When using the `provisioning` network
NIC1 VLAN is configured for the `provisioning` network.
NIC2 VLAN is configured for the `baremetal` network.
NIC1 is PXE-enabled on the provisioner, control plane (master), and worker nodes.
PXE has been disabled on all other NICs.
Control plane and worker nodes are configured.
All nodes accessible via out-of-band management.
A separate management network has been created. (optional)
Required data for installation has been gathered.
When omitting the `provisioning` network
NICx VLAN is configured for the `baremetal` network.
Control plane and worker nodes are configured.
All nodes accessible via out-of-band management.
A separate management network has been created. (optional)
Required data for installation has been gathered.