Planning your installation
You install OKD by running a series of Ansible playbooks. As you prepare to install your cluster, you create an inventory file that represents your environment and OKD cluster configuration. While familiarity with Ansible might make this process easier, it is not required.
You can read more about Ansible and its basic usage in the official documentation.
Initial planning
Before you install your production OKD cluster, you need answers to the following questions:
Do you install on-premise or in a public or private cloud? The Installation Methods section provides more information about the available cloud provider options.
How many pods are required in your cluster? The Sizing Considerations section provides limits for nodes and pods so you can calculate how large your environment needs to be.
How many hosts do you require in the cluster? The Environment Scenarios section provides multiple examples of Single Master and Multiple Master configurations.
Do you need a high availability cluster? High availability configurations improve fault tolerance. In this situation, you might use the Multiple Masters Using Native HA example to set up your environment.
Is cluster monitoring required? The monitoring stack requires additional system resources. Note that the monitoring stack is installed by default. See the cluster monitoring documentation for more information.
Do you want to use Red Hat Enterprise Linux (RHEL) or RHEL Atomic Host as the operating system for your cluster nodes? If you install OKD on RHEL, you use an RPM-based installation. On RHEL Atomic Host, you use a system container. Both installation types provide a working OKD environment.
Which identity provider do you use for authentication? If you already use a supported identity provider, configure OKD to use that identity provider during installation. The sketch after this list shows how decisions like these map to inventory variables.
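Several of these planning decisions translate directly into Ansible inventory variables. The following excerpt is illustrative only and assumes variable names from openshift-ansible 3.x (openshift_deployment_type, openshift_cluster_monitoring_operator_install, openshift_master_identity_providers); verify them against your installer release before use:

```
# Hypothetical planning excerpt from an inventory file.
[OSEv3:vars]
# Install OKD (origin) rather than OpenShift Container Platform.
openshift_deployment_type=origin

# The monitoring stack installs by default; set to false to skip it
# if your hosts cannot spare the additional resources.
openshift_cluster_monitoring_operator_install=true

# Example identity provider (HTPasswd); replace with the supported
# provider that your organization already uses.
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
```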
On-premise versus cloud providers
You can install OKD on-premise or host it on public or private clouds. You can use the provided Ansible playbooks to help you automate the provisioning and installation processes. For information, see Running Installation Playbooks.
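If you target a cloud provider, you typically also declare that provider in your inventory so that the playbooks can enable the cloud integration. The following is a minimal sketch, assuming AWS and the openshift_cloudprovider_* variable names from openshift-ansible 3.x; confirm the names for your release and provider:

```
# Hypothetical cloud-provider excerpt; adjust for your provider and release.
[OSEv3:vars]
openshift_cloudprovider_kind=aws
openshift_cloudprovider_aws_access_key=<aws_access_key_id>
openshift_cloudprovider_aws_secret_key=<aws_secret_access_key>
```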
Limitations and Considerations for Installations on IBM POWER
As of version 3.10.45, you can install OKD on IBM POWER servers.
Your cluster must use only Power nodes and masters. Because of the way that images are tagged, OKD cannot differentiate between x86 images and Power images.
Image streams and templates are not installed by default or updated when you upgrade. You can manually install and update the image streams.
You can install only on on-premise Power servers. You cannot install OKD on nodes in any cloud provider.
Not all storage providers are supported. You can use only the following storage providers:
GlusterFS
NFS
Local storage
Sizing considerations
Determine how many nodes and pods you require for your OKD cluster. Cluster scalability correlates to the number of pods in a cluster environment. That number influences the other numbers in your setup. See Cluster Limits for the latest limits for objects in OKD.
Environment scenarios
Use these environment scenarios to help plan your OKD cluster based on your sizing needs.
Moving from a single master cluster to multiple masters after installation is not supported.
In all environments, if your etcd hosts are co-located with master hosts, etcd runs as a static pod on the host. If your etcd hosts are not co-located with master hosts, etcd runs as standalone processes on those hosts.
If you use RHEL Atomic Host, you can configure etcd only on master hosts.
Single master and node on one system
You can install OKD on a single system only for a development environment. You cannot use an all-in-one environment as a production environment.
Single master and multiple nodes
The following table describes an example environment for a single master (with etcd installed on the same host) and two nodes. A minimal inventory sketch for this layout follows the table:
| Host Name | Infrastructure Component to Install |
|---|---|
| master.example.com | Master, etcd, and node |
| node1.example.com | Node |
| node2.example.com | Node |
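The following inventory sketch is illustrative only; it assumes the standard openshift-ansible group names and the 3.10+ node group names (node-config-master-infra, node-config-compute), which you should adjust for your environment:

```
[OSEv3:children]
masters
etcd
nodes

[OSEv3:vars]
ansible_user=root
openshift_deployment_type=origin

[masters]
master.example.com

# etcd runs on the master host in this scenario.
[etcd]
master.example.com

[nodes]
# The master host also runs as a node; node group names are illustrative.
master.example.com openshift_node_group_name='node-config-master-infra'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
```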
Multiple masters using native HA
The following table describes an example environment for three masters, one HAProxy load balancer, and two nodes that use the native HA method. etcd runs as static pods on the master nodes. An example inventory sketch follows the table.
Routers and master nodes must be load balanced to have a highly available and fault-tolerant environment. Red Hat recommends the use of an enterprise-grade external load balancer for production environments. This load balancing applies to the masters and to the nodes that host the OKD routers. Transmission Control Protocol (TCP) layer 4 load balancing, in which the load is spread across IP addresses, is recommended. See External Load Balancer Integrations with OpenShift Enterprise 3 for a reference design; that design is not recommended for production use cases.
| Host Name | Infrastructure Component to Install |
|---|---|
| master1.example.com | Master (clustered using native HA), node, and clustered etcd |
| master2.example.com | Master (clustered using native HA), node, and clustered etcd |
| master3.example.com | Master (clustered using native HA), node, and clustered etcd |
| lb.example.com | HAProxy to load balance API master endpoints |
| node1.example.com | Node |
| node2.example.com | Node |
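An illustrative inventory sketch for this topology follows. It assumes the native HA variables (openshift_master_cluster_method, openshift_master_cluster_hostname, openshift_master_cluster_public_hostname) and node group names from openshift-ansible 3.x, with hypothetical cluster host names:

```
[OSEv3:children]
masters
etcd
lb
nodes

[OSEv3:vars]
ansible_user=root
openshift_deployment_type=origin
openshift_master_cluster_method=native
# Hypothetical host names for the load-balanced API endpoints.
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

[masters]
master1.example.com
master2.example.com
master3.example.com

# etcd is clustered on the master hosts in this scenario.
[etcd]
master1.example.com
master2.example.com
master3.example.com

[lb]
lb.example.com

[nodes]
# The masters also run as nodes; node group names are illustrative.
master1.example.com openshift_node_group_name='node-config-master-infra'
master2.example.com openshift_node_group_name='node-config-master-infra'
master3.example.com openshift_node_group_name='node-config-master-infra'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
```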
Multiple masters using native HA with external clustered etcd
The following table describes an example environment for three masters, one HAProxy load balancer, three external clustered etcd hosts, and two nodes that use the native HA method. The inventory differs from the previous sketch mainly in the etcd group, as shown after the table.
| Host Name | Infrastructure Component to Install |
|---|---|
| master1.example.com | Master (clustered using native HA) and node |
| master2.example.com | Master (clustered using native HA) and node |
| master3.example.com | Master (clustered using native HA) and node |
| lb.example.com | HAProxy to load balance API master endpoints |
| etcd1.example.com | Clustered etcd |
| etcd2.example.com | Clustered etcd |
| etcd3.example.com | Clustered etcd |
| node1.example.com | Node |
| node2.example.com | Node |
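The inventory for this scenario differs from the previous sketch mainly in the etcd group, which lists the dedicated etcd hosts instead of the masters:

```
# Only the group that changes from the previous sketch is shown.
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com
```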
Stand-alone registry
You can also install OKD to act as a stand-alone registry that uses OKD's integrated registry. See Installing a Stand-alone Registry for details on this scenario; a minimal inventory hint follows.
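You select the stand-alone registry scenario in the inventory rather than with a separate installer. A minimal hint, assuming the openshift_deployment_subtype variable from openshift-ansible 3.x; see Installing a Stand-alone Registry for the full inventory:

```
# Hypothetical excerpt for a stand-alone registry install.
[OSEv3:vars]
openshift_deployment_type=origin
openshift_deployment_subtype=registry
```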
Installation types for supported operating systems
Starting in OKD 3.10, if you use RHEL as the underlying OS for a host, the RPM method is used to install OKD components on that host. If you use RHEL Atomic Host, the system container method is used on that host. Either installation type provides the same functionality for the cluster, but the operating system you use determines how you manage services and host updates.
An RPM installation installs all services through package management and configures services to run in the same user space, while a system container installation installs services using system container images and runs separate services in individual containers.
When you use RPMs on RHEL, package management installs and updates all services from an outside source, and these packages modify the host's existing configuration in the same user space. With system container installations on RHEL Atomic Host, each component of OKD ships as a container in a self-contained package that uses the host's kernel to run. Updated, newer containers replace any existing ones on your host.
The following table and sections outline further differences between the installation types:
| | Red Hat Enterprise Linux | RHEL Atomic Host |
|---|---|---|
| Installation Type | RPM-based | System container |
| Delivery Mechanism | RPM packages using yum | System container images using docker |
| Service Management | systemd | systemd |
Required images for system containers
The system container installation type makes use of the following images:
- openshift/origin-node
If you need to use a private registry to pull these images during the installation, you can specify the registry information ahead of time. Set the following Ansible variables in your inventory file, as required:
oreg_url='<registry_hostname>/openshift/origin-${component}:${version}'
openshift_docker_insecure_registries=<registry_hostname>
openshift_docker_blocked_registries=<registry_hostname>
You can also set the openshift_docker_insecure_registries variable to the IP address of the host. 0.0.0.0/0 is not a valid setting.
The default component inherits the image prefix and version from the oreg_url value.
The configuration of additional, insecure, and blocked container registries occurs at the beginning of the installation process to ensure that these settings are applied before attempting to pull any of the required images.
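For example, if the required images are mirrored to a private registry at registry.example.com:5000 (a hypothetical host name), the inventory entries might look like the following; the installer expands the ${component} and ${version} placeholders:

```
# Hypothetical values for illustration only.
oreg_url='registry.example.com:5000/openshift/origin-${component}:${version}'
openshift_docker_insecure_registries=registry.example.com:5000
```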
systemd service names
The installation process creates relevant systemd units, which you can use to start, stop, and poll services with normal systemctl commands. For system container installations, these unit names match those of an RPM installation.
File path locations
All OKD configuration files are placed in the same locations during container-based installations as during RPM-based installations, and the files survive OSTree upgrades.
However, the default image stream and template files are installed at /etc/origin/examples/ for Atomic Host installations rather than the standard /usr/share/openshift/examples/ because that directory is read-only on RHEL Atomic Host.
Storage requirements
RHEL Atomic Host installations normally have a very small root file system. However, the etcd, master, and node containers persist data in the /var/lib/ directory. Ensure that you have enough space on the root file system before installing OKD. See the System Requirements section for details.