Recommended Installation Practices
Pre-installing Dependencies
A node host accesses the network to install any RPM dependencies, such as atomic-openshift-*, iptables, and docker. Pre-installing these dependencies creates a more efficient install, because the RPMs are only accessed when necessary, instead of a number of times per host during the install. This is also useful for machines that cannot access the registry for security reasons.
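For example, the dependencies can be installed ahead of time with the package manager on each node host. This is a minimal sketch; the exact package set depends on your release and inventory:
# Pre-install common dependencies on each node host before running the
# installer, so the RPMs are fetched once rather than during the install.
# The package list is illustrative; adjust it to match your release.
yum install -y docker iptables iptables-services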
Ansible Install Optimization
The OKD install method uses Ansible. Ansible runs operations in parallel, which makes for a fast and efficient installation. However, performance can be improved further with additional tuning options. See the Configuring Ansible section for a list of available Ansible configuration options.
Parallel behavior can overwhelm a content source, such as your image registry or Red Hat Satellite server. Preparing your server's infrastructure pod images and operating system patches in advance can help prevent this issue.
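One way to prepare the content source is to pre-pull core infrastructure images on each node host before the install, so that parallel hosts do not all hit the registry at once. This is a sketch only; the image name and tag below are illustrative assumptions, not values from this guide:
# Pre-pull an infrastructure image on each node host ahead of the install.
# The image name and tag are examples only; use the ones for your release.
docker pull docker.io/openshift/origin-pod:v3.11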
Run the installer from a control node with the lowest possible latency to the cluster (LAN speeds). Running the installation over a wide area network (WAN) is not advised, nor is running it over a lossy network connection.
Ansible provides its own guidance for performance and scaling, including using RHEL 6.6 or later to ensure the version of OpenSSH supports ControlPersist, and running the installer from the same LAN as the cluster, but not from a machine in the cluster.
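A quick way to confirm LAN-class latency from the control node is a simple round-trip check before starting the install; the hostname below is a placeholder:
# Confirm low round-trip latency from the control node to a cluster host
# (hostname is a placeholder; substitute a real cluster host).
ping -c 5 master1.example.com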
The following is an example Ansible configuration for large cluster installation and administration that incorporates the recommendations documented by Ansible:
# cat /etc/ansible/ansible.cfg
# config file for ansible -- http://ansible.com/
# ==============================================
[defaults]
forks = 20 (1)
host_key_checking = False
remote_user = root
roles_path = roles/
gathering = smart
fact_caching = jsonfile
fact_caching_connection = $HOME/ansible/facts
fact_caching_timeout = 600
log_path = $HOME/ansible.log
nocows = 1
callback_whitelist = profile_tasks
[privilege_escalation]
become = False
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=600s -o ServerAliveInterval=60
control_path = %(directory)s/%%h-%%r
pipelining = True (2)
timeout = 10
(1) 20 forks is ideal, because a higher fork count can lead to installations failing.
(2) Pipelining reduces the number of connections between control and target nodes, helping to improve installer performance.
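After editing the file, you can confirm that the settings are being picked up with the standard ansible-config utility (available in Ansible 2.4 and later):
# Show only the settings that differ from Ansible's defaults, to confirm
# that /etc/ansible/ansible.cfg is in effect.
ansible-config dump --only-changed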
Networking Considerations
Network subnets can be changed after installation, but only with difficulty. It is much easier to consider the network subnet size before installation, because underestimating the size can create problems as the cluster grows.
See the Network Optimization topic for recommended network subnetting practices.
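As a sketch, the cluster network sizes can be set in the Ansible inventory at install time; the CIDR values below are illustrative examples, not recommendations from this guide:
# Illustrative [OSEv3:vars] inventory settings; the CIDRs are examples only.
# Size the pod and service networks generously, since resizing later is hard.
[OSEv3:vars]
osm_cluster_network_cidr=10.128.0.0/14
openshift_portal_net=172.30.0.0/16
osm_host_subnet_length=9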