Installer-provisioned post-installation configuration
After successfully deploying an installer-provisioned cluster, consider the following post-installation procedures.
Optional: Configuring NTP for disconnected clusters
OKD installs the chrony Network Time Protocol (NTP) service on the cluster nodes. Use the following procedure to configure NTP servers on the control plane nodes and configure worker nodes as NTP clients of the control plane nodes after a successful deployment.
OKD nodes must agree on a date and time to run properly. Because worker nodes retrieve the date and time from the NTP servers on the control plane nodes, you can install and operate clusters that are not connected to a routable network and therefore do not have access to a higher stratum NTP server.
Procedure
Create a Butane config, 99-master-chrony-conf-override.bu, including the contents of the chrony.conf file for the control plane nodes. See "Creating machine configs with Butane" for information about Butane.
Butane config example
variant: openshift
version: 4.13.0
metadata:
  name: 99-master-chrony-conf-override
  labels:
    machineconfiguration.openshift.io/role: master
storage:
  files:
    - path: /etc/chrony.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          # Use public servers from the pool.ntp.org project.
          # Please consider joining the pool (https://www.pool.ntp.org/join.html).
          # The Machine Config Operator manages this file
          server openshift-master-0.<cluster-name>.<domain> iburst (1)
          server openshift-master-1.<cluster-name>.<domain> iburst
          server openshift-master-2.<cluster-name>.<domain> iburst
          stratumweight 0
          driftfile /var/lib/chrony/drift
          rtcsync
          makestep 10 3
          bindcmdaddress 127.0.0.1
          bindcmdaddress ::1
          keyfile /etc/chrony.keys
          commandkey 1
          generatecommandkey
          noclientlog
          logchange 0.5
          logdir /var/log/chrony
          # Configure the control plane nodes to serve as local NTP servers
          # for all worker nodes, even if they are not in sync with an
          # upstream NTP server.
          # Allow NTP client access from the local network.
          allow all
          # Serve time even if not synchronized to a time source.
          local stratum 3 orphan
1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml, containing the configuration to be delivered to the control plane nodes:
$ butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml
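Optionally, inspect the generated file to confirm that it targets the master role. This is a quick, optional sanity check using grep:
$ grep 'machineconfiguration.openshift.io/role' 99-master-chrony-conf-override.yaml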
Create a Butane config, 99-worker-chrony-conf-override.bu, including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes.
Butane config example
variant: openshift
version: 4.13.0
metadata:
  name: 99-worker-chrony-conf-override
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
    - path: /etc/chrony.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          # The Machine Config Operator manages this file.
          server openshift-master-0.<cluster-name>.<domain> iburst (1)
          server openshift-master-1.<cluster-name>.<domain> iburst
          server openshift-master-2.<cluster-name>.<domain> iburst
          stratumweight 0
          driftfile /var/lib/chrony/drift
          rtcsync
          makestep 10 3
          bindcmdaddress 127.0.0.1
          bindcmdaddress ::1
          keyfile /etc/chrony.keys
          commandkey 1
          generatecommandkey
          noclientlog
          logchange 0.5
          logdir /var/log/chrony
1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml, containing the configuration to be delivered to the worker nodes:
$ butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml
Apply the 99-master-chrony-conf-override.yaml policy to the control plane nodes:
$ oc apply -f 99-master-chrony-conf-override.yaml
Example output
machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created
Apply the 99-worker-chrony-conf-override.yaml policy to the worker nodes:
$ oc apply -f 99-worker-chrony-conf-override.yaml
Example output
machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created
Check the status of the applied NTP settings.
$ oc describe machineconfigpool
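Optionally, confirm that a worker node now uses the control plane nodes as its time sources by running chronyc through a debug pod. This is a sketch; <worker_node_name> is a placeholder for one of your worker node names:
$ oc debug node/<worker_node_name> -- chroot /host chronyc sources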
Enabling a provisioning network after installation
The assisted installer and installer-provisioned installation for bare metal clusters provide the ability to deploy a cluster without a provisioning network. This capability is useful for scenarios such as proof-of-concept clusters, or deploying exclusively with Redfish virtual media when each node's baseboard management controller is routable via the baremetal network.
You can enable a provisioning network after installation by using the Cluster Baremetal Operator (CBO).
Prerequisites
A dedicated physical network must exist, connected to all worker and control plane nodes.
You must isolate the native, untagged physical network.
The network cannot have a DHCP server when the provisioningNetwork configuration setting is set to Managed.
You can omit the provisioningInterface setting in OKD 4.10 to use the bootMACAddress configuration setting.
Procedure
When setting the provisioningInterface setting, first identify the provisioning interface name for the cluster nodes. For example, eth0 or eno1.
Enable the Preboot eXecution Environment (PXE) on the provisioning network interface of the cluster nodes.
Retrieve the current state of the provisioning network and save it to a provisioning custom resource (CR) file:
$ oc get provisioning -o yaml > enable-provisioning-nw.yaml
Modify the provisioning CR file:
$ vim ~/enable-provisioning-nw.yaml
Scroll down to the provisioningNetwork configuration setting and change it from Disabled to Managed. Then, add the provisioningIP, provisioningNetworkCIDR, provisioningDHCPRange, provisioningInterface, and watchAllNameSpaces configuration settings after the provisioningNetwork setting. Provide appropriate values for each setting.
apiVersion: v1
items:
- apiVersion: metal3.io/v1alpha1
  kind: Provisioning
  metadata:
    name: provisioning-configuration
  spec:
    provisioningNetwork: (1)
    provisioningIP: (2)
    provisioningNetworkCIDR: (3)
    provisioningDHCPRange: (4)
    provisioningInterface: (5)
    watchAllNameSpaces: (6)
1 The provisioningNetwork is one of Managed, Unmanaged, or Disabled. When set to Managed, Metal3 manages the provisioning network and the CBO deploys the Metal3 pod with a configured DHCP server. When set to Unmanaged, the system administrator configures the DHCP server manually.
2 The provisioningIP is the static IP address that the DHCP server and ironic use to provision the network. This static IP address must be within the provisioning subnet, and outside of the DHCP range. If you configure this setting, it must have a valid IP address even if the provisioning network is Disabled. The static IP address is bound to the metal3 pod. If the metal3 pod fails and moves to another server, the static IP address also moves to the new server.
3 The Classless Inter-Domain Routing (CIDR) address. If you configure this setting, it must have a valid CIDR address even if the provisioning network is Disabled. For example: 192.168.0.1/24.
4 The DHCP range. This setting is only applicable to a Managed provisioning network. Omit this configuration setting if the provisioning network is Disabled. For example: 192.168.0.64, 192.168.0.253.
5 The NIC name for the provisioning interface on cluster nodes. The provisioningInterface setting is only applicable to Managed and Unmanaged provisioning networks. Omit the provisioningInterface configuration setting if the provisioning network is Disabled. Omit the provisioningInterface configuration setting to use the bootMACAddress configuration setting instead.
6 Set this setting to true if you want metal3 to watch namespaces other than the default openshift-machine-api namespace. The default value is false.
Save the changes to the provisioning CR file.
Apply the provisioning CR file to the cluster:
$ oc apply -f enable-provisioning-nw.yaml
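As an optional check, confirm that the setting now reports Managed. This sketch assumes the provisioning-configuration resource name shown in the CR above:
$ oc get provisioning provisioning-configuration -o jsonpath='{.spec.provisioningNetwork}{"\n"}'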
Configuring an external load balancer
You can configure an OKD cluster to use an external load balancer in place of the default load balancer.
You can also configure an OKD cluster to use an external load balancer that supports multiple subnets. If you use multiple subnets, you can explicitly list all the IP addresses in any networks that are used by your load balancer targets. This configuration can reduce maintenance overhead because you can create and destroy nodes within those networks without reconfiguring the load balancer targets.
If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28, you can simplify your load balancer targets.
You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet.
Prerequisites
On your load balancer, TCP over ports 6443, 443, and 80 must be reachable by all users of your system that are located outside the cluster.
Load balance the application ports, 443 and 80, between all the compute nodes.
Load balance the API port, 6443, between each of the control plane nodes.
On your load balancer, port 22623, which is used to serve Ignition startup configurations to nodes, is not exposed outside of the cluster.
Your load balancer can access the required ports on each node in your cluster. You can ensure this level of access by completing the following actions:
The API load balancer can access ports 22623 and 6443 on the control plane nodes.
The ingress load balancer can access ports 443 and 80 on the nodes where the ingress pods are located.
External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes.
Procedure
Enable access to the cluster from your load balancer on ports 6443, 443, and 80.
As an example, note this HAProxy configuration:
A section of a sample HAProxy configuration
...
listen my-cluster-api-6443
  bind 0.0.0.0:6443
  mode tcp
  balance roundrobin
  server my-cluster-master-2 192.0.2.2:6443 check
  server my-cluster-master-0 192.0.2.3:6443 check
  server my-cluster-master-1 192.0.2.1:6443 check

listen my-cluster-apps-443
  bind 0.0.0.0:443
  mode tcp
  balance roundrobin
  server my-cluster-worker-0 192.0.2.6:443 check
  server my-cluster-worker-1 192.0.2.5:443 check
  server my-cluster-worker-2 192.0.2.4:443 check

listen my-cluster-apps-80
  bind 0.0.0.0:80
  mode tcp
  balance roundrobin
  server my-cluster-worker-0 192.0.2.7:80 check
  server my-cluster-worker-1 192.0.2.9:80 check
  server my-cluster-worker-2 192.0.2.8:80 check
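If you manage the HAProxy service yourself, you can validate the configuration and reload the service after editing it. This sketch assumes the default configuration path /etc/haproxy/haproxy.cfg and a systemd-managed haproxy unit:
$ haproxy -c -f /etc/haproxy/haproxy.cfg
$ systemctl reload haproxy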
Add records to your DNS server for the cluster API and apps over the load balancer. For example:
<load_balancer_ip_address> api.<cluster_name>.<base_domain>
<load_balancer_ip_address> apps.<cluster_name>.<base_domain>
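Optionally, confirm that the records resolve to the load balancer address before testing with curl. This is a simple check using the placeholder names from the previous step:
$ dig +short api.<cluster_name>.<base_domain>
$ dig +short apps.<cluster_name>.<base_domain>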
From a command line, use curl to verify that the external load balancer and DNS configuration are operational.
Verify that the cluster API is accessible:
$ curl https://<loadbalancer_ip_address>:6443/version --insecure
If the configuration is correct, you receive a JSON object in response:
{
  "major": "1",
  "minor": "11+",
  "gitVersion": "v1.11.0+ad103ed",
  "gitCommit": "ad103ed",
  "gitTreeState": "clean",
  "buildDate": "2019-01-09T06:44:10Z",
  "goVersion": "go1.10.3",
  "compiler": "gc",
  "platform": "linux/amd64"
}
Verify that cluster applications are accessible:
You can also verify application accessibility by opening the OKD console in a web browser.
$ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
If the configuration is correct, you receive an HTTP response:
HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.<cluster-name>.<base domain>/
cache-control: no-cache

HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Tue, 17 Nov 2020 08:42:10 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None
cache-control: private