Customizing nodes
Although directly making changes to OKD nodes is discouraged, there are times when it is necessary to implement a required low-level security, networking, or performance feature. Direct changes to OKD nodes can be done by:
Creating machine configs that are included in manifest files to start up a cluster during openshift-install.
Creating machine configs that are passed to running OKD nodes via the Machine Config Operator.
The following sections describe features that you might want to configure on your nodes in this way.
Adding day-1 kernel arguments
Although it is often preferable to modify kernel arguments as a day-2 activity, you might want to add kernel arguments to all master or worker nodes during initial cluster installation. Here are some reasons you might want to add kernel arguments during cluster installation so they take effect when the systems first boot up:
You want to disable a feature, such as SELinux, so it has no impact on the systems when they first come up.
You need to do some low-level network configuration before the systems start.
To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup.
For a listing of arguments you can pass to a RHEL 8 kernel at boot time, see Kernel.org kernel parameters. It is best to only add kernel arguments with this procedure if they are needed to complete the initial OKD installation.
Procedure
Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:
$ ./openshift-install create manifests --dir=<installation_directory>
Decide if you want to add kernel arguments to worker or control plane nodes (also known as the master nodes).
In the openshift directory, create a file (for example, 99-openshift-machineconfig-master-kargs.yaml) to define a MachineConfig object to add the kernel settings. This example adds a loglevel=7 kernel argument to control plane nodes:
$ cat << EOF > 99-openshift-machineconfig-master-kargs.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-openshift-machineconfig-master-kargs
spec:
  kernelArguments:
    - 'loglevel=7'
EOF
You can change master to worker to add kernel arguments to worker nodes instead. Create a separate YAML file to add to both master and worker nodes.
You can now continue on to create the cluster.
Adding kernel modules to nodes
For most common hardware, the Linux kernel includes the device driver modules needed to use that hardware when the computer starts up. For some hardware, however, modules are not available in Linux. Therefore, you must find a way to provide those modules to each host computer. This procedure describes how to do that for nodes in an OKD cluster.
When a kernel module is first deployed by following these instructions, the module is made available for the current kernel. If a new kernel is installed, the kmods-via-containers software will rebuild and deploy the module so a compatible version of that module is available with the new kernel.
This feature keeps the module up to date on each node by:
Adding a systemd service to each node that starts at boot time to detect whether a new kernel has been installed
Rebuilding the module and installing it to the kernel when a new kernel is detected
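The per-boot decision can be sketched as shell logic. This is illustrative only, not the project's actual script, and the recorded kernel version here is a hypothetical placeholder:

```shell
# Illustrative sketch of the boot-time check performed for each module.
current_kernel=$(uname -r)
# Hypothetical record of the kernel the module was last built against
built_for="4.18.0-example"
if [ "$current_kernel" != "$built_for" ]; then
  # New kernel detected: rebuild the module in a container, then load it
  action="rebuild-and-load"
else
  # Module already matches the running kernel: just load it
  action="load-existing"
fi
echo "$action"
```

In the real framework this logic runs inside a templated systemd service, one instance per module.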
For information on the software needed for this procedure, see the kmods-via-containers GitHub site.
A few important issues to keep in mind:
This procedure is Technology Preview.
Software tools and examples are not yet available in official RPM form and can only be obtained for now from unofficial github.com sites noted in the procedure.
Third-party kernel modules you might add through these procedures are not supported by Red Hat.
In this procedure, the software needed to build your kernel modules is deployed in a RHEL 8 container. Keep in mind that modules are rebuilt automatically on each node when that node gets a new kernel. For that reason, each node needs access to a yum repository that contains the kernel and related packages needed to rebuild the module. That content is best provided with a valid RHEL subscription.
Building and testing the kernel module container
Before deploying kernel modules to your OKD cluster, you can test the process on a separate RHEL system. Gather the kernel module’s source code, the KVC framework, and the kmods-via-containers software. Then build and test the module. To do that on a RHEL 8 system, do the following:
Procedure
Register a RHEL 8 system:
# subscription-manager register
Attach a subscription to the RHEL 8 system:
# subscription-manager attach --auto
Install software that is required to build the software and container:
# yum install podman make git -y
Clone the kmods-via-containers repository:
Create a folder for the repository:
$ mkdir kmods; cd kmods
Clone the repository:
$ git clone https://github.com/kmods-via-containers/kmods-via-containers
Install a KVC framework instance on your RHEL 8 build host to test the module. This adds a kmods-via-containers systemd service and loads it:
Change to the kmods-via-containers directory:
$ cd kmods-via-containers/
Install the KVC framework instance:
$ sudo make install
Reload the systemd manager configuration:
$ sudo systemctl daemon-reload
Get the kernel module source code. The source code might be used to build a third-party module that you do not have control over, but is supplied by others. You will need content similar to the content shown in the kvc-simple-kmod example, which can be cloned to your system as follows:
$ cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod
Edit the configuration file, simple-kmod.conf in this example, and change the name of the Dockerfile to Dockerfile.rhel:
Change to the kvc-simple-kmod directory:
$ cd kvc-simple-kmod
Review the configuration file:
$ cat simple-kmod.conf
Example configuration file
KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git"
KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel
KMOD_SOFTWARE_VERSION=dd1a7d4
KMOD_NAMES="simple-kmod simple-procfs-kmod"
Create an instance of kmods-via-containers@.service for your kernel module, simple-kmod in this example:
$ sudo make install
Build the kernel module and container image for the running kernel:
$ sudo kmods-via-containers build simple-kmod $(uname -r)
Enable and start the systemd service:
$ sudo systemctl enable kmods-via-containers@simple-kmod.service --now
Review the service status:
$ sudo systemctl status kmods-via-containers@simple-kmod.service
Example output
● kmods-via-containers@simple-kmod.service - Kmods Via Containers - simple-kmod
Loaded: loaded (/etc/systemd/system/kmods-via-containers@.service;
enabled; vendor preset: disabled)
Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago...
To confirm that the kernel modules are loaded, use the lsmod command to list the modules:
$ lsmod | grep simple_
Example output
simple_procfs_kmod 16384 0
simple_kmod 16384 0
Optional. Use other methods to check that the simple-kmod example is working:
Look for a “Hello world” message in the kernel ring buffer with dmesg:
$ dmesg | grep 'Hello world'
Example output
[ 6420.761332] Hello world from simple_kmod.
Check the value of simple-procfs-kmod in /proc:
$ sudo cat /proc/simple-procfs-kmod
Example output
simple-procfs-kmod number = 0
Run the spkut command to get more information from the module:
$ sudo spkut 44
Example output
KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64
Running userspace wrapper using the kernel module container...
+ podman run -i --rm --privileged
simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44
simple-procfs-kmod number = 0
simple-procfs-kmod number = 44
Going forward, when the system boots this service will check if a new kernel is running. If there is a new kernel, the service builds a new version of the kernel module and then loads it. If the module is already built, it will just load it.
Provisioning a kernel module to OKD
Depending on whether or not you must have the kernel module in place when the OKD cluster first boots, you can set up the kernel modules to be deployed in one of two ways:
Provision kernel modules at cluster install time (day-1): You can create the content as a MachineConfig object and provide it to openshift-install by including it with a set of manifest files.
Provision kernel modules via Machine Config Operator (day-2): If you can wait until the cluster is up and running to add your kernel module, you can deploy the kernel module software via the Machine Config Operator (MCO).
In either case, each node needs to be able to get the kernel packages and related software packages at the time that a new kernel is detected. There are a few ways you can set up each node to be able to obtain that content.
Provide RHEL entitlements to each node.
Get RHEL entitlements from an existing RHEL host, from the /etc/pki/entitlement directory, and copy them to the same location as the other files you provide when you build your Ignition config.
Inside the Dockerfile, add pointers to a yum repository containing the kernel and other packages. This must include new kernel packages as they are needed to match newly installed kernels.
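For the second option, the repository pointer added inside the Dockerfile might look like this .repo file. The repository ID, name, and baseurl here are hypothetical placeholders; point baseurl at a mirror that is entitled to serve RHEL 8 BaseOS content, including kernel packages:

```
# Hypothetical repo definition; replace baseurl with your entitled mirror.
[rhel-8-baseos-example]
name=RHEL 8 BaseOS (example mirror)
baseurl=https://repo.example.com/rhel8/BaseOS/x86_64/os/
enabled=1
gpgcheck=1
```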
Provision kernel modules via a MachineConfig object
By packaging kernel module software with a MachineConfig object, you can deliver that software to worker or master nodes at installation time or via the Machine Config Operator.
First create a base Ignition config that you would like to use. At installation time, the Ignition config will contain the ssh public key to add to the authorized_keys file for the core user on the cluster. To add the MachineConfig object later via the MCO instead, the SSH public key is not required. For both types, the example simple-kmod service creates a systemd unit file, which requires a kmods-via-containers@simple-kmod.service.
The systemd unit is a workaround for an upstream bug and makes sure that the kmods-via-containers@simple-kmod.service gets started on boot.
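Written out as a plain unit file, the require-kvc-simple-kmod.service contents that get embedded (escaped) in the Ignition config created later in this procedure read:

```
[Unit]
Requires=kmods-via-containers@simple-kmod.service
[Service]
Type=oneshot
ExecStart=/usr/bin/true

[Install]
WantedBy=multi-user.target
```

The unit does no work itself; its only purpose is the Requires= dependency that pulls in the module-building service at boot.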
Register a RHEL 8 system:
# subscription-manager register
Attach a subscription to the RHEL 8 system:
# subscription-manager attach --auto
Install software needed to build the software:
# yum install podman make git -y
Create an Ignition config file that creates a systemd unit file:
Create a directory to host the Ignition config file:
$ mkdir kmods; cd kmods
Create the Ignition config file that creates a systemd unit file:
$ cat <<EOF > ./baseconfig.ign
{
  "ignition": { "version": "3.1.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "groups": ["sudo"],
        "sshAuthorizedKeys": [
          "ssh-rsa AAAA"
        ]
      }
    ]
  },
  "systemd": {
    "units": [{
      "name": "require-kvc-simple-kmod.service",
      "enabled": true,
      "contents": "[Unit]\nRequires=kmods-via-containers@simple-kmod.service\n[Service]\nType=oneshot\nExecStart=/usr/bin/true\n\n[Install]\nWantedBy=multi-user.target"
    }]
  }
}
EOF
You must add your public SSH key to the baseconfig.ign file to use the file during openshift-install. The public SSH key is not needed if you create the MachineConfig object using the MCO.
Create a base MCO YAML snippet that uses the following configuration:
$ cat <<EOF > mc-base.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 10-kvc-simple-kmod
spec:
  config:
EOF
The mc-base.yaml is set to deploy the kernel module on worker nodes. To deploy on master nodes, change the role from worker to master. To do both, you could repeat the whole procedure using different file names for the two types of deployments.
Get the kmods-via-containers software:
Clone the kmods-via-containers repository:
$ git clone https://github.com/kmods-via-containers/kmods-via-containers
Clone the kvc-simple-kmod repository:
$ git clone https://github.com/kmods-via-containers/kvc-simple-kmod
Get your module software. In this example, kvc-simple-kmod is used.
Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using the repositories cloned earlier:
Create the directory:
$ FAKEROOT=$(mktemp -d)
Change to the kmods-via-containers directory:
$ cd kmods-via-containers
Install the KVC framework instance:
$ make install DESTDIR=${FAKEROOT}/usr/local CONFDIR=${FAKEROOT}/etc/
Change to the kvc-simple-kmod directory:
$ cd ../kvc-simple-kmod
Create the instance:
$ make install DESTDIR=${FAKEROOT}/usr/local CONFDIR=${FAKEROOT}/etc/
Get a tool called filetranspiler and dependent software:
$ cd .. ; sudo yum install -y python3
git clone https://github.com/ashcrow/filetranspiler.git
Generate a final machine config YAML (99-simple-kmod.yaml) and have it include the base Ignition config, base machine config, and the fakeroot directory with files you would like to deliver:
$ ./filetranspiler/filetranspile -i ./baseconfig.ign \
   -f ${FAKEROOT} --format=yaml --dereference-symlinks \
   | sed 's/^/     /' | (cat mc-base.yaml -) > 99-simple-kmod.yaml
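The sed in that pipeline only indents the transpiled Ignition YAML so it parses as a child of the config: key that mc-base.yaml ends with. A toy illustration of the effect, indenting a two-line sample by five spaces:

```shell
# Indent a small YAML sample the same way the pipeline indents the
# filetranspile output before it is appended after mc-base.yaml.
printf 'ignition:\n  version: 3.1.0\n' | sed 's/^/     /'
```

Without the added indentation, the concatenated file would not be valid nested YAML.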
If the cluster is not up yet, generate manifest files and add this file to the openshift directory. If the cluster is already running, apply the file as follows:
$ oc create -f 99-simple-kmod.yaml
Your nodes will start the kmods-via-containers@simple-kmod.service service and the kernel modules will be loaded.
To confirm that the kernel modules are loaded, you can log in to a node (using oc debug node/<openshift-node>, then chroot /host). To list the modules, use the lsmod command:
$ lsmod | grep simple_
Example output
simple_procfs_kmod 16384 0
simple_kmod 16384 0
Encrypting disks during installation
You can enable encryption for the boot disks on the control plane and compute nodes at installation time. OKD supports the Trusted Platform Module (TPM) v2 and Tang encryption modes.
TPM v2: This is the preferred mode. TPM v2 stores passphrases in a secure cryptoprocessor contained within a server. You can use this mode to prevent the boot disk data on a cluster node from being decrypted if the disk is removed from the server.
Tang: Tang and Clevis are server and client components that enable network-bound disk encryption (NBDE). You can bind the boot disk data on your cluster nodes to a Tang server. This prevents the data from being decrypted unless the nodes are on a secure network where the Tang server can be accessed. Clevis is an automated decryption framework that is used to implement the decryption on the client side.
The use of Tang encryption mode to encrypt your disks is only supported for bare metal and vSphere installations on user-provisioned infrastructure.
When the TPM v2 or Tang encryption modes are enabled, the FCOS boot disks are encrypted using the LUKS2 format.
This feature:
Is available for installer-provisioned infrastructure and user-provisioned infrastructure deployments
Is supported on Fedora CoreOS (FCOS) systems only
Sets up disk encryption during the manifest installation phase so all data written to disk, from first boot forward, is encrypted
Encrypts data on the root filesystem only (/dev/mapper/coreos-luks-root on /)
Requires no user intervention for providing passphrases
Uses AES-256-CBC encryption
Follow one of the two procedures to enable disk encryption for the nodes in your cluster.
Enabling TPM v2 disk encryption
Use the following procedure to enable TPM v2 mode disk encryption during an OKD installation.
Prerequisites
You have downloaded the OKD installation program on your installation node.
Procedure
Check to see if TPM v2 encryption needs to be enabled in the BIOS on each node. This is required on most Dell systems. Check the manual for your computer.
On your installation node, change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:
$ ./openshift-install create manifests --dir=<installation_directory> (1)
1 Replace <installation_directory> with the path to the directory that you want to store the installation files in.
Create machine config files to encrypt the boot disks for the control plane or compute nodes using the TPM v2 encryption mode.
To configure encryption on the control plane nodes, save the following machine config sample to a file in the <installation_directory>/openshift directory. For example, name the file 99-openshift-master-tpmv2-encryption.yaml:
:apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: master-tpm
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:text/plain;base64,e30K
        mode: 420
        overwrite: true
        path: /etc/clevis.json
To configure encryption on the compute nodes, save the following machine config sample to a file in the <installation_directory>/openshift directory. For example, name the file 99-openshift-worker-tpmv2-encryption.yaml:
:apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: worker-tpm
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:text/plain;base64,e30K
        mode: 420
        overwrite: true
        path: /etc/clevis.json
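As a sanity check, you can decode the embedded payload on any Linux machine. The e30K string is simply the Base64 encoding of an empty JSON object, the /etc/clevis.json contents that select TPM v2 binding in this procedure:

```shell
# Decode the data URL payload used for /etc/clevis.json in the TPM v2 samples.
decoded=$(echo 'e30K' | base64 -d)
echo "$decoded"
```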
Create a backup copy of the YAML files. The original YAML files are consumed when you create the Ignition config files.
Continue with the remainder of the OKD installation.
Enabling Tang disk encryption
Use the following procedure to enable Tang mode disk encryption during an OKD installation.
Prerequisites
You have downloaded the OKD installation program on your installation node.
You have access to a RHEL 8 machine that can be used to generate a thumbprint of the Tang exchange key.
Procedure
Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions.
Add kernel arguments to configure networking when you do the Fedora CoreOS (FCOS) installations for your cluster. For example, to configure DHCP networking, identify ip=dhcp, or set static networking when you add parameters to the kernel command line. For both DHCP and static networking, you also must provide the rd.neednet=1 kernel argument.
Skipping this step causes the second boot to fail.
Install the clevis package on the RHEL 8 machine, if it is not already installed:
$ sudo yum install clevis
On the RHEL 8 machine, run the following command to generate a thumbprint of the exchange key. Replace http://tang.example.com:7500 with the URL of your Tang server:
$ clevis-encrypt-tang '{"url":"http://tang.example.com:7500"}' < /dev/null > /dev/null (1)
1 In this example, tangd.socket is listening on port 7500 on the Tang server.
The clevis-encrypt-tang command is used in this step only to generate a thumbprint of the exchange key. No data is being passed to the command for encryption at this point, so /dev/null is provided as an input instead of plain text. The encrypted output is also sent to /dev/null, because it is not required for this procedure.
Example output
The advertisement contains the following signing keys:
PLjNyRdGw03zlRoGjQYMahSZGu9 (1)
1 The thumbprint of the exchange key.
When the Do you wish to trust these keys? [ynYN] prompt displays, type Y.
RHEL 8 provides Clevis version 15, which uses the SHA-1 hash algorithm to generate thumbprints. Some other distributions provide Clevis version 17 or later, which use the SHA-256 hash algorithm for thumbprints. You must use a Clevis version that uses SHA-1 to create the thumbprint, to prevent Clevis binding issues when you install Fedora CoreOS (FCOS) on your OKD cluster nodes.
Create a Base64 encoded file, replacing the URL of the Tang server (url) and thumbprint (thp) you just generated:
) you just generated:$ (cat <<EOM
{
"url": "http://tang.example.com:7500", (1)
"thp": "PLjNyRdGw03zlRoGjQYMahSZGu9" (2)
}
EOM
) | base64 -w0
1 Specify the URL of a Tang server. In this example, tangd.socket is listening on port 7500 on the Tang server.
2 Specify the exchange key thumbprint, which was generated in a preceding step.
Example output
ewogInVybCI6ICJodHRwOi8vdGFuZy5leGFtcGxlLmNvbTo3NTAwIiwgCiAidGhwIjogIlBMak55UmRHdzAzemxSb0dqUVlNYWhTWkd1OSIgCn0K
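Before embedding the string in a machine config, you can decode it back to verify it still carries the Tang server URL and thumbprint you supplied (shown here with the example values from this procedure):

```shell
# Decode the Base64 string and print the embedded Clevis configuration.
b64='ewogInVybCI6ICJodHRwOi8vdGFuZy5leGFtcGxlLmNvbTo3NTAwIiwgCiAidGhwIjogIlBMak55UmRHdzAzemxSb0dqUVlNYWhTWkd1OSIgCn0K'
echo "$b64" | base64 -d
```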
If you have not yet generated the Kubernetes manifests, change to the directory that contains the installation program on your installation node and create them:
$ ./openshift-install create manifests --dir=<installation_directory> (1)
1 Replace <installation_directory> with the path to the directory that you want to store the installation files in.
Create machine config files to encrypt the boot disks for the control plane or compute nodes using the Tang encryption mode.
To configure encryption on the control plane nodes, save the following machine config sample to a file in the <installation_directory>/openshift directory. For example, name the file 99-openshift-master-tang-encryption.yaml:
:apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: master-tang
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:text/plain;base64,ewogInVybCI6ICJodHRwOi8vdGFuZy5leGFtcGxlLmNvbTo3NTAwIiwgCiAidGhwIjogIlBMak55UmRHdzAzemxSb0dqUVlNYWhTWkd1OSIgCn0K (1)
        mode: 420
        overwrite: true
        path: /etc/clevis.json
  kernelArguments:
    - rd.neednet=1 (2)
1 Specify the Base64 encoded string that was generated in the preceding step.
2 Add the rd.neednet=1 kernel argument to bring the network up in the initramfs. This argument is required.
To configure encryption on the compute nodes, save the following machine config sample to a file in the <installation_directory>/openshift directory. For example, name the file 99-openshift-worker-tang-encryption.yaml:
:apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: worker-tang
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:text/plain;base64,ewogInVybCI6ICJodHRwOi8vdGFuZy5leGFtcGxlLmNvbTo3NTAwIiwgCiAidGhwIjogIlBMak55UmRHdzAzemxSb0dqUVlNYWhTWkd1OSIgCn0K (1)
        mode: 420
        overwrite: true
        path: /etc/clevis.json
  kernelArguments:
    - rd.neednet=1 (2)
1 Specify the Base64 encoded string that was generated in the preceding step.
2 Add the rd.neednet=1 kernel argument to bring the network up in the initramfs. This argument is required.
Create a backup copy of the YAML files. The original YAML files are consumed when you create the Ignition config files.
Continue with the remainder of the OKD installation.
Configuring chrony time service
You can set the time server and related settings used by the chrony time service (chronyd) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config.
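The mechanism is a simple Base64 round trip: the machine config carries your chrony.conf verbatim, just encoded. A minimal sketch (with a placeholder time source) shows that decoding the string recovers the contents unchanged:

```shell
# Encode a minimal chrony.conf, then decode it again to confirm the round trip.
conf='pool 0.rhel.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony'
b64=$(printf '%s\n' "$conf" | base64 -w 0)
printf '%s' "$b64" | base64 -d
```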
Procedure
Create the contents of the chrony.conf file and encode it as base64. For example:
$ cat << EOF | base64
pool 0.rhel.pool.ntp.org iburst (1)
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
EOF
1 Specify any valid, reachable time source, such as the one provided by your DHCP server. Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org, 2.rhel.pool.ntp.org, or 3.rhel.pool.ntp.org.
Example output
ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGli
L2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAv
dmFyL2xvZy9jaHJvbnkK
Create the MachineConfig object file, replacing the base64 string with the one you just created. This example adds the file to master nodes. You can change it to worker or make an additional MachineConfig for the worker role. Create MachineConfig files for each type of machine that your cluster uses:
$ cat << EOF > ./99-masters-chrony-configuration.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-masters-chrony-configuration
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 3.1.0
    networkd: {}
    passwd: {}
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGliL2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAvdmFyL2xvZy9jaHJvbnkK
        mode: 420
        overwrite: true
        path: /etc/chrony.conf
  osImageURL: ""
EOF
Make a backup copy of the configuration files.
Apply the configurations in one of two ways:
If the cluster is not up yet, after you generate manifest files, add this file to the <installation_directory>/openshift directory, and then continue to create the cluster.
If the cluster is already running, apply the file:
$ oc apply -f ./99-masters-chrony-configuration.yaml