- Manually installing a single-node OpenShift cluster with ZTP
- Generating GitOps ZTP installation and configuration CRs manually
- Creating the managed bare-metal host secrets
- Configuring Discovery ISO kernel arguments for manual installations using GitOps ZTP
- Installing a single managed cluster
- Monitoring the managed cluster installation status
- Troubleshooting the managed cluster
- Installing LVM Storage by using the web console
- Installing LVM Storage by using the CLI
- RHACM generated cluster installation CRs reference
Manually installing a single-node OpenShift cluster with ZTP
You can deploy a managed single-node OpenShift cluster by using Red Hat Advanced Cluster Management (RHACM) and the assisted service.
If you are creating multiple managed clusters, use the SiteConfig method described in “Deploying far edge sites”.
The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended cluster configuration for vDU application workloads.
Generating GitOps ZTP installation and configuration CRs manually
Use the generator entrypoint for the ztp-site-generate container to generate the site installation and configuration custom resources (CRs) for a cluster based on SiteConfig and PolicyGenTemplate CRs.
Prerequisites
You have installed the OpenShift CLI (oc).
You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
Create an output folder by running the following command:
$ mkdir -p ./out
Export the argocd directory from the ztp-site-generate container image:
$ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13 extract /home/ztp --tar | tar x -C ./out
The ./out directory has the reference PolicyGenTemplate and SiteConfig CRs in the out/argocd/example/ folder.
Example output
out
└── argocd
└── example
├── policygentemplates
│ ├── common-ranGen.yaml
│ ├── example-sno-site.yaml
│ ├── group-du-sno-ranGen.yaml
│ ├── group-du-sno-validator-ranGen.yaml
│ ├── kustomization.yaml
│ └── ns.yaml
└── siteconfig
├── example-sno.yaml
├── KlusterletAddonConfigOverride.yaml
└── kustomization.yaml
Create an output folder for the site installation CRs:
$ mkdir -p ./site-install
Modify the example SiteConfig CR for the cluster type that you want to install. Copy example-sno.yaml to site-1-sno.yaml and modify the CR to match the details of the site and bare-metal host that you want to install, for example:
Example single-node OpenShift cluster SiteConfig CR
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
name: "<site_name>"
namespace: "<site_name>"
spec:
baseDomain: "example.com"
pullSecretRef:
name: "assisted-deployment-pull-secret" (1)
clusterImageSetNameRef: "openshift-4.13" (2)
sshPublicKey: "ssh-rsa AAAA..." (3)
clusters:
- clusterName: "<site_name>"
networkType: "OVNKubernetes"
clusterLabels: (4)
common: true
group-du-sno: ""
sites : "<site_name>"
clusterNetwork:
- cidr: 1001:1::/48
hostPrefix: 64
machineNetwork:
- cidr: 1111:2222:3333:4444::/64
serviceNetwork:
- 1001:2::/112
additionalNTPSources:
- 1111:2222:3333:4444::2
#crTemplates:
# KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" (5)
nodes:
- hostName: "example-node.example.com" (6)
role: "master"
bmcAddress: idrac-virtualmedia://<out_of_band_ip>/<system_id>/ (7)
bmcCredentialsName:
name: "bmh-secret" (8)
bootMACAddress: "AA:BB:CC:DD:EE:11"
bootMode: "UEFI" (9)
rootDeviceHints:
wwn: "0x11111000000asd123"
cpuset: "0-1,52-53" (10)
nodeNetwork: (11)
interfaces:
- name: eno1
macAddress: "AA:BB:CC:DD:EE:11"
config:
interfaces:
- name: eno1
type: ethernet
state: up
ipv4:
enabled: false
ipv6: (12)
enabled: true
address:
- ip: 1111:2222:3333:4444::aaaa:1
prefix-length: 64
dns-resolver:
config:
search:
- example.com
server:
- 1111:2222:3333:4444::2
routes:
config:
- destination: ::/0
next-hop-interface: eno1
next-hop-address: 1111:2222:3333:4444::1
table-id: 254
1 Create the assisted-deployment-pull-secret CR with the same namespace as the SiteConfig CR.
2 clusterImageSetNameRef defines an image set available on the hub cluster. To see the list of supported versions on your hub cluster, run oc get clusterimagesets.
3 Configure the SSH public key used to access the cluster.
4 Cluster labels must correspond to the bindingRules field in the PolicyGenTemplate CRs that you define. For example, policygentemplates/common-ranGen.yaml applies to all clusters with common: true set, and policygentemplates/group-du-sno-ranGen.yaml applies to all clusters with group-du-sno: "" set.
5 Optional. The CR specified under KlusterletAddonConfig is used to override the default KlusterletAddonConfig that is created for the cluster.
6 For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with role: master and two or more hosts defined with role: worker.
7 BMC address that you use to access the host. Applies to all cluster types.
8 Name of the bmh-secret CR that you separately create with the host BMC credentials. When creating the bmh-secret CR, use the same namespace as the SiteConfig CR that provisions the host.
9 Configures the boot mode for the host. The default value is UEFI. Use UEFISecureBoot to enable secure boot on the host.
10 cpuset must match the value set in the cluster PerformanceProfile CR spec.cpu.reserved field for workload partitioning.
11 Specifies the network settings for the node.
12 Configures the IPv6 address for the host. For single-node OpenShift clusters with static IP addresses, the node-specific API and Ingress IPs should be the same.
Generate the day-0 installation CRs by processing the modified SiteConfig CR site-1-sno.yaml by running the following command:
$ podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13.1 generator install site-1-sno.yaml /output
Example output
site-install
└── site-1-sno
├── site-1_agentclusterinstall_example-sno.yaml
├── site-1-sno_baremetalhost_example-node1.example.com.yaml
├── site-1-sno_clusterdeployment_example-sno.yaml
├── site-1-sno_configmap_example-sno.yaml
├── site-1-sno_infraenv_example-sno.yaml
├── site-1-sno_klusterletaddonconfig_example-sno.yaml
├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml
├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml
├── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml
├── site-1-sno_managedcluster_example-sno.yaml
├── site-1-sno_namespace_example-sno.yaml
└── site-1-sno_nmstateconfig_example-node1.example.com.yaml
Optional: Generate just the day-0 MachineConfig installation CRs for a particular cluster type by processing the reference SiteConfig CR with the -E option. For example, run the following commands:
Create an output folder for the MachineConfig CRs:
$ mkdir -p ./site-machineconfig
Generate the MachineConfig installation CRs:
$ podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13.1 generator install -E site-1-sno.yaml /output
Example output
site-machineconfig
└── site-1-sno
├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml
├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml
└── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml
Generate and export the day-2 configuration CRs using the reference PolicyGenTemplate CRs from the previous step. Run the following commands:
Create an output folder for the day-2 CRs:
$ mkdir -p ./ref
Generate and export the day-2 configuration CRs:
$ podman run -it --rm -v `pwd`/out/argocd/example/policygentemplates:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13.1 generator config -N . /output
The command generates example group and site-specific PolicyGenTemplate CRs for single-node OpenShift, three-node clusters, and standard clusters in the ./ref folder.
Example output
ref
└── customResource
├── common
├── example-multinode-site
├── example-sno
├── group-du-3node
├── group-du-3node-validator
│ └── Multiple-validatorCRs
├── group-du-sno
├── group-du-sno-validator
├── group-du-standard
└── group-du-standard-validator
└── Multiple-validatorCRs
Use the generated CRs as the basis for the CRs that you use to install the cluster. You apply the installation CRs to the hub cluster as described in “Installing a single managed cluster”. The configuration CRs can be applied to the cluster after cluster installation is complete.
Creating the managed bare-metal host secrets
Add the required Secret
custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the GitOps Zero Touch Provisioning (ZTP) pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry.
The secrets are referenced from the SiteConfig CR by name. The namespace must match the SiteConfig namespace.
Procedure
Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators:
Save the following YAML as the file example-sno-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
name: example-sno-bmc-secret
namespace: example-sno (1)
data: (2)
password: <base64_password>
username: <base64_username>
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
name: pull-secret
namespace: example-sno (3)
data:
.dockerconfigjson: <pull_secret> (4)
type: kubernetes.io/dockerconfigjson
1 Must match the namespace configured in the related SiteConfig CR.
2 Base64-encoded values for password and username.
3 Must match the namespace configured in the related SiteConfig CR.
4 Base64-encoded pull secret.
Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster.
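For example, you can produce the Base64-encoded values with the base64 utility; this is a minimal sketch, and the credential strings shown here are placeholders:
$ echo -n '<bmc_username>' | base64
$ echo -n '<bmc_password>' | base64
An entry in the kustomization.yaml file might then look like the following, assuming the secrets file is in the same directory:
resources:
- example-sno-secret.yaml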
Configuring Discovery ISO kernel arguments for manual installations using GitOps ZTP
The GitOps Zero Touch Provisioning (ZTP) workflow uses the Discovery ISO as part of the OKD installation process on managed bare-metal hosts. You can edit the InfraEnv
resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier
kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation.
In OKD 4.13, you can only add kernel arguments. You cannot replace or delete kernel arguments.
Prerequisites
You have installed the OpenShift CLI (oc).
You have logged in to the hub cluster as a user with cluster-admin privileges.
You have manually generated the installation and configuration custom resources (CRs).
Procedure
- Edit the spec.kernelArguments specification in the InfraEnv CR to configure kernel arguments:
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
name: <cluster_name>
namespace: <cluster_name>
spec:
kernelArguments:
- operation: append (1)
value: audit=0 (2)
- operation: append
value: trace=1
clusterRef:
name: <cluster_name>
namespace: <cluster_name>
pullSecretRef:
name: pull-secret
1 Specify the append operation to add a kernel argument.
2 Specify the kernel argument you want to configure. This example configures the audit kernel argument and the trace kernel argument.
The SiteConfig CR generates the InfraEnv resource as part of the day-0 installation CRs.
Verification
To verify that the kernel arguments are applied, you can SSH to the target host after the Discovery image verifies that OKD is ready for installation, but before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file.
Begin an SSH session with the target host:
$ ssh -i /path/to/privatekey core@<host_name>
View the system’s kernel arguments by using the following command:
$ cat /proc/cmdline
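To narrow the output down to the arguments that you added, you can filter the kernel command line. This is an optional check, not part of the documented procedure, and assumes the audit and trace arguments from the earlier example:
$ tr ' ' '\n' < /proc/cmdline | grep -E 'audit|trace'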
Installing a single managed cluster
You can manually deploy a single managed cluster using the assisted service and Red Hat Advanced Cluster Management (RHACM).
Prerequisites
You have installed the OpenShift CLI (oc).
You have logged in to the hub cluster as a user with cluster-admin privileges.
You have created the baseboard management controller (BMC) Secret and the image pull-secret Secret custom resources (CRs). See “Creating the managed bare-metal host secrets” for details.
Your target bare-metal host meets the networking and hardware requirements for managed clusters.
Procedure
Create a ClusterImageSet for each specific cluster version to be deployed, for example clusterImageSet-4.13.yaml. A ClusterImageSet has the following format:
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
name: openshift-4.13.0 (1)
spec:
releaseImage: quay.io/openshift-release-dev/ocp-release:4.13.0-x86_64 (2)
1 The descriptive version that you want to deploy.
2 Specifies the releaseImage to deploy and determines the operating system image version. The discovery ISO is based on the image version as set by releaseImage, or the latest version if the exact version is unavailable.
Apply the clusterImageSet CR:
$ oc apply -f clusterImageSet-4.13.yaml
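To confirm that the hub cluster can resolve the new image set, you can list the available ClusterImageSet resources; this is an optional check:
$ oc get clusterimagesets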
Create the Namespace CR in the cluster-namespace.yaml file:
apiVersion: v1
kind: Namespace
metadata:
name: <cluster_name> (1)
labels:
name: <cluster_name> (1)
1 The name of the managed cluster to provision.
Apply the Namespace CR by running the following command:
$ oc apply -f cluster-namespace.yaml
Apply the generated day-0 CRs that you extracted from the ztp-site-generate container and customized to meet your requirements:
$ oc apply -R ./site-install/site-1-sno
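As an optional sanity check, you can confirm that the namespaced day-0 resources were created; the exact set depends on the CRs that you generated:
$ oc get agentclusterinstall,clusterdeployment,infraenv,baremetalhost,nmstateconfig -n <cluster_name>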
Monitoring the managed cluster installation status
Ensure that cluster provisioning was successful by checking the cluster status.
Prerequisites
- All of the custom resources have been configured and provisioned, and the Agent custom resource is created on the hub for the managed cluster.
Procedure
Check the status of the managed cluster:
$ oc get managedcluster
True indicates the managed cluster is ready.
Check the agent status:
$ oc get agent -n <cluster_name>
Use the describe command to provide an in-depth description of the agent’s condition. Statuses to be aware of include BackendError, InputError, ValidationsFailing, InstallationFailed, and AgentIsConnected. These statuses are relevant to the Agent and AgentClusterInstall custom resources.
$ oc describe agent -n <cluster_name>
Check the cluster provisioning status:
$ oc get agentclusterinstall -n <cluster_name>
Use the describe command to provide an in-depth description of the cluster provisioning status:
$ oc describe agentclusterinstall -n <cluster_name>
Check the status of the managed cluster’s add-on services:
$ oc get managedclusteraddon -n <cluster_name>
Retrieve the authentication information of the kubeconfig file for the managed cluster:
$ oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig
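You can then use the retrieved kubeconfig to query the managed cluster directly, for example:
$ oc --kubeconfig <directory>/<cluster_name>-kubeconfig get nodes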
Troubleshooting the managed cluster
Use this procedure to diagnose any installation issues that might occur with the managed cluster.
Procedure
Check the status of the managed cluster:
$ oc get managedcluster
Example output
NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE
SNO-cluster true True True 2d19h
If the status in the AVAILABLE column is True, the managed cluster is being managed by the hub.
If the status in the AVAILABLE column is Unknown, the managed cluster is not being managed by the hub. Use the following steps to get more information.
Check the AgentClusterInstall install status:
$ oc get clusterdeployment -n <cluster_name>
Example output
NAME PLATFORM REGION CLUSTERTYPE INSTALLED INFRAID VERSION POWERSTATE AGE
Sno0026    agent-baremetal                            false                            Initialized   2d14h
If the status in the INSTALLED column is false, the installation was unsuccessful.
If the installation failed, enter the following command to review the status of the AgentClusterInstall resource:
$ oc describe agentclusterinstall -n <cluster_name> <cluster_name>
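Reviewing recent events in the cluster namespace can also help to narrow down a failure. This is an optional check that is not part of the documented procedure:
$ oc get events -n <cluster_name> --sort-by='.lastTimestamp'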
Resolve the errors and reset the cluster:
Remove the cluster’s managed cluster resource:
$ oc delete managedcluster <cluster_name>
Remove the cluster’s namespace:
$ oc delete namespace <cluster_name>
This deletes all of the namespace-scoped custom resources created for this cluster. You must wait for the ManagedCluster CR deletion to complete before proceeding.
Recreate the custom resources for the managed cluster.
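For example, assuming that you kept the day-0 CRs generated earlier in ./site-install/site-1-sno, you can reapply them after the namespace deletion completes:
$ oc apply -R ./site-install/site-1-sno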
Installing LVM Storage by using the web console
You can use the OKD web console to install logical volume manager storage (LVM Storage).
Prerequisites
Install the latest version of the RHACM Operator.
Log in as a user with
cluster-admin
privileges.
Procedure
In the OKD web console, navigate to Operators → OperatorHub.
Search for LVM Storage in the list of available Operators, and then click Install.
Keep the default selection of Installation mode (“All namespaces on the cluster (default)”) and Installed Namespace (“openshift-operators”) to ensure that the Operator is installed properly.
Click Install.
Verification
To confirm that the installation is successful:
Navigate to the Operators → Installed Operators page.
Check that the Operator is installed in the All Namespaces namespace and its status is Succeeded.
If the Operator is not installed successfully:
Navigate to the Operators → Installed Operators page and inspect the Status column for any errors or failures.
Navigate to the Workloads → Pods page and check the logs in any containers in the lvms-operator pod that are reporting issues.
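If you prefer the CLI, you can also check the Operator status from a terminal. This sketch assumes the default openshift-operators installation namespace selected during installation:
$ oc get csv -n openshift-operators | grep -i lvms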
Installing LVM Storage by using the CLI
You can use the OpenShift CLI (oc) to install LVM Storage.
Prerequisites
Install the OpenShift CLI (oc).
Install the latest version of the RHACM Operator.
Log in as a user with cluster-admin privileges.
Procedure
Create the openshift-storage namespace by running the following command:
$ oc create ns openshift-storage
Create an OperatorGroup CR.
Define the OperatorGroup CR and save the YAML file, for example, lvms-operatorgroup.yaml:
Example OperatorGroup CR
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: lvms-operator-operatorgroup
namespace: openshift-storage
annotations:
ran.openshift.io/ztp-deploy-wave: "2"
spec:
targetNamespaces:
- openshift-storage
Create the OperatorGroup CR by running the following command:
$ oc create -f lvms-operatorgroup.yaml
Create a Subscription CR.
Define the Subscription CR and save the YAML file, for example, lvms-subscription.yaml:
Example Subscription CR
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: lvms-operator
namespace: openshift-storage
annotations:
ran.openshift.io/ztp-deploy-wave: "2"
spec:
channel: "stable-4.13"
name: lvms-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
installPlanApproval: Manual
Create the Subscription CR by running the following command:
$ oc create -f lvms-subscription.yaml
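Because the Subscription sets installPlanApproval: Manual, the Operator does not install until you approve its install plan. The following is a minimal sketch of approving it from the CLI, assuming a single pending install plan in the namespace; the install plan name is a placeholder:
$ oc get installplan -n openshift-storage
$ oc patch installplan <install_plan_name> -n openshift-storage --type merge --patch '{"spec":{"approved":true}}'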
Verification
Verify that the installation succeeded by inspecting the CSV resource:
$ oc get csv -n openshift-storage
Example output
NAME DISPLAY VERSION REPLACES PHASE
lvms-operator.4.13.x   LVM Storage   4.13.x            Succeeded
Verify that LVM Storage is up and running:
$ oc get deploy -n openshift-storage
Example output
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
openshift-storage lvms-operator 1/1 1 1 14s
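Optionally, you can wait for the Deployment to become available instead of polling manually; this is a sketch, and the timeout value is arbitrary:
$ oc wait deployment/lvms-operator -n openshift-storage --for=condition=Available --timeout=300s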
RHACM generated cluster installation CRs reference
Red Hat Advanced Cluster Management (RHACM) supports deploying OKD on single-node clusters, three-node clusters, and standard clusters with a specific set of installation custom resources (CRs) that you generate using SiteConfig
CRs for each site.
Every managed cluster has its own namespace, and all of the installation CRs except for ManagedCluster and ClusterImageSet are under that namespace. ManagedCluster and ClusterImageSet are cluster-scoped, not namespace-scoped. The namespace and the CR names match the cluster name.
The following table lists the installation CRs that are automatically applied by the RHACM assisted service when it installs clusters using the SiteConfig
CRs that you configure.
CR | Description | Usage |
---|---|---|
BareMetalHost | Contains the connection information for the Baseboard Management Controller (BMC) of the target bare-metal host. | Provides access to the BMC to load and start the discovery image on the target server by using the Redfish protocol. |
InfraEnv | Contains information for installing OKD on the target bare-metal host. | Used with ClusterDeployment to generate the discovery ISO for the managed cluster. |
AgentClusterInstall | Specifies details of the managed cluster configuration such as networking and the number of control plane nodes. Displays the cluster kubeconfig and credentials when the installation is complete. | Specifies the managed cluster configuration information and provides status during the installation of the cluster. |
ClusterDeployment | References the AgentClusterInstall CR to use. | Used with InfraEnv to generate the discovery ISO for the managed cluster. |
NMStateConfig | Provides network configuration information such as MAC address to IP mapping, DNS server, default route, and other network settings. | Sets up a static IP address for the managed cluster’s Kube API server. |
Agent | Contains hardware information about the target bare-metal host. | Created automatically on the hub when the target machine’s discovery image boots. |
ManagedCluster | When a cluster is managed by the hub, it must be imported and known. This Kubernetes object provides that interface. | The hub uses this resource to manage and show the status of managed clusters. |
KlusterletAddonConfig | Contains the list of services provided by the hub to be deployed to the ManagedCluster resource. | Tells the hub which addon services to deploy to the ManagedCluster resource. |
Namespace | Logical space for ManagedCluster, Secret, AgentClusterInstall, and KlusterletAddonConfig CRs that exist on the hub. Unique per site. | Propagates resources to the ManagedCluster. |
Secret | Two CRs are created: BMC Secret and Image Pull Secret. | BMC Secret authenticates into the target bare-metal host. Image Pull Secret contains authentication information for the OKD image that is installed on the target bare-metal host. |
ClusterImageSet | Contains OKD image information such as the repository and image name. | Passed into resources to provide OKD images. |