- Advanced managed cluster configuration with PolicyGenTemplate resources
- Deploying additional changes to clusters
- Using PolicyGenTemplate CRs to override source CRs content
- Adding new content to the GitOps ZTP pipeline
- Configuring policy compliance evaluation timeouts for PolicyGenTemplate CRs
- Signalling ZTP cluster deployment completion with validator inform policies
- Configuring PTP fast events using PolicyGenTemplate CRs
- Configuring the Image Registry Operator for local caching of images
- Configuring bare-metal event monitoring using PolicyGenTemplate CRs
- Using hub templates in PolicyGenTemplate CRs
Advanced managed cluster configuration with PolicyGenTemplate resources
You can use PolicyGenTemplate CRs to deploy custom functionality in your managed clusters.
Deploying additional changes to clusters
If you require cluster configuration changes outside of the base GitOps ZTP pipeline configuration, there are three options:
Apply the additional configuration after the ZTP pipeline is complete
When the GitOps ZTP pipeline deployment is complete, the deployed cluster is ready for application workloads. At this point, you can install additional Operators and apply configurations specific to your requirements. Ensure that additional configurations do not negatively affect the performance of the platform or allocated CPU budget.
Add content to the ZTP library
The base source custom resources (CRs) that you deploy with the GitOps ZTP pipeline can be augmented with custom content as required.
Create extra manifests for the cluster installation
Extra manifests are applied during installation and make the installation process more efficient.
Providing additional source CRs or modifying existing source CRs can significantly impact the performance or CPU profile of OKD.
Additional resources
- See Customizing extra installation manifests in the ZTP GitOps pipeline for information about adding extra manifests.
Using PolicyGenTemplate CRs to override source CRs content
PolicyGenTemplate custom resources (CRs) allow you to overlay additional configuration details on top of the base source CRs provided with the GitOps plugin in the ztp-site-generate container. You can think of PolicyGenTemplate CRs as a logical merge or patch to the base CR. Use PolicyGenTemplate CRs to update a single field of the base CR, or overlay the entire contents of the base CR. You can update values and insert fields that are not in the base CR.
The following example procedure describes how to update fields in the generated PerformanceProfile CR for the reference configuration based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml file. Use the procedure as a basis for modifying other parts of the PolicyGenTemplate based on your requirements.
Prerequisites
- Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for Argo CD.
Procedure
Review the baseline source CR for existing content. You can review the source CRs listed in the reference PolicyGenTemplate CRs by extracting them from the zero touch provisioning (ZTP) container.
Create an /out folder:
$ mkdir -p ./out
Extract the source CRs:
$ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12.1 extract /home/ztp --tar | tar x -C ./out
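For example, you can list the extracted source CRs to confirm that the extraction succeeded (an optional check, not part of the reference procedure):
$ ls ./out/source-crs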
Review the baseline PerformanceProfile CR in ./out/source-crs/PerformanceProfile.yaml:
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
name: $name
annotations:
ran.openshift.io/ztp-deploy-wave: "10"
spec:
additionalKernelArgs:
- "idle=poll"
- "rcupdate.rcu_normal_after_boot=0"
cpu:
isolated: $isolated
reserved: $reserved
hugepages:
defaultHugepagesSize: $defaultHugepagesSize
pages:
- size: $size
count: $count
node: $node
machineConfigPoolSelector:
pools.operator.machineconfiguration.openshift.io/$mcp: ""
net:
userLevelNetworking: true
nodeSelector:
node-role.kubernetes.io/$mcp: ''
numa:
topologyPolicy: "restricted"
realTimeKernel:
enabled: true
Any fields in the source CR which contain $… are removed from the generated CR if they are not provided in the PolicyGenTemplate CR.
Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file. The following example PolicyGenTemplate CR stanza supplies appropriate CPU specifications, sets the hugepages configuration, and adds a new field that sets globallyDisableIrqLoadBalancing to false.
- fileName: PerformanceProfile.yaml
policyName: "config-policy"
metadata:
name: openshift-node-performance-profile
spec:
cpu:
# These must be tailored for the specific hardware platform
isolated: "2-19,22-39"
reserved: "0-1,20-21"
hugepages:
defaultHugepagesSize: 1G
pages:
- size: 1G
count: 10
globallyDisableIrqLoadBalancing: false
Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application.
Example output
The ZTP application generates an RHACM policy that contains the generated PerformanceProfile CR. The contents of that CR are derived by merging the metadata and spec contents from the PerformanceProfile entry in the PolicyGenTemplate onto the source CR. The resulting CR has the following content:
---
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
name: openshift-node-performance-profile
spec:
additionalKernelArgs:
- idle=poll
- rcupdate.rcu_normal_after_boot=0
cpu:
isolated: 2-19,22-39
reserved: 0-1,20-21
globallyDisableIrqLoadBalancing: false
hugepages:
defaultHugepagesSize: 1G
pages:
- count: 10
size: 1G
machineConfigPoolSelector:
pools.operator.machineconfiguration.openshift.io/master: ""
net:
userLevelNetworking: true
nodeSelector:
node-role.kubernetes.io/master: ""
numa:
topologyPolicy: restricted
realTimeKernel:
enabled: true
In the source CR, fields prefixed with $ are omitted from the generated CR when no value is provided for them in the PolicyGenTemplate CR. An exception to this is the $mcp variable, which is substituted with the value that you set for mcp in the PolicyGenTemplate CR. The example output shows $mcp resolved to master.
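As an optional check (not part of the reference procedure), you can confirm on the hub cluster that RHACM generated a policy from the PolicyGenTemplate CR. The namespace shown here is an assumption based on the ztp-group namespace used by the reference group PolicyGenTemplate CRs:
$ oc get policies -n ztp-group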
Adding new content to the GitOps ZTP pipeline
The source CRs in the GitOps ZTP site generator container provide a set of critical features and node tuning settings for RAN Distributed Unit (DU) applications. These are applied to the clusters that you deploy with ZTP. To add or modify existing source CRs in the ztp-site-generate container, rebuild the ztp-site-generate container and make it available to the hub cluster, typically from the disconnected registry associated with the hub cluster. Any valid OKD CR can be added.
Perform the following procedure to add new content to the ZTP pipeline.
Procedure
Create a directory containing a Containerfile and the source CR YAML files that you want to include in the updated ztp-site-generate container, for example:
ztp-update/
├── example-cr1.yaml
├── example-cr2.yaml
└── ztp-update.in
Add the following content to the ztp-update.in Containerfile:
FROM registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12
ADD example-cr2.yaml /kustomize/plugin/ran.openshift.io/v1/policygentemplate/source-crs/
ADD example-cr1.yaml /kustomize/plugin/ran.openshift.io/v1/policygentemplate/source-crs/
Open a terminal at the ztp-update/ folder and rebuild the container:
$ podman build -t ztp-site-generate-rhel8-custom:v4.12-custom-1 -f ztp-update.in .
Push the built container image to your disconnected registry, for example:
$ podman push localhost/ztp-site-generate-rhel8-custom:v4.12-custom-1 registry.example.com:5000/ztp-site-generate-rhel8-custom:v4.12-custom-1
Patch the Argo CD instance on the hub cluster to point to the newly built container image:
$ oc patch -n openshift-gitops argocd openshift-gitops --type=json -p '[{"op": "replace", "path":"/spec/repo/initContainers/0/image", "value": "registry.example.com:5000/ztp-site-generate-rhel8-custom:v4.12-custom-1"} ]'
When the Argo CD instance is patched, the openshift-gitops-repo-server pod automatically restarts.
Verification
Verify that the new openshift-gitops-repo-server pod has completed initialization and that the previous repo pod is terminated:
$ oc get pods -n openshift-gitops | grep openshift-gitops-repo-server
Example output
openshift-gitops-server-7df86f9774-db682 1/1 Running 1 28s
You must wait until the new openshift-gitops-repo-server pod has completed initialization and the previous pod is terminated before the newly added container image content is available.
Additional resources
- Alternatively, you can patch the ArgoCD instance as described in Configuring the hub cluster with ArgoCD by modifying argocd-openshift-gitops-patch.json with an updated initContainer image before applying the patch file.
Configuring policy compliance evaluation timeouts for PolicyGenTemplate CRs
Use Red Hat Advanced Cluster Management (RHACM) installed on a hub cluster to monitor and report on whether your managed clusters are compliant with applied policies. RHACM uses policy templates to apply predefined policy controllers and policies. Policy controllers are Kubernetes custom resource definition (CRD) instances.
You can override the default policy evaluation intervals with PolicyGenTemplate custom resources (CRs). You configure duration settings that define how long a ConfigurationPolicy CR can be in a state of policy compliance or non-compliance before RHACM re-evaluates the applied cluster policies.
The zero touch provisioning (ZTP) policy generator generates ConfigurationPolicy CR policies with pre-defined policy evaluation intervals. The default value for the noncompliant state is 10 seconds. The default value for the compliant state is 10 minutes. To disable the evaluation interval, set the value to never.
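For reference, these defaults correspond to the following evaluationInterval fragment in the generated ConfigurationPolicy CRs (a sketch that shows only the relevant fields):
spec:
  evaluationInterval:
    compliant: 10m
    noncompliant: 10s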
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have created a Git repository where you manage your custom site configuration data.
Procedure
To configure the evaluation interval for all policies in a PolicyGenTemplate CR, add evaluationInterval to the spec field, and then set the appropriate compliant and noncompliant values. For example:
spec:
evaluationInterval:
compliant: 30m
noncompliant: 20s
To configure the evaluation interval for the spec.sourceFiles object in a PolicyGenTemplate CR, add evaluationInterval to the sourceFiles field, for example:
spec:
sourceFiles:
- fileName: SriovSubscription.yaml
policyName: "sriov-sub-policy"
evaluationInterval:
compliant: never
noncompliant: 10s
Commit the PolicyGenTemplate CR files in the Git repository and push your changes.
Verification
Check that the managed spoke cluster policies are monitored at the expected intervals.
Log in as a user with cluster-admin privileges on the managed cluster.
Get the pods that are running in the open-cluster-management-agent-addon namespace. Run the following command:
$ oc get pods -n open-cluster-management-agent-addon
Example output
NAME READY STATUS RESTARTS AGE
config-policy-controller-858b894c68-v4xdb 1/1 Running 22 (5d8h ago) 10d
Check that the applied policies are being evaluated at the expected interval in the logs for the config-policy-controller pod:
$ oc logs -n open-cluster-management-agent-addon config-policy-controller-858b894c68-v4xdb
Example output
2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {"policy": "compute-1-config-policy-config"}
2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {"policy": "compute-1-common-compute-1-catalog-policy-config"}
Signalling ZTP cluster deployment completion with validator inform policies
Create a validator inform policy that signals when the zero touch provisioning (ZTP) installation and configuration of the deployed cluster is complete. This policy can be used for deployments of single-node OpenShift clusters, three-node clusters, and standard clusters.
Procedure
Create a standalone PolicyGenTemplate custom resource (CR) that contains the source file validatorCRs/informDuValidator.yaml. You only need one standalone PolicyGenTemplate CR for each cluster type. For example, this CR applies a validator inform policy for single-node OpenShift clusters:
Example single-node cluster validator inform policy CR (group-du-sno-validator-ranGen.yaml)
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
name: "group-du-sno-validator" (1)
namespace: "ztp-group" (2)
spec:
bindingRules:
group-du-sno: "" (3)
bindingExcludedRules:
ztp-done: "" (4)
mcp: "master" (5)
sourceFiles:
- fileName: validatorCRs/informDuValidator.yaml
remediationAction: inform (6)
policyName: "du-policy" (7)
1 The name of the PolicyGenTemplates object. This name is also used as part of the names for the placementBinding, placementRule, and policy that are created in the requested namespace.
2 This value should match the namespace used in the group PolicyGenTemplates.
3 The group-du-* label defined in bindingRules must exist in the SiteConfig files.
4 The label defined in bindingExcludedRules must be ztp-done:. The ztp-done label is used in coordination with the Topology Aware Lifecycle Manager.
5 mcp defines the MachineConfigPool object that is used in the source file validatorCRs/informDuValidator.yaml. It should be master for single-node and three-node cluster deployments and worker for standard cluster deployments.
6 Optional. The default value is inform.
7 This value is used as part of the name for the generated RHACM policy. The generated validator policy for the single-node example is group-du-sno-validator-du-policy.
Commit the PolicyGenTemplate CR file in your Git repository and push the changes.
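As an optional check after cluster configuration completes (a hedged suggestion based on how the ztp-done label is described above, not a step in the reference procedure), you can confirm on the hub cluster that the managed cluster has been labeled ztp-done:
$ oc get managedcluster <managed_cluster_name> -o jsonpath='{.metadata.labels}'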
Configuring PTP fast events using PolicyGenTemplate CRs
You can configure PTP fast events for vRAN clusters that are deployed using the GitOps Zero Touch Provisioning (ZTP) pipeline. Use PolicyGenTemplate custom resources (CRs) as the basis to create a hierarchy of configuration files tailored to your specific site requirements.
Prerequisites
- Create a Git repository where you manage your custom site configuration data.
Procedure
Add the following YAML into .spec.sourceFiles in the common-ranGen.yaml file to configure the AMQP Operator:
#AMQ interconnect operator for fast events
- fileName: AmqSubscriptionNS.yaml
policyName: "subscriptions-policy"
- fileName: AmqSubscriptionOperGroup.yaml
policyName: "subscriptions-policy"
- fileName: AmqSubscription.yaml
policyName: "subscriptions-policy"
Apply the following PolicyGenTemplate changes to group-du-3node-ranGen.yaml, group-du-sno-ranGen.yaml, or group-du-standard-ranGen.yaml files according to your requirements:
In .sourceFiles, add the PtpOperatorConfig CR file that configures the AMQ transport host to the config-policy:
- fileName: PtpOperatorConfigForEvent.yaml
policyName: "config-policy"
Configure the linuxptp and phc2sys for the PTP clock type and interface. For example, add the following stanza into .sourceFiles:
- fileName: PtpConfigSlave.yaml (1)
policyName: "config-policy"
metadata:
name: "du-ptp-slave"
spec:
profile:
- name: "slave"
interface: "ens5f1" (2)
ptp4lOpts: "-2 -s --summary_interval -4" (3)
phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" (4)
ptpClockThreshold: (5)
holdOverTimeout: 30 #secs
maxOffsetThreshold: 100 #nano secs
minOffsetThreshold: -100 #nano secs
1 Can be one of PtpConfigMaster.yaml, PtpConfigSlave.yaml, or PtpConfigSlaveCvl.yaml depending on your requirements. PtpConfigSlaveCvl.yaml configures linuxptp services for an Intel E810 Columbiaville NIC. For configurations based on group-du-sno-ranGen.yaml or group-du-3node-ranGen.yaml, use PtpConfigSlave.yaml.
2 Device-specific interface name.
3 You must append the --summary_interval -4 value to ptp4lOpts in .spec.sourceFiles.spec.profile to enable PTP fast events.
4 Required phc2sysOpts values. -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics.
5 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME (phc2sys) or master offset (ptp4l). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN. When the offset value is within this range, the PTP clock state is set to LOCKED.
Apply the following PolicyGenTemplate changes to your specific site YAML files, for example, example-sno-site.yaml:
In .sourceFiles, add the Interconnect CR file that configures the AMQ router to the config-policy:
- fileName: AmqInstance.yaml
policyName: "config-policy"
Merge any other required changes and files with your custom site repository.
Push the changes to your site configuration repository to deploy PTP fast events to new sites using GitOps ZTP.
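As an optional check after the policies are applied (a hedged suggestion, not part of the reference procedure), you can confirm on the managed cluster that the PTP configuration was created:
$ oc get ptpconfig -n openshift-ptp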
Additional resources
- For more information about how to install the AMQ Interconnect Operator, see Installing the AMQ messaging bus.
Configuring the Image Registry Operator for local caching of images
OKD manages image caching using a local registry. In edge computing use cases, clusters are often subject to bandwidth restrictions when communicating with centralized image registries, which might result in long image download times.
Long download times are unavoidable during initial deployment. Over time, there is a risk that CRI-O will erase the /var/lib/containers/storage directory in the case of an unexpected shutdown. To address long image download times, you can create a local image registry on remote managed clusters using GitOps ZTP. This is useful in edge computing scenarios where clusters are deployed at the far edge of the network.
Before you can set up the local image registry with GitOps ZTP, you need to configure disk partitioning in the SiteConfig CR that you use to install the remote managed cluster. After installation, you configure the local image registry using a PolicyGenTemplate CR. Then, the ZTP pipeline creates Persistent Volume (PV) and Persistent Volume Claim (PVC) CRs and patches the imageregistry configuration.
The local image registry can only be used for user application images and cannot be used for the OKD or Operator Lifecycle Manager operator images.
Additional resources
- For more information about container image registries, see OKD registry overview.
Configuring disk partitioning with SiteConfig
Configure disk partitioning for a managed cluster using a SiteConfig CR and GitOps ZTP. The disk partition details in the SiteConfig CR must match the underlying disk.
Use persistent naming for devices to avoid device names such as /dev/sda and /dev/sdb being switched at every reboot. The example below uses persistent /dev/disk/by-id device paths.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have created a Git repository where you manage your custom site configuration data for use with GitOps Zero Touch Provisioning (ZTP).
Procedure
Add the following YAML that describes the host disk partitioning to the SiteConfig CR that you use to install the managed cluster:
nodes:
rootDeviceHints:
wwn: "0x62cea7f05c98c2002708a0a22ff480ea"
diskPartition:
- device: /dev/disk/by-id/wwn-0x62cea7f05c98c2002708a0a22ff480ea (1)
partitions:
- mount_point: /var/imageregistry
size: 102500 (2)
start: 344844 (3)
1 This setting depends on the hardware. The setting can be a serial number or device name. The value must match the value set for rootDeviceHints.
2 The minimum value for size is 102500 MiB.
3 The minimum value for start is 25000 MiB. The total value of size and start must not exceed the disk size, or the installation will fail.
Save the SiteConfig CR and push it to the site configuration repo.
The ZTP pipeline provisions the cluster using the SiteConfig CR and configures the disk partition.
Configuring the image registry using PolicyGenTemplate CRs
Use PolicyGenTemplate (PGT) CRs to apply the CRs required to configure the image registry and patch the imageregistry configuration.
Prerequisites
- You have configured a disk partition in the managed cluster.
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have created a Git repository where you manage your custom site configuration data for use with GitOps Zero Touch Provisioning (ZTP).
Procedure
Configure the storage class, persistent volume claim, persistent volume, and image registry configuration in the appropriate PolicyGenTemplate CR. For example, to configure an individual site, add the following YAML to the file example-sno-site.yaml:
sourceFiles:
# storage class
- fileName: StorageClass.yaml
policyName: "sc-for-image-registry"
metadata:
name: image-registry-sc
annotations:
ran.openshift.io/ztp-deploy-wave: "100" (1)
# persistent volume claim
- fileName: StoragePVC.yaml
policyName: "pvc-for-image-registry"
metadata:
name: image-registry-pvc
namespace: openshift-image-registry
annotations:
ran.openshift.io/ztp-deploy-wave: "100"
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Gi
storageClassName: image-registry-sc
volumeMode: Filesystem
# persistent volume
- fileName: ImageRegistryPV.yaml (2)
policyName: "pv-for-image-registry"
metadata:
annotations:
ran.openshift.io/ztp-deploy-wave: "100"
- fileName: ImageRegistryConfig.yaml
policyName: "config-for-image-registry"
complianceType: musthave
metadata:
annotations:
ran.openshift.io/ztp-deploy-wave: "100"
spec:
storage:
pvc:
claim: "image-registry-pvc"
1 Set the appropriate value for ztp-deploy-wave depending on whether you are configuring image registries at the site, common, or group level. ztp-deploy-wave: "100" is suitable for development or testing because it allows you to group the referenced source files together.
2 In ImageRegistryPV.yaml, ensure that the spec.local.path field is set to /var/imageregistry to match the value set for the mount_point field in the SiteConfig CR.
Do not set complianceType: mustonlyhave for the - fileName: ImageRegistryConfig.yaml configuration. This can cause the registry pod deployment to fail.
Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application.
Verification
Use the following steps to troubleshoot errors with the local image registry on the managed clusters:
Verify successful login to the registry while logged in to the managed cluster. Run the following commands:
Export the managed cluster name:
$ cluster=<managed_cluster_name>
Get the managed cluster kubeconfig details:
$ oc get secret -n $cluster $cluster-admin-password -o jsonpath='{.data.password}' | base64 -d > kubeadmin-password-$cluster
Download and export the cluster kubeconfig:
$ oc get secret -n $cluster $cluster-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-$cluster && export KUBECONFIG=./kubeconfig-$cluster
Verify access to the image registry from the managed cluster. See “Accessing the registry”.
Check that the Config CRD in the imageregistry.operator.openshift.io group instance is not reporting errors. Run the following command while logged in to the managed cluster:
$ oc get image.config.openshift.io cluster -o yaml
Example output
apiVersion: config.openshift.io/v1
kind: Image
metadata:
annotations:
include.release.openshift.io/ibm-cloud-managed: "true"
include.release.openshift.io/self-managed-high-availability: "true"
include.release.openshift.io/single-node-developer: "true"
release.openshift.io/create-only: "true"
creationTimestamp: "2021-10-08T19:02:39Z"
generation: 5
name: cluster
resourceVersion: "688678648"
uid: 0406521b-39c0-4cda-ba75-873697da75a4
spec:
additionalTrustedCA:
name: acm-ice
Check that the PersistentVolumeClaim on the managed cluster is populated with data. Run the following command while logged in to the managed cluster:
$ oc get pv image-registry-sc
Check that the registry* pod is running and is located under the openshift-image-registry namespace.
$ oc get pods -n openshift-image-registry | grep registry*
Example output
cluster-image-registry-operator-68f5c9c589-42cfg 1/1 Running 0 8d
image-registry-5f8987879-6nx6h 1/1 Running 0 8d
Check that the disk partition on the managed cluster is correct:
Open a debug shell to the managed cluster:
$ oc debug node/sno-1.example.com
Run lsblk to check the host disk partitions:
sh-4.4# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 446.6G 0 disk
|-sda1 8:1 0 1M 0 part
|-sda2 8:2 0 127M 0 part
|-sda3 8:3 0 384M 0 part /boot
|-sda4 8:4 0 336.3G 0 part /sysroot
`-sda5 8:5 0 100.1G 0 part /var/imageregistry (1)
sdb 8:16 0 446.6G 0 disk
sr0 11:0 1 104M 0 rom
1 /var/imageregistry indicates that the disk is correctly partitioned.
Configuring bare-metal event monitoring using PolicyGenTemplate CRs
You can configure bare-metal hardware events for vRAN clusters that are deployed using the GitOps Zero Touch Provisioning (ZTP) pipeline.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
- Create a Git repository where you manage your custom site configuration data.
Procedure
To configure the AMQ Interconnect Operator and the Bare Metal Event Relay Operator, add the following YAML to spec.sourceFiles in the common-ranGen.yaml file:
# AMQ interconnect operator for fast events
- fileName: AmqSubscriptionNS.yaml
policyName: "subscriptions-policy"
- fileName: AmqSubscriptionOperGroup.yaml
policyName: "subscriptions-policy"
- fileName: AmqSubscription.yaml
policyName: "subscriptions-policy"
# Bare Metal Event Relay operator
- fileName: BareMetalEventRelaySubscriptionNS.yaml
policyName: "subscriptions-policy"
- fileName: BareMetalEventRelaySubscriptionOperGroup.yaml
policyName: "subscriptions-policy"
- fileName: BareMetalEventRelaySubscription.yaml
policyName: "subscriptions-policy"
Add the Interconnect CR to .spec.sourceFiles in the site configuration file, for example, the example-sno-site.yaml file:
- fileName: AmqInstance.yaml
policyName: "config-policy"
Add the HardwareEvent CR to spec.sourceFiles in your specific group configuration file, for example, in the group-du-sno-ranGen.yaml file:
- fileName: HardwareEvent.yaml
policyName: "config-policy"
spec:
nodeSelector: {}
transportHost: "amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local" (1)
logLevel: "info"
1 The transportHost URL is composed of the existing AMQ Interconnect CR name and namespace. For example, in transportHost: "amqp://amq-router.amq-router.svc.cluster.local", the AMQ Interconnect name and namespace are both set to amq-router.
Each baseboard management controller (BMC) requires a single HardwareEvent resource only.
Commit the PolicyGenTemplate change in Git, and then push the changes to your site configuration repository to deploy bare-metal events monitoring to new sites using GitOps ZTP.
Create the Redfish Secret by running the following command:
$ oc -n openshift-bare-metal-events create secret generic redfish-basic-auth \
--from-literal=username=<bmc_username> --from-literal=password=<bmc_password> \
--from-literal=hostaddr="<bmc_host_ip_addr>"
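Optionally (a hedged suggestion, not part of the reference procedure), verify that the Secret exists in the expected namespace:
$ oc get secret redfish-basic-auth -n openshift-bare-metal-events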
Additional resources
- For more information about how to install the Bare Metal Event Relay, see Installing the Bare Metal Event Relay using the CLI.
- For more information about how to create the username, password, and host IP address for the BMC secret, see Creating the bare-metal event and Secret CRs.
Using hub templates in PolicyGenTemplate CRs
Topology Aware Lifecycle Manager supports partial Red Hat Advanced Cluster Management (RHACM) hub cluster template functions in configuration policies used with GitOps ZTP.
Hub-side cluster templates allow you to define configuration policies that can be dynamically customized to the target clusters. This reduces the need to create separate policies for many clusters with similar configurations but with different values.
Policy templates are restricted to the same namespace as the namespace where the policy is defined. This means that you must create the objects referenced in the hub template in the same namespace where the policy is created.
The following supported hub template functions are available for use in GitOps ZTP with TALM:
fromConfigmap returns the value of the provided data key in the named ConfigMap resource.
There is a 1 MiB size limit for ConfigMap CRs. The effective size for ConfigMap CRs is further limited by the last-applied-configuration annotation. To avoid the last-applied-configuration limitation, add the following annotation to the template ConfigMap:
argocd.argoproj.io/sync-options: Replace=true
base64enc returns the base64-encoded value of the input string
base64dec returns the decoded value of the base64-encoded input string
indent returns the input string with added indent spaces
autoindent returns the input string with added indent spaces based on the spacing used in the parent template
toInt casts and returns the integer value of the input value
toBool converts the input string into a boolean value, and returns the boolean
Various open source community functions are also available for use with GitOps ZTP.
Example hub templates
The following code examples are valid hub templates. Each of these templates returns values from the ConfigMap CR with the name test-config in the default namespace.
Returns the value with the key common-key:
{{hub fromConfigMap "default" "test-config" "common-key" hub}}
Returns a string by using the concatenated value of the .ManagedClusterName field and the string -name:
{{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) hub}}
Casts and returns a boolean value from the concatenated value of the .ManagedClusterName field and the string -name:
{{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) | toBool hub}}
Casts and returns an integer value from the concatenated value of the .ManagedClusterName field and the string -name:
{{hub (printf "%s-name" .ManagedClusterName) | fromConfigMap "default" "test-config" | toInt hub}}
Specifying host NICs in site PolicyGenTemplate CRs with hub cluster templates
You can manage host NICs in a single ConfigMap CR and use hub cluster templates to populate the custom NIC values in the generated policies that get applied to the cluster hosts. Using hub cluster templates in site PolicyGenTemplate (PGT) CRs means that you do not need to create multiple single-site PGT CRs for each site.
The following example shows you how to use a single ConfigMap CR to manage cluster host NICs and apply them to the cluster as policies by using a single PolicyGenTemplate site CR.
When you use the fromConfigmap function, the printf variable is only available for the template resource data key fields. You cannot use it with the name and namespace fields.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the GitOps ZTP ArgoCD application.
Procedure
Create a ConfigMap resource that describes the NICs for a group of hosts. For example:
apiVersion: v1
kind: ConfigMap
metadata:
name: sriovdata
namespace: ztp-site
annotations:
argocd.argoproj.io/sync-options: Replace=true (1)
data:
example-sno-du_fh-numVfs: "8"
example-sno-du_fh-pf: ens1f0
example-sno-du_fh-priority: "10"
example-sno-du_fh-vlan: "140"
example-sno-du_mh-numVfs: "8"
example-sno-du_mh-pf: ens3f0
example-sno-du_mh-priority: "10"
example-sno-du_mh-vlan: "150"
1 The argocd.argoproj.io/sync-options annotation is required only if the ConfigMap is larger than 1 MiB in size.
The ConfigMap must be in the same namespace as the policy that has the hub template substitution.
Commit the ConfigMap CR in Git, and then push to the Git repository being monitored by the Argo CD application.
Create a site PGT CR that uses templates to pull the required data from the ConfigMap object. For example:
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
name: "site"
namespace: "ztp-site"
spec:
remediationAction: inform
bindingRules:
group-du-sno: ""
mcp: "master"
sourceFiles:
- fileName: SriovNetwork.yaml
policyName: "config-policy"
metadata:
name: "sriov-nw-du-fh"
spec:
resourceName: du_fh
vlan: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-vlan" .ManagedClusterName) | toInt hub}}'
- fileName: SriovNetworkNodePolicy.yaml
policyName: "config-policy"
metadata:
name: "sriov-nnp-du-fh"
spec:
deviceType: netdevice
isRdma: true
nicSelector:
pfNames:
- '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-pf" .ManagedClusterName) | autoindent hub}}'
numVfs: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-numVfs" .ManagedClusterName) | toInt hub}}'
priority: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-priority" .ManagedClusterName) | toInt hub}}'
resourceName: du_fh
- fileName: SriovNetwork.yaml
policyName: "config-policy"
metadata:
name: "sriov-nw-du-mh"
spec:
resourceName: du_mh
vlan: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-vlan" .ManagedClusterName) | toInt hub}}'
- fileName: SriovNetworkNodePolicy.yaml
policyName: "config-policy"
metadata:
name: "sriov-nnp-du-mh"
spec:
deviceType: vfio-pci
isRdma: false
nicSelector:
pfNames:
- '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-pf" .ManagedClusterName) hub}}'
numVfs: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-numVfs" .ManagedClusterName) | toInt hub}}'
priority: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-priority" .ManagedClusterName) | toInt hub}}'
resourceName: du_mh
Commit the site PolicyGenTemplate CR in Git and push to the Git repository that is monitored by the ArgoCD application.
Subsequent changes to the referenced ConfigMap CR are not automatically synced to the applied policies. You need to manually sync the new ConfigMap changes to update existing PolicyGenTemplate CRs. See “Syncing new ConfigMap changes to existing PolicyGenTemplate CRs”.
Specifying VLAN IDs in group PolicyGenTemplate CRs with hub cluster templates
You can manage VLAN IDs for managed clusters in a single ConfigMap CR and use hub cluster templates to populate the VLAN IDs in the generated policies that get applied to the clusters.
The following example shows you how to manage VLAN IDs in a single ConfigMap CR and apply them in individual cluster policies by using a single PolicyGenTemplate group CR.
When using the fromConfigmap function, the printf variable is only available for the template resource data key fields. You cannot use it with the name and namespace fields.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
Procedure
Create a ConfigMap CR that describes the VLAN IDs for a group of cluster hosts. For example:
apiVersion: v1
kind: ConfigMap
metadata:
name: site-data
namespace: ztp-group
annotations:
argocd.argoproj.io/sync-options: Replace=true (1)
data:
site-1-vlan: "101"
site-2-vlan: "234"
1 The argocd.argoproj.io/sync-options annotation is required only if the ConfigMap is larger than 1 MiB in size.
The ConfigMap must be in the same namespace as the policy that has the hub template substitution.
Commit the ConfigMap CR in Git, and then push to the Git repository being monitored by the Argo CD application.
Create a group PGT CR that uses a hub template to pull the required VLAN IDs from the ConfigMap object. For example, add the following YAML snippet to the group PGT CR:
- fileName: SriovNetwork.yaml
policyName: "config-policy"
metadata:
name: "sriov-nw-du-mh"
annotations:
ran.openshift.io/ztp-deploy-wave: "10"
spec:
resourceName: du_mh
vlan: '{{hub fromConfigMap "" "site-data" (printf "%s-vlan" .ManagedClusterName) | toInt hub}}'
Commit the group PolicyGenTemplate CR in Git, and then push to the Git repository being monitored by the Argo CD application.
Subsequent changes to the referenced ConfigMap CR are not automatically synced to the applied policies. You need to manually sync the new ConfigMap changes to update existing PolicyGenTemplate CRs. See “Syncing new ConfigMap changes to existing PolicyGenTemplate CRs”.
Syncing new ConfigMap changes to existing PolicyGenTemplate CRs
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have created a PolicyGenTemplate CR that pulls information from a ConfigMap CR using hub cluster templates.
Procedure
Update the contents of your ConfigMap CR, and apply the changes in the hub cluster.
To sync the contents of the updated ConfigMap CR to the deployed policy, do either of the following:
Option 1: Delete the existing policy. ArgoCD uses the PolicyGenTemplate CR to immediately recreate the deleted policy. For example, run the following command:
$ oc delete policy <policy_name> -n <policy_namespace>
Option 2: Apply a special annotation policy.open-cluster-management.io/trigger-update to the policy with a different value every time that you update the ConfigMap. For example:
$ oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update="1"
You must apply the updated policy for the changes to take effect. For more information, see Special annotation for reprocessing.
Optional: If it exists, delete the ClusterGroupUpgrade CR that contains the policy. For example:
$ oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace>
Create a new ClusterGroupUpgrade CR that includes the policy to apply with the updated ConfigMap changes. For example, add the following YAML to the file cgr-example.yaml:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
name: <cgr_name>
namespace: <policy_namespace>
spec:
managedPolicies:
- <managed_policy>
enable: true
clusters:
- <managed_cluster_1>
- <managed_cluster_2>
remediationStrategy:
maxConcurrency: 2
timeout: 240
Apply the updated policy:
$ oc apply -f cgr-example.yaml
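Optionally (a hedged suggestion that reuses the placeholder names from the example above), you can monitor the remediation status of the new ClusterGroupUpgrade CR:
$ oc get clustergroupupgrade <cgr_name> -n <policy_namespace> -o yaml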